| title | description | essay | authors | source_url | thumbnail_url |
|---|---|---|---|---|---|
Demography and migration | Should nations stay within their historical boundaries, or change as their populations do? Kosovo is a cautionary tale | There are only two questions in politics: who decides? and who decides who decides? Every country solves these questions in its own way, be it through democracy, autocracy or dictatorship. But however it answers, the same dilemma emerges again at a deeper level. Who gets to say what is or is not a country? For most of human history, nation states as we now recognise them did not exist. Territories were controlled by powerful local people, who in turn pledged allegiance to distant authorities, favouring whichever one their circumstances suited. In Europe, the tensions in this system eventually led to the Thirty Years’ War, which killed eight million people and only ended in 1648 with a thorough revision of the relationship between land, people and power. The resulting set of treaties, known as the Peace of Westphalia, introduced two novel ideas: sovereignty and territorial integrity. Kings and queens had ‘their’ people and associated territory; beyond their own borders, they should not meddle. Under this new dispensation, the modern state emerged as an entity in itself, distinct from its ruler. A dense forest of international laws and procedures grew up. At the heart of this forest, two principles circle each other like tigers. The first is self-determination: the idea that an identified ‘people’ has the right to run its own affairs within its own state. The other is territorial integrity: the notion that the borders of an existing state should be difficult to change. Self-determination is dynamic or turbulent: it is all about overturning political conditions deemed to run against the will of the local inhabitants. Territorial integrity is pragmatic and cautious: better the devil you know. States have a keen instinct for self-preservation; they don’t want their existing borders to be challenged except under tightly controlled conditions, so it is not easy for self-determination to trump territorial integrity in any given case: where might such a precedent lead? Nevertheless, the surge of events sometimes means that existing borders no longer work. When such a situation arises, the best way forward is general agreement between powerful local leaders and the wider international community that a new order is needed. The result, in the best case, is new borders that all other states recognise. The convulsive collapse of European communism led to 15 new states emerging more or less peacefully from the former Soviet Union. Czechoslovakia likewise divided politely into two new states. Political pressures within a state may force its leaders to face up to the possibility of new borders. A wise modern democracy makes provision for this. In Canada, the narrow Quebec ‘no’ vote in its 1995 referendum stopped the separatist Québécois movement in its tracks. In a year’s time, Scotland will have a referendum on independence: the rest of the United Kingdom does not dispute the right of Scots to form a new state within Scotland’s traditional territory. This has (so far) provided the wider world with a model for how such sensitive, divisive issues ought to be tackled. If those eccentric Britishers decide to get divorced, why should anyone else object? The problems get much trickier when a linguistic or cultural community straddles existing borders. 
The challenge to territorial integrity then affects more than one state, and even significant communities with a credible claim to self-determination lose out. Spain and France together keep the lid on Basque demands for independence. The aspirations of 30 million Kurds to live in a country of their own would require Turkey, Iraq, Iran, Armenia, Syria and possibly Azerbaijan all to agree to reduce their territories, which, of course, is unlikely ever to happen. When there is no clear way peacefully to decide who decides, things can turn horribly violent. The separation of Bangladesh from Pakistan in 1971 claimed up to a million lives. Hundreds of thousands of people died when Biafra tried to break from Nigeria in the 1960s. More recently, Moscow has sent troops to flatten Grozny and kill thousands of Russian citizens, all to prevent Chechnya from breaking away as a new state. South Sudan became the newest member of the United Nations following a popular referendum in 2011 that was intended to end a long and ruinous civil war. Intense fighting does not necessarily lead to a clear-cut outcome. Ambiguous ceasefires can drag on indefinitely. Taiwan and its 23 million inhabitants live in a curious twilight zone of international law, recognised by only 22 smaller countries and the Vatican. Cyprus has been divided since 1974. Nagorno-Karabakh and Transdnistria are two forlornly ‘frozen conflicts’ within the former Soviet Union, would-be siblings of that great litter of ex-Soviet states. And then there is Kosovo, one of the toughest nuts of contemporary international relations, as I discovered when I started work in the British Embassy in Belgrade in 1981. Located on a plateau between Montenegro, Albania, Macedonia and Serbia, Kosovo covers some 4,000 square miles — less than half the size of the New York metropolitan area. As of August this year, 101 UN member states — a majority of the world’s countries led by the US, Japan, Nigeria and most European Union members — have recognised it as an independent state. Even so, it cannot join the United Nations, because 92 UN member states do not recognise its independence. These holdouts represent a clear majority of the world’s population and include such global heavy-hitters as China, India, Russia and Brazil, plus four members of the EU. The deadlock is not complete: Egypt, the largest Arab country, declared its support in June this year. But with only five new recognitions so far in 2013, after a modest 12 in 2012, there is no prospect of Kosovo joining the UN in the foreseeable future. How did such a stalemate arise? To understand that, we need to appreciate the haphazard character of Balkan history. Yugoslavia was cobbled together from some of the remnants of the Austro-Hungarian and Ottoman empires following the First World War. The new nation brought together several different ethno-linguistic communities in one state. Reflecting and exacerbating these divisions, the Second World War saw Yugoslavs killing Yugoslavs on a fearsome scale. In the rush for power that followed, Josip Broz Tito and his communists emerged on top. Tito invented a constitutional solution for Yugoslavia’s woes. He created six republics within local borders that had some historic resonance: Slovenia, Croatia, Serbia, Bosnia & Herzegovina, Macedonia and Montenegro. Within Serbia, two ‘autonomous provinces’ were set up: Kosovo and Vojvodina. 
In the later years of Yugoslavia’s history the state presidency was an eccentric eight-person job-share among the six republics and the two autonomous provinces: Kosovo and Vojvodina had their own voices at the highest national level. This system hung together right up until Tito’s death in 1980. Then, without the great dictator around to impose control, no one could decide what the country actually was — a single, centralised state or an exotic federation. During the 1980s, while communism was collapsing across Europe, Yugoslavia’s contradictions became unbearable. Two republics — Slovenia and Croatia — played the self-determination card and broke away from the Yugoslav framework. An indignant Serbia, now led by Slobodan Milošević, deployed the Yugoslav National Army against them. Milošević argued that it was wrong to divide the country along the borders of its internal republics, borders that had (he claimed) been drawn up to disadvantage Serbs; why should Serbs living in Croatia or Bosnia suddenly find themselves minorities in these new states? Milošević had some good points, but his willingness to use violence against his neighbouring republics repulsed those Western nations that might have accepted his logic. In any case, the international community had no appetite for rummaging around in the region’s messy history to try to negotiate new borders that gave enough self-determination to every Balkan community. Where to start? How to get eggs from an omelette? Plumping for what looked like the easier option, western governments joined with post-communist Moscow to work out a plan based on Yugoslavia’s existing internal republics. Slovenia made sense as a new independent state: Slovene-speakers predominated and no one seriously disputed their historic territory. Elsewhere the situation was much less clear. The large Serb communities in both Croatia and Bosnia & Herzegovina wanted to stay within a national framework that included Serbia, and they had active military support from Milošević’s Belgrade. Conflict erupted, the worst of it between Bosnia’s Muslims, Serbs and Croats. The Bosnian conflict ran from 1992 to 1995, ending with the Dayton peace accords: if the warring Bosnians could not decide an outcome for themselves, the world would do it for them. The US negotiator Richard Holbrooke forced the agreement through with European and Russian support and the close involvement of the leaders of Croatia and Serbia, thereby achieving full international support. The new Bosnian constitution was a hotchpotch of ethno-territorial compromises. The state’s borders were determined by the hotly contested territorial integrity of an internal Yugoslav republic, while self-determination claims for Bosnia’s Croat and Serb communities were simply ignored. It’s not easy to make a country work when half its population reject the basis for its existence. Eighteen years later and despite colossal financial support from the US and EU, Bosnia barely functions as a modern state. While the Dayton accords were being pushed through, Kosovo was part of a much reduced ‘Federal Republic of Yugoslavia’ that comprised only two of the original six Yugoslav republics, Serbia and Montenegro. The time had come, it seemed to the Kosovars, Kosovo’s Albanian-speaking community, to break from Belgrade’s control. In 1996 a self-styled Kosovo Liberation Army began attacking police stations. 
Belgrade hit back, and tens of thousands of Serbia’s own Albanian-speaking citizens fled their homes. NATO bombed Serbian targets, trying to make Milošević abandon his heavy-handed military operations against Kosovan targets. In June 1999, UN Security Council Resolution 1244 put Kosovo under UN supervision and started a political process to decide its status. Serbian military forces left Kosovo. After years of diplomatic machinations — and, this time, openly angry disagreements between Western capitals and Belgrade and Moscow — Kosovo proclaimed itself independent in February 2008. Today, the fighting has stopped, but the struggle has moved to a different level — a diplomatic battle over Kosovo’s status. Kosovo asserts its right to self-determination amid the wreckage from the collapse of Yugoslavia. Serbia insists that its territorial integrity has to be respected. There is one fact in Kosovo that everyone accepts: Kosovars heavily outnumber Serbs. After the Second World War, the Yugoslav authorities’ attempts to improve the lot of the country’s Albanian-speaking population led to a sharp drop in child mortality without a correspondingly speedy reduction in family size. A boom in Kosovar numbers now ripples down the decades. Each year some 15,000 more Kosovars are born than die. Each year in Serbia some 33,000 more Serbs die than are born. The numbers currently are roughly 50,000 a year in Kosovo’s favour. These discrepancies don’t sound large, but they mount up. Less than a generation from now the population difference between Serbia (currently some seven million) and Kosovo (currently approaching two million) is likely to shrink by nearly one million people. You see the difference. Kosovo is scruffy bustle, full of young people running kiosks and workshops or just moving around. Cross into Serbia and the scene abruptly empties out: lush meadows and tidy little farms, but eerily few humans. Sooner or later Kosovars will be trying to make lives for themselves across that border. A de facto ‘greater Albanian space’ will seep outwards into Serbia. For decades successive leaderships in Belgrade have watched this demographic drama unfold. Dobrica Ćosić, the Serb nationalist writer who later became president of the rump Federal Republic of Yugoslavia, feared that these demographic forces would leave Serbs outnumbered not just in Kosovo but in Serbia as a whole. He told me in 1983 that Kosovo had to be cut off to prevent greater losses: ‘Sometimes you amputate the cancerous leg to save the body.’ No Serbian leader has dared take on this grisly task. Both Serbs and Kosovars insist that, on any fair view of the problem, Kosovo ‘belongs’ to their side. They cite all sorts of historic and other evidence to support these mutually incompatible claims. Serb experts agree that the ethnic balance of the Kosovo population has ebbed and flowed down the centuries. Nevertheless, ancient Serb monasteries and other sites attest to Serbia’s historic claim to the land. Serbs insist that it is not fair to accept today’s heavy Albanian-speaking majority as a decisive argument in favour of self-determination, because that advantage is artificial: for decades, external force and communist-era manipulations have whittled down Kosovo’s Serbian population, tilting the numbers in the Albanian-speakers’ favour. 
They add that NATO’s intervention in the late-1990s led to tens of thousands more Serbs leaving, and that the UN Administration then failed to set up a comprehensive ‘returns’ programme like the one that featured so prominently in post-conflict Bosnia. In Bosnia, the Serbs point out, Holbrooke’s Dayton settlement gave the three rival communities a blunt message: Stop fighting! Get along with each other nicely, in a single state framework. Allow people back to their homes! Yet just down the road in Kosovo, the American-led policy is the opposite: If you Kosovars want to leave a democratic Serbia, who are we to stop you? Indeed, here’s plenty of our taxpayers’ money, with few strings attached! Returning Serbs to their homes in Kosovo? Not a priority. Asked to explain these inconsistencies, Western politicians study their shoes. Kosovar experts coolly reply that Kosovo’s Albanian-speakers have plenty of their own historic claims to territory in this part of Europe. More importantly, the Serbs need to accept that their own sustained bad policies have brought about their downfall. For decades if not centuries, Serbs have looked on Kosovars as inferiors: ‘The snow’s deep — get a Šiptar to sweep it,’ they say, where Šiptar is an ethnic slur on Albanians. As one grim episode after another has shown, Serb leaders in Belgrade have defaulted to violence when dealing with legitimate Kosovar demands. In the 1990s Milošević drove hundreds of thousands of Kosovars — Serbia’s own citizens — out of the territory. Kosovars believe that the Serbian dream is an idealised Kosovo without them, an ambition that is as ignoble as it is unachievable, except by means of so-called ‘ethnic cleansing’. Given all this, say the Kosovars, how can we be expected to live under Serb rule? Kosovo had the substantive status of a republic in communist Yugoslavia: now the overwhelming majority of Kosovo’s people want to follow all the other Yugoslav republics that have broken free from what they see as Serbia’s malign and lugubrious nationalism. More than half the countries in the world see the political logic and moral justice in this position. For purely demographic reasons, Serbia’s position is doomed to fail. In today’s world, numbers count. And the Kosovars have the numbers on their side. States either win global recognition and a flag at the United Nations, or they don’t. Kosovo is either part of a modern Serbia, or it isn’t. Once Washington, London and other capitals decided to recognise Kosovo within its Yugo-era borders — in the face of strong opposition from Moscow, Beijing and many other important power-centres — all options for a single-state solution were lost. We could have reached into the European and global bran-tubs of precedent to find something that might work: Swiss-style cantonisation, or EU-supported power-sharing, or new ‘entities’ echoing the Dayton deal in Bosnia, or far-reaching autonomy such as Greenland enjoys under Denmark. Generous financial assistance and technical support could have launched the new deal. Had the two sides still shown implacable unwillingness to live under one flag, we could have accepted this reality with a dash of pragmatism (usually the wisest approach) and proposed a deal that traded territorial integrity for self-determination. Kosovo would get its full independence — including recognition from Belgrade — only if some of its Serb communities were offered the option to stay in Serbia. 
Meanwhile, the Preševo Valley community of Albanian-speakers in Serbia might be invited to choose to join Kosovo. Both sides would be expected to make strategic compromises, with internationally supervised border adjustments reflecting the democratic wishes of the different local communities. The rest of the world would have nodded at this good sense and waited to endorse any deal that emerged. In recent years, Serbia has floated such ideas and more. They have all met with EU and US disdain as cheap tricks to promote nasty ‘mono-ethnicity’ – as if Kosovo’s independence were not itself all about self-determination for a largely mono-ethnic Kosovo. Kosovars have no reason not to play for the maximum, and neither Europeans nor Americans use their huge leverage to challenge them. Serbia therefore falls back on Belgrade’s traditionally good ties with other key world capitals. It presses the attractive argument that, these days, it’s a wise move to see what Washington and Brussels want and to do the opposite. Thus today’s diplomatic stalemate that divides the planet. Kosovo in effect vetoes Serbia’s European Union bid. Serbia makes it clear that, without its blessing, Kosovo won’t join the EU or the UN. This deadlock over territory and allegiance is one that the wily princes, dukes and bishops drafting the Peace of Westphalia would easily recognise. No diplomat can be surprised that so many capitals round the world refuse to follow the clumsy lead of the US, London and Brussels on this issue. The vast majority of the countries that have not recognised Kosovo don’t care about Kosovo or Serbia. For these countries it’s not about Balkan bickering — it’s about their own security. Yes, some minority communities want to run their own affairs. But territorial integrity underpins the way the whole world works: grave dangers come from trashing that fundamental principle in the face of serious international objections. It’s one thing to amputate parts of your gangrenous leg yourself. It’s quite another for NATO to lunge in, wielding a rusty hacksaw. Good grief, who might be next? Syria? | Charles Crawford | https://aeon.co//essays/who-gets-to-say-what-counts-as-a-country | |
Family life | My father rescued me time after time when I was lost in addiction. What kind of father will I be for my new daughter? | I was cruised the other night at a park near my house in Denver, Colorado. It’s a normal park with tennis courts and a lake and a playground and Mexican families throwing fun-looking picnics on the grassy hills. I was pushing my four-month-old daughter around the lake when I noticed a man following me. He was probably in his mid-30s with skinny everything, and not really attractive, but not necessarily ugly. I thought it was strange, the way he kept crossing my path, the way he let his eyes linger, the way he smiled like he was feeling everything I was putting down (admittedly, I was wearing five-inch inseam running shorts, so I probably wasn’t completely blameless). He eventually spoke one word to me — Hey — and I said hello back. But then I shook my head and he understood I wasn’t looking to trade pleasures. I chalked the whole thing up to me still having it, even being 30 and slightly fat and a new father. I walk my daughter around this park every night. The entire loop takes about 35 minutes. She laughs and coos and sleeps and cries. You can hear dogs barking and a rollercoaster from the nearly condemned amusement park and mariachi spewing from boom boxes. And when you walk by the lockable bathrooms you can hear the soft grunts of the men inside. This is a sound that I once would’ve smirked at, thought Good for you guys, go get it; but the other night, being harmlessly trailed while pushing my daughter, it filled me with some sort of quiet panic. My daughter was born to beeping monitors indicating a plummeting heart rate and a doctor losing his cool, saying We need to get this baby out now. At one point, everything started to get all nitrous oxide-y inside of my head with Wha-wha echoes and diamonds in my peripheral. A nurse rushed me to a chair. My wife pushed. There was blood everywhere. I thought about everything we’d gone through — nine months of fear and preparation and excitement, and watching American Horror Story in the middle of the night because my wife couldn’t sleep, and the pink and green letters I’d hung in the nursery that spelled out our unborn child’s name — all that for naught, I thought, as I imagined her never taking a breath. Then I looked at my wife and realised I needed to step up and quit being so pathetic. And then I was there by my wife. I saw a head. A back. And I was praying to a god I’d given up years before — I’ll quit masturbating and tobacco, I’ll volunteer and never complain, and I’ll be a better son and husband and employee, and please, just fucking please, let her be OK. The doctor placed our daughter on my wife’s chest. My daughter screamed like a pterodactyl. Her eyes were open, and though she couldn’t really see me, it felt that way, her all shivering and beautiful, staring at her father. Back when I was 16, growing up in the suburbs of St. Paul, Minnesota, I felt oppressed by my rich parents and my prep school and seven acres alongside a golf course (such a moron, I know), and ran away with my college-aged girlfriend to New Orleans, for the 10-day Jazz Fest. I was addicted to meth. We drove in the Toyota my father had given me for my birthday. After 1,000 miles on the road, I eventually answered my cell phone. My parents yelled and screamed, and then pleaded. My father just kept saying Please. I told him I wasn’t coming home, not until after Jazz Fest — then I’d go to rehab. Please, son, please. 
I told him it just felt right, to trust me, to let me be my own person. Please. For some reason, I thought about the time me and my father had gone to our cabin in Northern Minnesota. Just the two of us. I was 13 — that age when all I wanted to be doing was calling girls on the phone — but somehow, I quit caring about my invigorating social calendar and enjoyed myself. My father taught me to whittle, which, up until that point, had been his thing done in the basement after I’d gone to bed. That weekend, I made a retarded-looking gnome. My father kept telling me how well I was doing, how I was a natural. Flakes of basswood covered our socks. He kept telling me he was proud. Please, son, please. I hung up the phone. There is a system at the park, involving a parking lot next to the bathrooms. Cars that come in and park with the hoods facing the hill are waiting for somebody specific. Cars that come in and back into the space are looking for anyone. These men aren’t attractive. Most of them are on the saggy-jowls side of old, and fat. They get out of their respective cars and stretch. Then one will saunter up the embankment to the stone outhouse. Ten or 15 seconds later, a second will follow. I know these men are lonely and that there’s nothing wrong with bumping uglies, randomly or prearranged. But that recent night I let my mind choke on the irrational fear I’ve consumed from television. I’m picturing sex offenders and rapists and men who cut screens to the row of houses across the street. I’m imagining them stealing my daughter, who has just recently discovered her own feet and can’t get enough of their taste. I hate myself for this fear, this prejudice: this Fox News mindset of equating sex and threat. For getting old. For the traction-gaining idea that we need to move outside of the city to a place with a fence and a lawn and, for sure, a security system. I hate myself for becoming my father. Which was more than apparent the other weekend. My wife and I were in the mountains for our anniversary. We brought our daughter, which was great, but also a far cry from the vacations we’d enjoyed over the previous 10 years. So eight o’clock rolled around, and we were spent, the curtains drawn, us in bed. We rented Harmony Korine’s teen road movie Spring Breakers (2012). My wife fell asleep in a matter of seconds. I held my daughter. This should’ve been a movie I loved — James Franco and a slutty Selena Gomez, and sex and youth and drugs and mindless violence, all layered over the notion of there being something inherently pointless to our American way of life. Yet I didn’t enjoy it, not in the slightest, not with that feeling of panic rising in my chest. I thought about my daughter being one of these girls. I thought about the boys she’ll fall for and the drugs she’ll ingest and the crazy things she’ll do out of paternal resentment and the times she’ll run away and tell me it just feels right. Only six months ago, I had envisioned parenthood like this: we’d be hip and we’d live in the city and we’d walk around as a young family drinking coffee and eating out. We wouldn’t dress our daughter in too much pink because we weren’t those kind of people. And there’d be no forced photographs. No books read about how to raise a baby, and no fights between my wife and me about parenting tactics. It’d be our lives as we knew it, only better with the accessory of a child. 
It took me all of one night with my daughter to realise that shit was about me, not her. A year after Jazz Fest, New Orleans, after two stints in rehab and being kicked out of high school and having made the trade from uppers to downers, and you have me at 17, once again run away, this time to San Francisco. I was still with the same girlfriend, only now she was sucking off this dude named Twig. I was a mess. I was seeing my self-proclaimed spirit animal (a clichéd crow) everywhere. I had broken into an Econo Lodge and climbed through the window, stripped down naked and cut a huge slice across my abdomen — attempting to free my soul — before lying down in a pool of blood and watching a documentary about Jesus. Eventually, my girlfriend called my parents. I didn’t know this at the time. My father flew across the country and showed up in the middle of the night. I remember a banging on the door. I crawled out of bed completely naked. My stomach was stained red, as were the white sheets. My father stood there all small and speckled gray. He grabbed me by the back of the head and pulled me close, his own forehead pressing to mine. He said, Jesus Christ son. When I think about parenthood now, I realise it has absolutely nothing to do with me. I need to get over my stigmas about self-gentrification. To know that if the schools are unequivocally better and safer in the suburbs surrounding Denver, we should just sack up and move there. I need to realise that part of parenthood is giving up certain notions about myself. To understand that getting older and being a father means making the choices I best believe will benefit my daughter. To allow my father’s protective tendencies to become my own. I don’t have to start wearing polo shirts tucked into my pleated khakis; but putting money aside for my daughter instead of getting a new tattoo is probably wise. But I also need to remind myself that a park full of dudes hooking up and guys smoking rock — all shit I’ve done — doesn’t equate to slit screens and a kidnapped daughter. To realise that I can do everything in my power to protect her, as my parents did me, but eventually she’ll need to understand the world and its ugly truths, before seeing its flawed beauty. My job is to prepare her for these experiences. To plead over the phone when she tells me This just feels right. To pray to that same abandoned god that she’ll eventually ask for my help. Even more, when I think about parenthood now, I think about the wishes I have for my daughter. All I want is for her to know she’s loved. To never feel fat or afraid or less than the nasty kid with perfect teeth in her fourth grade class. I want her to be healthy. I want her to know she can come to me with absolutely anything. I want her to pursue whatever the hell excites her. I want her to have every opportunity I did — loving parents who’d fly across the country to save their child from any horrible situation, as well as a lawn with a dog, a good school, and summers spent riding bikes without seeing dudes hook up or crack rocks being smoked from thin glass tubes that once held a miniature paper rose. I want to protect her from men like James Franco in Spring Breakers. From men like I used to be. I want her never to drink or smoke, and if she does, at least never to move on to narcotics. I want her to know that I’d do anything in this world to protect her from certain things I’ve experienced. 
And I want for there to be an unceremonious weekend spent at a cabin where I teach her how to woodcarve, the memory of which eventually becomes ingrained in her mind as synonymous with paternal love. I want her to know she is the most precious thing in my life, and to not have this knowledge weigh too heavily on her chubby shoulders. | Peter Stenson | https://aeon.co//essays/i-never-imagined-id-turn-into-my-dad-but-perhaps-i-need-to | |
Biotechnology | It’s made in a lab, no factory farms and no killing, but it’s still meat. Looks like we’ll need a whole new food ethics | The chef Richard McGeown has faced bigger culinary challenges in his distinguished career than frying a meat patty in a little sunflower oil and butter. But this time the eyes and cameras of hundreds of journalists in the room were fixed on the 5oz (140g) pink disc sizzling in his pan, one that had been five years and €250,000 in the making. This was the world’s first proper portion of cultured meat, a beef burger created by Mark Post, professor of physiology, and his team at Maastricht University in the Netherlands. Post (which rhymes with ‘lost’, not ‘ghost’) has been working on in vitro meat (IVM) since 2009. On 5 August this year he presented his cultured beef burger to the world as a ‘proof of concept’. Having shown that the technology works, Post believes that in a decade or so we could see commercial production of meat that has been grown in a lab rather than reared and slaughtered. The comforting illusion that supermarket trays of plastic-wrapped steaks are not pieces of dead animal might become a discomforting reality. The IVM technique starts with a harmless procedure to remove myosatellite cells — stem cells that can only become muscle cells — from a live cow’s shoulder. They are then placed in a nutrient solution to create muscle tissue, which in turn forms tiny muscle fibres. Post’s burger contained 40 billion such cells, arranged in 20,000 muscle fibres. Add a few breadcrumbs and egg powder as binders, plus some beetroot juice and saffron to give it a redder colour, and you have your burger. I was at the suitably theatrical setting of the Riverside Studios in west London to see the synthetic burger unveiled. The TV presenter Nina Hossain was hired to provide a dose of professionalism and glamour to what was in effect a live TV show, filmed by a substantial crew for instantaneous webcast. When the lights dimmed, images of gulls flying over gentle sea waves were projected onto two screens by the sides of the stage. Over some sparse, slow, rising guitar chords, Sergey Brin, the co-founder of Google and a donor of €700,000 to Post’s research, uttered the portentous words: ‘Sometimes a new technology comes along and it has the capability to transform how we view our world.’ He was right. Never before has a human eaten meat without harming or killing an animal. But in a strange way the slick presentation detracted from the truly historic nature of the moment. A scientific landmark was sold to us in the manner of a glitzy product launch, a piece of corporate puff. What was most striking to me was how the presentation led, not with science, but with ethics. As the introductory film continued, the scientist Ken Cook, founder of the US public health advocacy organisation Environmental Working Group, pointed out that ‘70 per cent of the antibiotics used in the United States now are not used on people, they’re used on animals in agriculture, because we keep them in such inhumane, overcrowded conditions’. He then reeled off a list of UN-backed statistics: ‘18 per cent of our greenhouse gas emissions come from meat production. We’re also using something like 1,500 gallons of water to produce just one pound of meat. 
Meat takes up about 70 per cent of our arable lands.’ Some might be surprised by this alliance: advocates of the most radical technological fix to our food supply agreeing with the critics of the contemporary industrialised food system — especially since the production of cultured meat tackles the problem by taking an even bolder technological leap forwards, rather than a step back to an older style of agriculture. Dr Frankenstein is not supposed to also be an environmentalist who agrees that we must reduce land and water use, as well as synthetic inputs such as pesticides, fungicides and fertilisers, many of which depend more or less directly on oil for their production. The idea that IVM might have a part to play in a cleaner, fairer food system runs counter to a central idea put forward by many critics of industrial agriculture: that farming needs to be based more on traditional, natural, biological and ecological systems, not artificial mono-cultures. Surely in vitro meat would be the most artificial mono-culture of them all. Professor Mark Post of Maastricht University presents his ‘cultured beef’ burger. Photo by David Parry/PA. The belief that we have to choose between a food system that is over-dependent on technology and one that is more in harmony with nature rests on the assumption that there is a neat moral and conceptual contrast between ‘natural’ and ‘artificial’, and that this lines up neatly with the distinction between ‘good’ and ‘bad’. If IVM is the greenest, most animal-friendly meat, yet it is even more artificial than a pitiful, intensively reared broiler chicken, then no one can maintain the fantasy that bucolic nature has a monopoly on good, ethical food. For those who have campaigned for a more ethical and sustainable food system, IVM is a good test of where their values really lie: with hard-nosed ethics or soft-focus sentiment. After all, it is hard for anyone concerned about the environment or animal welfare to disagree with Post’s claim that ‘from an ethical view [IVM] can have only benefits’. Cultured meat has the potential to replace lame, belching, farting, grain-guzzling, confined beasts with clean, safe, sustainable meat, direct from the factory floor. Faced with this unsettling truth, how have greens and animal rights campaigners responded to Post’s synthetic burger? The environmental movement has generally been quietly, cautiously and moderately supportive, with chief executives leaving it to more junior colleagues to make low-key comments. Friends of the Earth International issued a short statement without fanfare in which its food campaigner Kirtana Chandrasekaran said: ‘It’s really positive that people are talking about alternatives to meat,’ but stressed that commercial IVM is a long way off and that ‘we can reduce our meat-eating now’. Similarly, Emma Hockridge, head of policy at the UK’s Soil Association, conceded that ‘this new technology is interesting’, but emphasised how ‘there are also many simpler solutions to feeding our growing population available to us now’. 
Greenpeace is more sceptical, issuing no statement on any of its global websites but telling the San Francisco Chronicle: ‘Synthetic meat distracts agricultural research and funding away from ecological farming, the real solution to the disastrous livestock model that causes environmental and socioeconomic crises and does not meet the dietary needs of the global South.’ While IVM remains a somewhat speculative and fringe issue, it is understandable that environmentalists remain cool about it, believing they have bigger fish to conserve. The animal welfare world, on the other hand, has responded more vocally and appears deeply divided on the issue. People for the Ethical Treatment of Animals (PETA) has bravely and unequivocally come out in support of IVM. ‘In vitro technology will spell the end of lorries full of cows and chickens, abattoirs and factory farming,’ says its most recent statement on the issue. The UK’s Compassion in World Farming also gives Post’s breakthrough a cheer. ‘This could be a real game-changer,’ says its chief executive Philip Lymbery, ‘transforming the way meat is produced in ways which potentially come with great environmental, health and animal welfare benefits.’ Perhaps most strikingly, the philosopher Peter Singer, whose book Animal Liberation (1975) was a founding text for the modern movement, wrote in The Guardian: ‘I haven’t eaten meat for 40 years, but if in vitro meat becomes commercially available, I will be pleased to try it.’ The Vegan Society in the UK, however, worries that IVM will promote demand for meat and the stigmatisation of vegans by ‘perpetuating a myth that meat is and will always be intrinsically desirable’. The UK’s Vegetarian Society is more circumspect, but equally unwilling to welcome the developments. ‘Why go to this much trouble and expense to replace a foodstuff that we simply do not need?’ asks its chief executive Lynne Elliot. Even Post conceded at the launch: ‘Quite frankly, vegetarians should remain vegetarians. That’s even better for the environment and for the animals than the cultured beef alternative.’ But interestingly, none of the objections raised by the two major UK vegetarian societies have anything to do with the two main ethical reasons for giving up meat in the first place: concern for animal welfare and the environment. In resisting IVM, the societies seem to be reflecting the views of their members rather than following clear moral principles. In a poll run on the website of the Vegetarian Society, nearly four in five said they would not eat IVM, while fewer than 7 per cent said they would. But why should there be such reluctance among vegetarians (who, for the purpose of this argument, I’ll take to include vegans) to welcome IVM when, from an animal-welfare point of view, it is nothing other than good news? Even if it doesn’t turn out to be commercially viable, the case for cultured meat rests very heavily on the unacceptability of intensive animal farming, and so shines a light on the ethical objections to the meat industry. The only logical way to make sense of the reluctance of many vegetarians to back IVM is that their choices are not as driven by animal welfare and environmental considerations as we — and they — assume. 
Perhaps a distaste for eating meat is a visceral feeling that is only loosely connected to an ethically motivated imperative not to cause undue suffering to animals. Many people cannot distinguish between their ‘all-things-considered’ moral judgment and their unmediated gut feelings, mistaking reflex revulsion for ethical insight. Ingrid Newkirk, the president and co-founder of PETA, is refreshingly free of this confusion, which is perhaps why she can welcome IVM, even though she would not eat it. ‘Any flesh food is totally repulsive to me,’ she told NBC News. ‘But I am so glad that people who don’t have the same repulsion as I do will get meat from a more humane source.’ All of us, not just vegetarians, are at risk of confusing our base disgust and distaste with high principle. ‘Natural’ food feels right, ‘synthetic’ food feels wrong, so we are all too quick to dismiss the evidence that lab meat might be a good thing after all. And if you’re motivated to find the evidence that supports your gut feeling, there’s plenty from which you can pick and choose. But there is a huge difference between building your position on a firm evidence base and building an evidence base to support your position. We might believe our moral reasoning is evidence-led, but more often than not, we find ourselves led only to the evidence that conforms to our existing views. The biggest obstacle to a more nuanced view of food ethics might be that it is so difficult to accept that ethics is not just about choosing between the good and the bad, but balancing different, competing goods. IVM highlights how there are several different desiderata for good food, and you can’t have all of them all of the time. We want food to be delicious, diverse, healthy, affordable, sufficient for everyone, environmentally sustainable, good for animal welfare, from farms that make the countryside more attractive and use as few chemicals as possible. Many of these values are in conflict with one another for much of the time. For example, ugly fields covered with polytunnels can produce tasteless but plentiful, cheap, environmentally friendly and nutritious vegetables. The tastiest, most cow-friendly outdoor-reared beef is too expensive to feed the majority, and can produce more greenhouse gases and use more land than that from intensively reared cattle. There are some crops for which yields are so much better if grown non-organically that it makes no rational sense to avoid using pesticides and herbicides that are well within safe levels. IVM is just the most recent, vivid example of how our desire for the natural, traditional and aesthetically appealing food can clash with the value we place on animal welfare, environmental sustainability and the humane imperative to feed the whole world well. Almost all the coverage of IVM has glossed over this pluralism, presenting commentators as either for or against, period. Perhaps that’s because balancing competing goods on a case-by-case basis is difficult, and we’d much prefer the simplicity of arranging them in a hierarchy, so that we always know what trumps what. It’s just too hard to accept that sometimes two things we highly prize, such as animal welfare and conservation, can pull in different directions. We easily fall prey to the philosophical myth that the good is unitary and whole, rather than plural and fragmented. 
We need to accept that most of the time ‘one can only gain one value at the expense of another, that whatever one chooses entails the sacrifice of something else’, as the political theorist Isaiah Berlin put it so eloquently in one of his letters. The challenge of IVM remains more theoretical than real for now. The two volunteer tasters at the launch were reasonably positive about the end product, but IVM clearly has a long way to go before it can compete with real mince for taste, while copies of proper cuts such as steaks and chops are nowhere near being made. Hanni Rützler, an Austrian food scientist, praised the ‘perfect’ consistency and ‘intense taste’, but found it ‘not that juicy’, due to the failure to grow fat, and that it needed more seasoning. Josh Schonwald, author of The Taste of Tomorrow (2012), said it had the mouth feel of meat, but he too was underwhelmed by the flavour. More fundamentally, it is not yet clear that IVM could become a viable commercial proposition. Post’s team makes it sound as though the process is essentially simple, as though there isn’t much for the scientists to do. He describes how the cells ‘start dividing on their own’ and ‘naturally merge, arranging themselves into small myotubes’. By this process, ‘a few cells can become 10 tonnes of meat’. But given that Post himself doesn’t see commercial production being possible for another 10-20 years, clearly the process is much more difficult than this makes it sound. Part of the challenge is scaling up to industrial production. Christina Agapakis, a synthetic biologist at the University of California, Los Angeles, captured the scepticism of many in the scientific community about this when she wrote in a blog for Discover magazine in April 2012 that ‘scaling is the deus ex machina of so many scientific proposals’, though often dismissed as simply an ‘engineering problem’. She points to many technical problems that scaling up will pose, which no one yet knows how to solve. At the launch, I put some of these doubts to Post, who responded by pointing out that they had come this far in only a few years with a team of three or four people. That, he says, ‘already attests that we can come up with a viable solution within 10 years’. But progress is rarely linear and you cannot simply project where you will be in the future by drawing a graph of how far you have come to date and continuing the line. Whether or not IVM fills a gap in the food supply, other technologies are coming along that will. Indeed, some already have. Golden Rice is a β-carotene-enriched genetically modified variety to help counter vitamin A deficiency in the developing world, developed not by Monsanto for profit, but by a humanitarian group funded by the Rockefeller Foundation. Meanwhile, aeroponic systems allow crops to be grown efficiently in a nutrient-enriched mist, without the need for any soil or growing medium. That is why the ethical conundrum of IVM is important, whether or not the product itself becomes commercially viable. Of course we would be right to be cautious before buying into the often exaggerated claims made for the latest science, which is often based more on promissory than delivery notes. Suspicion of the claims of science to solve our food problems is well-grounded, as long as it does not descend into knee-jerk dismissals. 
But to forsake the benefits of any radical technological development because we are trapped in an overly simplistic, singular view of what good food means would be a terrible mistake. Science is not the answer to all our prayers but it has to be part of the answer to our food supply problems. We need to reach a point where we are neither romantically devoted to traditional, small-scale farming methods nor addicted to technological fixes. We have to be able to determine the roles of both. As John P Reganold, professor of soil science and agroecology at Washington State University, put it when assessing the merits of organics: ‘a blend of farming approaches is needed for future global food and ecosystem security’. There are many ways in which food can be good or bad, and we cannot afford to pretend that we can get all that we want without getting some of what we don’t. Just as a healthy body needs a balanced diet, so a healthy attitude to food production means balancing different goods and not allowing one to become the master virtue, denying the claims of all others. Like children who are told to eat up their greens, we have to accept that we sometimes have to swallow things that we find unpalatable, for our own good and that of the world. | Julian Baggini | https://aeon.co//essays/is-there-any-reason-vegetarians-can-t-eat-lab-grown-meat | |
Physics | If human history turns on the tilt of the multiverse, can we still trust our ideas of achievement, progress and morality? | Andy Murray’s unexpectedly strong start against Roger Federer in the Wimbledon 2012 final put the Daily Telegraph columnist Matthew Norman in a science-fiction mood. ‘It seemed we’d been transported to one of those parallel universes into which Doctor Who likes to slip with insouciant ease,’ he commented. A year later, that alternative world became reality, as Murray took the title, leaving journalists to apply the same familiar image to others. Contrasting Murray with the doubles champion Jonny Marray — who still rents a flat and drives a Ford Fiesta, despite holding a Grand Slam title — the Daily Mail opined: ‘The stark reality is that the two champions, who share a passion for tennis, live and work in a parallel universe.’ Where did this idea of parallel universes come from? Science fiction is an obvious source: in the 1960s, Captain Kirk met his ‘other self’ in a Star Trek episode called ‘Mirror, Mirror’, while Philip K Dick’s novel The Man in the High Castle (1962) imagined an alternate world in which the US was a Nazi puppet state. Since then, the idea has become mainstream, providing the image of forking paths in the romantic comedy Sliding Doors (1998), and the spine-chilling ‘What if?’ in Philip Roth’s novel The Plot Against America (2004), which envisaged the anti-Semitic aviator Charles Lindbergh defeating Roosevelt in 1940. But there’s also science fact. In 1935, Erwin Schrödinger proposed his famous thought experiment involving a cat in a box whose life or death is connected to a quantum event, and in 1957 the American physicist Hugh Everett developed his ‘many worlds’ theory, which proposed that the act of opening Schrödinger’s box entailed a splitting of universes: one where the cat is alive, and another where it is dead. Recently, physicists have been boldly endorsing a ‘multiverse’ of possible worlds. Richard Feynman, for example, said that when light goes from A to B it takes every possible path, but the one we see is the quickest because all the others cancel out. In The Universe in a Nutshell (2001), Stephen Hawking went with a sporting multiverse, declaring it ‘scientific fact’ that there exists a parallel universe in which Belize won every gold medal at the Olympic Games. For Hawking, the universe is a kind of ‘cosmic casino’ whose dice rolls lead to widely divergent paths: we see one, but all are real. Surprisingly, however, the idea of parallel universes is far older than any of these references, cropping up in philosophy and literature since ancient times. Even the word ‘multiverse’ has vintage. In a journal paper dating from 1895, William James referred to a ‘multiverse of experience’, while in his English Roses collection of 1899, the poet Frederick Orde Ward gave the term a spiritual cast: ‘Within, without, nowhere and everywhere;/Now bedrock of the mighty Multiverse…’ At the far reaches of this hidden history is Democritus, who believed the universe to be made of atoms moving in an infinite void. Over time, they would combine and recombine in every possible way: the world we see around us is just one arrangement among many that are all certain to appear. 
For Epicurus, who thought that atoms sometimes undergo a sudden random movement (‘swerve’), the whole future is not mapped out by mechanical principles, as it is for Democritus. Its paths are multiple. Epicureanism was the doctrine that survived into Roman times — as a philosophy of life in general, not just a physical theory. It was celebrated by Lucretius’s poem De Rerum Natura, and by Cicero in a passage of the Academica: Would you believe that there exist innumerable worlds… and that just as we are at this moment close to Bauli and are looking towards Puteoli, so there are countless persons in exactly similar spots with our names, our honours, our achievements, our minds, our shapes, our ages, discussing the very same subject? For Epicurean atomists, history was a succession of accidental collisions. Human affairs were subject to the laws of matter, or pure chance, not the will of gods, and everywhere and always the outcomes of events might have been otherwise. Thus Livy (not an atomist, though a believer in chance) speculated on what might have transpired if Alexander the Great had invaded Italy. Such ‘What if?’ scenarios were shunned by later Christian historians, who saw divine providence as the principle guiding the grand course of human affairs. As Shakespeare’s Hamlet put it: ‘There’s a divinity that shapes our ends,/Rough-hew them how we will.’ In the 17th century, the mathematician and philosopher Gottfried von Leibniz introduced a new kind of multiverse. He was intrigued by the way that so many natural processes appear ‘optimised’ — soap bubbles minimise surface area by being spherical; light beams take the quickest route through space. Detecting the work of a divine hand, Leibniz proposed that the universe is optimised in every detail by God. Thus was born ‘optimism’, the idea (ruthlessly parodied by Voltaire in Candide) that we live in the best of all possible worlds. Applying the theory to the problem of why evil exists, Leibniz gave it graphic form, as a pyramid, infinite and many-roomed, in each of which is a possible world. At the pyramid’s peak is the one true world we inhabit. Leibniz modelled the various possible lives of the notorious Sextus Tarquinius, speculating that in most rooms Sextus leads a virtuous life, but in the highest he rapes Lucretia and is banished. Why is that the best of possible worlds? Because his banishment leads to the founding of the Roman Republic: an evil act produces a greater good. Or, as optimists say nowadays when trying to come to terms with disaster, everything happens for a reason. Unlike Democritus (who was an atheist), Leibniz insisted that the possible worlds exist purely in the mind of God, who selects one of them for true existence. Like a hologram, his universe is projected by God into every mind and made consistent by a ‘principle of harmony’. What makes it authentic is God’s benevolence: he wouldn’t play the nasty trick of making us believe in the reality of a false world. That scenario would be left for much later writers to contemplate, in darkly sinister films such as The Truman Show (1998) and The Matrix (1999). Alexander Pope’s poem ‘An Essay on Man’ (1734) helped to sustain a Leibnizian optimism (‘Whatever is, is right’), and only in the 19th century do we see a significant re-emergence of the idea that the world might be shaped by chance after all. The British scholar Isaac D’Israeli, father of the future prime minister Benjamin, speculated in 1823 that ‘often on a single event revolve the fortunes of men and of nations’. 
In the essay ‘Of a History of Events Which Have Not Happened’ (1830), he paid tribute to Livy’s ancient example by exploring historical ‘What ifs?’ that imagined Cromwell forming an alliance with Spain, or a Muslim Britain under Saracen domination, where ‘we should have worn turbans, combed our beards instead of shaving them, [and] have beheld a more magnificent architecture than the Grecian’. D’Israeli’s essay is one of the first examples of the ‘alternate history’ genre that Philip K Dick and so many others would take up, often as a subversive reaction against the ruling elite’s assumption that they were entitled to power. Before then, the upheavals of French politics proved fertile territory for such revisionist speculation. Napoléon et la conquête du monde, 1812–1832 (1836), by Louis Geoffroy, brought the victorious emperor to Britain, while in 1854 he made it to India in Joseph Méry’s Histoire de ce qui n’est pas arrivé (‘History of What Never Happened’). Charles Renouvier’s Uchronie (1876) offered a complete rewriting of European history. Most interesting of all, Louis-Auguste Blanqui’s L’éternité par les astres (1872) offered an updated version of the Democritean multiverse which used 19th-century atomic theory to argue that there exist physically real planets where Napoleon won the battle of Waterloo. Blanqui, a lifelong revolutionary agitator, was imprisoned by every regime under which he lived. The Paris Communards demanded his release so that he could be their president, and Karl Marx said that, had their wish been granted, the Commune might not have fallen. But Blanqui was too left-wing even for Marx and his collaborator Friedrich Engels, who denounced him as an anarchist. Like Engels, Blanqui dabbled in scientific speculation. He’d picked up enough contemporary science to appreciate the probabilistic nature of two great theories of the time: thermodynamics and natural selection. He also appreciated the intimate connection between political ideology and scientific interpretation. It was Marx who said that the theory of natural selection was essentially a description of capitalism without the concept of class conflict, but Blanqui would surely have appreciated the observation. Marx, in fact, had studied Democritean atomism for his PhD in philosophy, and his own theory of history was similarly mechanistic: the ultimate rise of the proletariat was as inevitable as the fall of an apple. History is progress, and can only go one way, propelled by the class struggle. For Blanqui, atomism implied a different universe, for as well as planets where revolution succeeds, there are also ones where it has failed, or is failing right now. Every moment is effectively an eternity in space, repeated in different places in every possible variation. Historical progress is therefore illusory — a local phenomenon without meaning in the greater multiverse. Blanqui’s bleak but supposedly rational vision could be seen as a pseudoscientific counterpart to other 19th-century nightmares of the educated mind: the heat-death of the universe or the extinction of species. It was a vision that later gripped the German literary critic and philosopher Walter Benjamin. In the 1920s, Benjamin embarked on a study of 19th-century Paris that would become The Arcades Project, a mass of quotation and commentary that remained in a fragmentary and disordered state when he died in 1940. 
One offshoot was the essay ‘On Some Motifs in Baudelaire’, in which Benjamin comments on the rise of gambling and speculation, the way that each throw of the dice represents a new start, a new world. He compares this to the factory conveyor belt, where each component is brand-new yet identical to the one before. The machine operator spends a day endlessly repeating some simple physical gesture, then finds amusement in doing the same thing at a slot machine. The mechanised world, like capitalism itself, is an apparent offer of constantly renewed hope, when really the one thing it must produce in order to perpetuate itself is a sense of constantly increased need. For Benjamin, what was crucially new in the 19th-century world view was the crowd – that is, the statistical mass. He didn’t cite thermodynamics or natural selection, but instead two stories, ‘The Cousin’s Corner Window’ (1822) by E T A Hoffmann and ‘The Man of the Crowd’ (1840) by Edgar Allan Poe, which dramatised this new way of seeing the collective rather than the individual. With it came the rise of chance as a factor in people’s lives. In a village you don’t bump into the stranger who changes your life; in a city you might. In the course of his studies, Benjamin discovered Baudelaire’s particular fascination with Blanqui, and this is perhaps how, in the late 1930s, Benjamin came to read L’éternité par les astres, writing excitedly to his fellow philosopher Max Horkheimer about it. According to Benjamin, Blanqui’s theory represents a tragic capitulation to everything the old revolutionaries fought against — a vision of bourgeois existence remodelled as cosmology, with replicated worlds like mass-produced consumer goods, inspiring passivity and boredom. Around the same time, and quite independently, Blanqui’s book was read in Argentina by Jorge Luis Borges, who shared it with his friend and fellow writer Adolfo Bioy Casares. Casares was inspired to write a short story called ‘La trama celeste’ (1948) — ‘The Celestial Plot’ — in which an airman crashes and finds himself in a parallel world; the plot hinges on copies of Blanqui’s book that are differently paginated in the different universes. Borges himself refers to Blanqui in his 1936 essay ‘A History of Eternity’. For Borges, Blanqui’s vision is heavenly — like the archive he describes in his short story ‘The Library of Babel’ (1941), a building that contains every possible book among its randomly generated texts. What Borges never considered in his story is how many millions of light years any poor soul would need to travel in order to find as much as a page worth reading. To any real inhabitant, the library would be indistinguishable from chaos, and it is only from the lofty vantage point of literary contemplation that the place assumes order. For Benjamin, however, the multiverse is not an intellectual parlour game, but a damning reflection of the society that produces it. In a proposed introduction to The Arcades Project, Benjamin compares Blanqui’s multiverse to Baudelaire’s poem ‘Les sept vieillards’ (‘The Seven Old Men’, 1857), which takes a succession of identical old men and imagines them as a single man multiplied in some ‘infamous plot’. This, says Benjamin, is an image of modernity itself. An eventual consequence of such dehumanisation was the rise of fascism. In one of his last essays on the philosophy of history, Benjamin says that to understand fascism we need to appreciate how in an oppressive regime every day is presented as a new emergency. 
Given that war is the archetypal splitting point for alternative history, perhaps the threat of fascism accounts for the rise in popularity of parallel-world stories in the 1940s, sometimes as wish-fulfilling escapism, as in the film It’s a Wonderful Life (1946), or else as warnings of alternatives that could so easily happen. In Borges’s short story ‘Tlön, Uqbar, Orbis Tertius’ (1940), for example, an invented world causes reality itself to cave in. A year later, Borges again worked the theme of branching realities, in a wartime spy story called ‘The Garden of Forking Paths’. When the American physicist Seth Lloyd met Borges at a Cambridge reception in 1983, he asked him if he was aware that this story eerily prefigured Hugh Everett’s concept of many worlds. Borges had never heard of it, but said that it didn’t surprise him that physics sometimes followed literature. After all, physicists are readers, too (of literature, and of history). The theories of Everett, Feynman and others are highly technical, but physicists looking to explain them in ordinary language draw on the same common stock of image and metaphor as everyone else, and that stock has been around for a very long time. Feynman’s idea about light taking every possible path is essentially Leibniz’s, only without the need for God. That said, modern-day optimism is no longer a belief that all things were created for the best; it’s the belief that in the cosmic lottery, anyone can be a winner, whether you’re Andy Murray or just buying a lottery ticket. After training as a theoretical physicist, I took up the harmless occupation of writing novels while many of my contemporaries went into finance. And look where they got us. Optimism is all very well, but sometimes scientists need to be reminded that ‘fact’ is a word to be handled with care. Corrections, 10/09/2013: This essay previously stated that the passage of the Academica quoted was by Livy; its author was Cicero. This has been corrected. | Andrew Crumey | https://aeon.co//essays/can-the-multiverse-explain-the-course-of-history | |
Anthropology | If you want to know what money is, don’t ask a banker. Take a leap of faith and start your own currency | I don’t have much money. Then again, I couldn’t say exactly how much I do have. In the Co‑operative Bank’s IT system is a database entry that says I have £97 in electronic money. In my wallet I have three £10 notes, pieces of paper with pictures of the Queen on them, issued by the Bank of England, promising me £30. I have six pieces of metal, too — copper-nickel alloy and nickel-plated steel, to be exact — valued at 59 pence in total. So that’s £127.59. My wallet also contains a £5 Brixton Pound note — a local currency found only in the south London neighbourhood where I live — which I got as change from a local bar called Kaff. On my mobile phone is a series of text messages telling me that I have B£39.61 on my online Brixton Pound account. This is electronic money stored in a database, sent to me by local residents to pay for copies of a book I wrote. If I open my computer, there is a programme called MultiBit that connects to a distributed computer file called the blockchain. It contains a record that says I have 3.8462 BTC — or bitcoins: I earned them selling the book to people in Israel, the USA, Sweden, the Netherlands, Switzerland and New Zealand via an exchange called BitPay. That’s not all. I have a Totnes Pound, 20 South African rand, and a couple of hours saved up on a timebanking platform. It’s possible that I have other money, too — but that depends on what you mean by ‘money’. The authorities have their own view of what counts as legal tender, or ‘real’ money, but even so, the definition has been controversial since antiquity. One of the most interesting sections in the British Museum is a display of counterfeit coins — not ‘real’ according to the powers that be, but frequently real enough to the people who used them. I, too, was once a counterfeit of sorts. A left-wing chancer with a background in anthropology, I immersed myself for two years in the financial sector, styling myself as whatever a derivatives broker is supposed to be. I wanted to uncover what exactly goes on in networks of power, to experience those dynamics first-hand, like a kind of gonzo journalist. For the most part, the work involved trying to peddle giant bets — known as swaps and options contracts — to various investors, banks and corporations. In the process of doing it, though, I stopped being a counterfeit and started becoming a real broker, internalising the cultural codes of high finance. I was thrown out in 2010, and ever since then I’ve been fascinated with developing other ways in which one might go about exploring, rewiring and playing with the financial system. The trouble is, while my experiences in mainstream finance taught me a lot about what the industry does, they only gave me glimpses into the nature of the mysterious stuff it does it with. The financial system exists, above all, to mediate flows of money, not to question what money is. Investment banks create financial instruments that steer money from one place to another, with built-in sub-conduits to siphon it back — extractive devices used by investors. To draw an analogy with computer coding, we might say that financial instruments are analogous to ‘high-level’ programming languages such as Java or Ruby: they let you string commands together in order to perform certain actions. You want to get resources from A to B over time? 
Well, we can program a financial instrument to do that for you. By contrast, money itself is more like a low-level programming language, very hard to see or to understand but closer to gritty reality. It’s like your computer’s machine code, interfacing with the hardware: even the experts take it for granted. You might need to explain to someone what a bond is, but nobody is ever ‘taught’ what money is. We just see it in action and learn how to use it. Indeed, the only way you ever tend to get a glimpse of it is in relief, by contrast with another programming language — or when you’re forced to build it again from scratch. Most people never get this opportunity. If you had to ask the average person in the financial industry to explain what money is, they’d probably rattle off the economics textbook description: ‘a means of exchange, a unit of account, and store of value’. This is not a very helpful definition. How can a piece of paper store 10 pounds of value? What are pounds anyway? Money sounds like it’s an ordinary noun, a self-contained object. If it is a physical object, it must be paper or metal or digits on a computer. And yet, very few of us think a £5 note is merely a piece of paper: the same idea of £5 can be expressed in electronic or metal form, after all. No, to delve deeper into the nature of exchange we need some first principles, and these take time to uncover. The best guides in this half-lit territory turn out to be not economists, but rather the loose bands of monetary mystics and iconoclasts who are developing strange new exchange technologies. They are a scattered tribe, with elders including the likes of Bernard Lietaer, Ellen Brown and Thomas Greco, sages passing on tips on how to breach the Monetary Matrix. I find myself sitting in a pub in Stockwell, south London, with Matthew Slater, a nomadic developer of open-source currency systems, and discover that he no longer bothers with a bank account or a fixed address. ‘Once you’ve broken through appearances,’ he says, ‘there’s no going back.’ I was introduced to Slater by a common friend named Jem Bendell. The two of them met in an Indian hippy colony called Auroville in Tamil Nadu, and Bendell is now a professor of sustainability leadership at the University of Cumbria and an undercover monetary revolutionary. I have an enduring memory of a TED talk in which he ripped a banknote into pieces, trying to make the point that the paper itself doesn’t have value. Everyone in the room winced, as if to say: ‘You could have given that to me!’, but Bendell was getting at a powerful idea. If money is an object, it must be an enchanted one, charged up with value by a subtle cultural process. Why else would anyone exchange a box of coffee for a rectangle of paper? Shopkeepers accept the paper because they believe that it has abstract value — because, in turn, they believe that others believe it, too. The value is circular, predicated on each person believing that others believe in it. You hand over your money and claim something from the shopkeeper, almost as if the coffee were owed to you. Then they take the claim that was previously yours and use it to claim something from someone else. We all trust each other to value money — but this still means that every monetary transaction is a leap of faith. And faith has to be carefully maintained. The idea that money rests on belief makes some people uncomfortable. 
There’s a popular assumption that money emerged, in some dimly imagined past, out of barter — that it was just a more precise means to make direct exchanges. Mainstream economists still trot out this explanation, although anthropologists such as David Graeber have shown that there is little evidence for it. It’s a reassuring myth, one that obscures the deep difference between barter and monetary exchange. In the former, nothing is left unresolved and no faith is required. It’s a closed circuit, a like-for-like swap. By contrast, money transactions are never closed; you pass on an abstract, faith-based claim in exchange for a tangible good. At any given moment, the economy consists of only a limited number of actual goods and services, which people attempt to claim with money. If there are too many claims floating about, then the underlying value allocated to each one must decrease. This is what we call inflation, and it is a source of permanent anxiety in monetary communities. Which brings us to our first major psychological lever for maintaining faith in money: how the money is created. Banknotes in the early 19th-century US often used to trade at a discount to metal coins. People distrusted them, and the ability of those who controlled their production to make as many as they wanted; there was something comforting about the hard metallic certainty of coins and their obvious connection to scarce metals. Remnants of this mentality are still found in ‘goldbugs’ and ‘sound money’ advocates, people who think that gold and other precious metals have an intrinsic value lacking in government-backed electronic or paper money. When central banks issue more money (which commercial banks subsequently amplify via fractional reserve banking), the goldbugs shout: ‘Can’t you all see that government money is madness! Run to gold!’ But gold is no more intrinsically valuable than official government money. It fails the desert island test of value: would you want it if you were stranded alone somewhere remote? All it really has is beauty and scarcity, plus an ancient cultural link with the idea of currency. Gold reveals the basic tension in the textbook definition of money — the idea that it can be both a store of value and a means of exchange. For the most part, when something is truly valuable in itself, people are disinclined to part with it (why swap rum for something else when you can just drink it?). The monetary enchantment appears to work best when its tokens merely appear valuable, while containing no true value. That’s how you convince people both to accept them and to give them away, rather than consuming them. And so the second trick to making money believable depends on how it appears. The British Museum is full of the kind of shiny trinkets with which vainglorious monarchs liked to parade around — a whole aspirational structure imprinted into useless metals and cowrie shells. Seashells and golden baubles like these are excellently suited for widespread exchange: you might not be able to do much with them, but they’re easy enough to keep and powerful people appear to collect them, which is good enough for most of us. Do such artefacts ‘store’ value? Of course not. That’s just a socially sanctioned pretence, a pragmatic, covert, wink-wink, let’s-not-talk-about-this charade. 
Nevertheless, over time, the fantasy becomes such a deep habit that no one person can stand up and point out the absurdity of the situation. At that point, it’s the dissidents who seem mad, while the people swapping useful goods for bits of metal, paper or meaningless electronic data look perfectly sane. This gives us our third major psychological prop to the monetary faith: what are the sanctions for not using it? If enough people believe in it, you will lose out if you don’t follow suit. Similar network effects arise with social platforms such as Facebook — in theory, you can opt out, but only if you don’t mind the penalty of social exclusion. What’s more, when integrated into a national legal system and backed by the threat of violence, the sanctions for dissent become rather persuasive. At the unsubtle end of the spectrum, the monarch may simply throw you in jail for not using her preferred currency. Still, as the government of Zimbabwe learnt the hard way, you cannot order people to believe in money if the underlying story isn’t convincing enough. In 2008, cultural belief in the Zimbabwe dollar began to evaporate (a process euphemistically known as ‘hyperinflation’). The underlying economic activity in the country — based largely on agriculture — was falling into disarray. Despite strenuous attempts at coercion, the government was eventually forced to turn to ‘hard’ foreign currencies, more convincingly backed by robust economies. By 2009, the process of disenchantment was complete: the Zimbabwe dollar was dead. A newspaper called The Zimbabwean started running ads printed on the old notes, saying: ‘It’s cheaper to print this on money than paper.’ Money is a complex cultural technology. Sometimes it breaks down, but that just gives us all the more reason to tinker with its blueprints. Each new system, though, will have its own psychological side effects and trade-offs. We know what mainstream currencies such as the US dollar are good for: overcoming barriers between buyers and sellers who don’t particularly know or trust each other. The trouble is, by reducing the need for personal trust relationships, mainstream money encourages social atomisation, to the point where arms-length purchasing starts to seem like the only valid kind of transaction. Look at the obsession economists have with measuring gross domestic product in monetary terms. GDP is supposed to reflect what is created in society, but if my grandad builds me a table in his workshop, it’s not included in GDP, and if I buy a table in Ikea, it is. The former is not considered valid production, whereas the latter is. That is arbitrary, and obviously something has gone wrong. The problem might be that mainstream money is simply too efficient. It numbs people into forgetting that it’s a socially pragmatic delusion, and so we take it for granted, just as we take oxygen for granted. But oxygen is vital for our survival, whereas money is only an intermediary tool, cushioning us from the base-level economic production that actually sustains us. There’s an ecological dimension to this, of course, which is my overriding concern. Our ability to exchange without knowing where things come from blinds us to the real core of the economy: not money, but the physical things we must wrench from the ground by human effort, which is underpinned by agricultural systems, and energised by sunlight, water and soil. The more we abstract and fetishise money as a thing in itself, the more we lose sight of its sources and its goals. 
We get confused, and feel disempowered relative to those who wield larger flows of it. Sealed off from inquiry in its hermetic shell, money distorts our perceptions of one another. We can’t seem to remember that it is merely one means of exchange among many. What energies would we unleash if we were to break open that opaque shell and split the monetary atom? I don’t suggest that we start suspiciously eying the change handed back to us in shops. Coins are designed to be symbolic and abstract, and perhaps that’s required. What we need, though, is the right kind of doublethink, a carefully managed form of cognitive dissonance that allows us to see the centuries of real technological change that lie behind them, the oil and dirt and oceanic dragnets, the limestone blast furnaces and neon lighting systems and chemicals synthesised from fossilised trees. Perhaps we can tinker with the word ‘money’ itself. It’s a mass noun, like you’d use for some kind of tangible substance, and it makes money sound like a ‘thing-in-itself’. As a kind of mental discipline, I prefer to use a different word: COGAS. It stands for ‘claims on goods and services’, which is all money really is. And now I have a word that describes itself, as opposed to one that actively hides its own reality. It sounds trivial, but the linguistic process works a subtle psychological loop, referring money to the world outside itself. It’s a simple way to start peeling back the façade. To go deeper, we need to start actually experimenting with alternatives. Money, we know, is a technology, and it can be designed for different purposes — always for exchange, of course, but with auxiliary characteristics. To uncover and experience these characteristics, I actively play around with as many esoteric currencies as possible. For example, I’ve sold 17 copies of my book with Bitcoin, an electronic ‘crypto-currency’ that captured the public imagination earlier this year. It is a fascinating experiment, and one of the first alternative currencies to reach any significant scale without the help of legal backing. Friends got involved with Bitcoin a few years ago when it still seemed almost entirely ethereal. They talked a lot about Bitcoin’s elegant and robust design, but in my opinion, it got much of its initial boost simply because it seemed so cool, rebellious, mysterious and novel. To my eye, it has a certain offbeat, grimy digital aesthetic, like electronic graffiti. Now, as more people enter the scene, Bitcoin is solidifying into a monetary reality. An entire community has grown up around digital signatures, imbuing them with imagined value, giving the currency a life of its own that eventually extends far beyond the initial in-group. If doubt can destroy a currency, then a cultlike process of evangelical faith-building can create one. Bitcoin has one very interesting attribute and, to understand it, we should look to the theoretical disagreement between the Enlightenment political philosophers Thomas Hobbes and Jean-Jacques Rousseau. Hobbes was a pessimist. In order to escape the ‘war of all against all’ that he believed was the natural state of human existence, he thought that individuals ought to submit to the will of a central sovereign who could act as arbitrator in disputes. 
We’ve traditionally associated this with political authoritarianism, but it also serves quite well as a description of mainstream money. Most of our money nowadays is electronic, ‘stored’ in an oligopoly of private banks that are themselves connected via a central bank. We rely on these institutions to keep an accurate score of our electronic money. Brett has £97, they say. Trust us, we have it recorded in our IT database. Rousseau had the radical idea that Hobbes’s arbitrator needn’t be a single dictator or oligarchy. Instead, it could be the collective, or the general will. So it goes with Bitcoin. In place of a centralised, hierarchical group of banks keeping score of the money, a decentralised network of individuals records every transaction on a virtual ledger called the blockchain. Brett has 3.8462 BTC, the network says. We’ve collectively kept score of that. In this scenario, my ‘account balance’ is less like the ruling of a sovereign and more like the result of a popular democratic vote, mediated via a computer network. In normal, bank-mediated electronic transactions, someone tells their bank to send money to your bank and then the banks edit the buyer’s and seller’s account balances to reflect the transaction. It’s a strange feeling, then, to accept an electronic payment with no banks involved, then pack a book into a parcel and write an address in America on it simply because someone announced to a network of strangers that they had paid me. The recording of the transaction in the cloud appears as nothing more than a series of numbers on my computer, yet there I am, putting the physical book in the post. Why do I do it? I accept bitcoins for the same reason that I accept normal money. Mainstream money is used to replace a specific trust relationship with a general one. I take British pounds from a specific person because I trust that I can exchange those pounds for something else within the general British pound-using community. Likewise, I take the bitcoins from the specific buyer because I trust that the broader Bitcoin community will accept them from me in exchange for something of intrinsic value. The main departure from normal electronic money is that Bitcoin uses a decentralised network in place of a central hierarchy. The advantages are anonymity, a sense of freedom and, it has been argued, a more resilient system. Indeed, Bitcoin has become especially popular with libertarian anarcho-capitalists because its supply is regulated, not by the government or the private banking oligarchy but by ‘apolitical’ mathematical protocols. Theoretically, anyone can make new ones, but it’s a very time-consuming technical process and the operation can only ever be performed a finite number of times, a bit like gold mining. For this reason, bitcoins are naturally scarce. Inevitably, Bitcoin evangelists often fall into the trap of thinking that the value of their favoured currency must somehow be more ‘real’ than that of government-backed money, just like goldbugs do. In fact, all it means is that one form of monetary faith has replaced another. If digital currencies such as Bitcoin attempt to spread exchange to a global level, local currencies aim to concentrate economic energy into a small space. My Brixton Pounds, and other local currencies, such as the Bristol Pound and the Toronto Dollar, are only redeemable within local neighbourhoods. 
Where Bitcoin seeks to attack the centralising tendency of a nation state, the Brixton Pound is a (gentle) attack on structures that undermine local community resilience. Part of the essence of the Brixton Pound is its deliberate inconvenience. We’re used to thinking that absence of friction must be a virtue in any transaction, but a local economy thrives on inconvenience. Chance encounters in the street market help to bind a community together and give it richness of character. We lose all that when we opt for the robotic mediocrity of the automatic till and debit-card reader. It’s a fine balance, of course, and the Brixton Pound recently added a pay-by-text system that combines the ease of electronic payment with the richness of local exchange. I still have to hand-deliver the books I sell that way — knocking on the door of a guy called Rico who writes a food blog, having a chat, getting to know someone I didn’t know before. The inconvenience is where the connection comes in. Who knows? Maybe that apparently ‘inefficient’ method of hand-delivery could lead to a productive new relationship. Hey Rico, I need someone to cater an event, can you help me out? Extreme efficiency of exchange, in other words, might come at the cost of developing new business contacts. Then there’s my friend Matthew Slater, who has developed an open-source software package that allows you to start a whole range of different currencies. Would you like it time-based, commodity-based, mutual credit, or fiat? His package can do it all. It’s like a Swiss army knife of options. In this, perhaps we can see one vision for the future of money — a future based on diversity, where we can move in and out of exchange technologies as we need. And perhaps only a handful of different monetary systems are required. Local currencies such as the Brixton Pound are about localising, whereas digital currencies such as Bitcoin are about decentralising and internationalising. Meanwhile, so-called demurrage currencies — deliberately engineered to lose value over time — are about energising the volume of transactions, as people have no incentive to hoard them. Freicoin, for example, is an attempt to create a demurrage version of Bitcoin, neutralising the hoarding impulse built into Bitcoin’s psychological structure. Then there are timebanks — community systems where people directly exchange labour time, which is about humanising and reconnecting exchange. Increasing the diversity of monetary technologies doesn’t only have the potential to create more resilient economies. It is also empowering. Perceiving a choice (especially when it is limited) is generally a good thing, leading to richer, more self-directed experience: I have chosen to use this technology of exchange for this particular purpose. The alternative is unconscious acceptance of a dominant monoculture — one that, even if it is stable, is psychologically destructive (or, at the very least, dull). I have fewer Brixton COGASs now than when I started this article, fewer claims on goods and services. I drank three coffees in Kaff, claiming them by B£ text message, and the caffeine has sent my thoughts spiralling in all directions. For example, it’s all very well having these alternative currencies, but perhaps what I need is a Brixton Pound savings account, a way to pass on my stored up B£ to someone else who might use them. I need higher-level programming. 
Imagine financial instruments and micro-investment banks situated in a local community, steering local currencies into local projects via Brixton bonds and community shares. And think of this: young people from the local area could run it. Screw Cityboys when we can have inner-city boys. The micro-investment banking revolution will come in due course. And who knows, maybe one day even ordinary bankers will understand the nature of the money they so confidently claim to control. They’re behind the curve though. It’s the monetary mystics who are on the cusp of a wave. In my in-box is a message from Eli Gothill, the designer of the Twitter gift currency #PunkMoney. He tells me about a peer-to-peer payments system called Ripple. It could be the next big thing, he says. I sign up and receive 1000 free Ripples, and I have no idea what that means. | Brett Scott | https://aeon.co//essays/so-you-want-to-invent-your-own-currency | |
Mood and emotion | I am an atheist and a Quaker. Does it matter what I believe, when I recognise that religion is something I need? | I read voraciously as a child, even obsessively. Our family drove across the US when I was 13, and I hardly noticed the scenery, eyes glued to a mammoth book of classic science-fiction stories. As I recall, this ticked off my parents. Magical stories moved me to tears. I vividly remember, at the age of eight, being surprised at how deeply the second chapter of Astrid Lindgren’s The Brothers Lionheart (1973) affected me. The narrator dies and goes to the land where sagas come from, and when he arrives he finds that all that he had wanted — to be strong, healthy and beautiful like his older brother — has come to be, and that his beloved brother is there, too. And this is just the beginning of the story. I remember arriving at the end of Penelope Farmer’s The Summer Birds (1962) and weeping bitterly as the children, who have spent the summer flying about the English countryside, return gravity-bound to school while their lonely classmate and the strange bird-boy fly off together over the ocean. This essay wasn’t supposed to be about the stories I read as a child. It was supposed to be about how I manage to be an atheist within a religious community, and why I dislike the term ‘atheism’. But however I wrote that essay, the words died on the page. That story comes down to this: I do not believe in God, and I am bored with atheism. But these stories, this magic, and their presence in my heart, they don’t bore me — they are alive. Even though I know they are fiction, I believe in them. My main religious practice today is meeting for worship with the Religious Society of Friends: I am a Quaker. Meeting for worship, to a newcomer, can feel like a blank page. Within the tradition of Friends, it is anything but blank: it is a religious service, expectant waiting upon the presence of God. So it’s not meditation, or ‘free time’. But that’s how I came to it at first, at the Quaker high school I attended. After almost 15 years away, I returned to Quakerism in 1997. During a difficult patch of my life, a friend said I needed to do something for myself. So I started going to the meeting house on Sunday mornings. What I rediscovered was the simple fact of space. It was a hiatus, a parenthesis inserted into a complicated, twisty life. Even if it held nothing but breath, it was a relief, and in that relief, quiet notions emerged that had been trampled into the ground of everyday life. I am an atheist, but I’ve been bothered for a long time by the mushiness I’ve found in the liberal spiritual communities that admit non-believers such as me. I’ve spent the better part of two decades trying to put my finger on the source of this unease, but it is not a question to be solved by the intellect: it must be lived through. Several years ago, Marshall Massey, a fellow Friend, pointed out to me that ‘truth’, in the sense that it was used by 17th-century Friends, had less to do with verifiable evidence, and more to do with the sense of being a ‘true friend’, an arrow flying true. It was about remaining on a path, not about conforming to the facts of the world. This points to a deep truth: we humans are built for a different kind of rigour than that of evidentiary fact. 
It is at least as much about consistency, discipline and loyalty as it is about the kinds of repeatable truth that we hold up in a scientific world as fundamental. This is a large part of what drew me to the Friends rather than the Unitarians or other study groups. Binding oneself to specific patterns, habits, and language seems to have the effect of providing a spine, and Quakers seemed to have more of this spine than other groups I was attracted to. It was a partial solution to my sense of mushiness, but it certainly didn’t solve everything. If you are really going to be part of a community, just showing up for the main meal is not enough: you need to help cook and clean up. So it has been with me and the Quakers: I’m concerned with how my community works, and so I’ve served on committees (Quakerism is all about committees). There’s pastoral care to accomplish, a building to maintain, First-Day School (Quakerese for Sunday School) to organise. And there’s the matter of how we as a religious community will bring our witness into the world. Perhaps this language sounds odd coming from a non-theist, but as I hope I’ve shown, I’m not a non-theist first. I’ve been involved in prison visiting, and have been struck by the variety of religious attitudes among volunteers: some for whom the visiting is in itself ministry, and others for whom it’s simply social action towards justice (the programme grew out of visiting conscientious objectors in the Vietnam era). The point is: theological differences are not necessarily an issue when there’s work to be done. But the committees I’ve been in have also had a curious sense of unease, a sense of something missing, and I’ve now been on three committees that were specifically charged with addressing aspects of a sense of malaise and communal disconnect. The openness of liberal religion resonates strongly with me. It means I do have a place, and not just in the closet or as a hypocrite. But I wonder if my presence, and the presence of atheists and skeptics such as me, is part of the problem. People need focus. There’s a reason why the American mythologist Joseph Campbell chose the hero’s journey as his fundamental myth: we don’t give out faith and loyalty to an idea nearly as readily as we give it to a hero, a person. And so a God whom we understand not as a vague notion or spirit, but as a living presence, with voice and face and will and command — this is what I think most people want in a visceral way. In some ways, it’s what we need. And I do not believe such a God exists in our universe. Here’s a peculiar sense I’ve been getting in Friends committee meetings: we often don’t know how to seek the will of God; we are uncertain whether God actually possesses will. And yet, I suspect that the way out of our tortuous debates is to stop arguing and submit. That submission — because that’s what it is, in the same sense that islam means submission — is what pulls us out of ourselves and gets us lined up to do what needs doing instead of arguing about whose idea is better. In the 17th century, the Quaker theologian Robert Barclay argued for the bodiless Holy Spirit as the only way to reach Christ and then God. Nowadays, we might find comfort in the spirit alone, or the Light, as Quakers describe an inwardly detected sense of the divine. But submission to something so vague is difficult. We might love and treasure and ‘hold our beloved friends in the Light’, but that’s not a humbling of self, a laying low of ego, and that is what I believe we are missing. 
How can we do that? How can I do that? Submitting to something I am pretty sure doesn’t exist? How can I bow down to a fiction? I did it all the time as a child. Open the cover of the book, and I’m in that world. If I’m lucky, and the book is good enough, some of that world comes with me out into the world of atoms and weather, taxes and death. It’s a story, and sometimes stories are stronger than stuff. Maybe part of the trick is realising that it doesn’t have to be just my little bubble of fiction. I can read a novel, or I can go gaming into the evening with friends. I can watch a ballet on a darkened stage, or I can roar along to my favourite band in the mosh pit. I hated school dances with a passion, yet I have been a morris dancer for 23 years now: I just had to find the form that was the right fit. I don’t pray aloud, or with prescribed formulas. But I can ask Whatever-There-Is a question, or ask for help from the universe, or say thank you. And now that I’m in a place with a better fit, sometimes I get answers back. And so there I am, a confirmed skeptic, praying in a congregation. A year and a half ago, our family began worshipping with a smaller Conservative Friends group. Conservative Friends are socially and theologically liberal but stricter in adhering to older Quaker practices. The group uses the Montessori-based Godly Play curriculum for the children: it’s all about stories. Every session begins with a quieting and a focusing. The leader tells a story from the Bible or from the Quaker story book. Then ‘wondering’ questions are asked that spur the children to reflect on what’s going on, and what they would do in the same situation. I wish I’d had this great programme as a child. The teacher is a good storyteller who clearly loves the kids, and they love the stories and the time with their friends. To me, it’s such an improvement on school-style lessons. It says: this is a different kind of knowing and learning — this is not about facts and theories you need to learn, but about the stories we want to become part of your life. I love facts and theories, the stuff of the world. I spend most of my life wrestling and dancing with all this amazing matter. As the Australian comic Tim Minchin says in his rant-poem ‘Storm’ (2008): ‘Isn’t this enough? Just this world? Just this beautiful, complex, wonderfully unfathomable world?’ And yes, it’s enough. We don’t need to tell lies about the real world in order to make it magical. But we do still need impossible magic for our own irrational selves. At any rate, I do. Because I don’t feel stuff-and-logic-based explanations deep down in my toes. There are no miracle stories of flying children there, or brothers reborn into the land where the sagas come from. The language of ‘stuff is all there is’ tells me that I can — even ought to — be rational and sensible, but it doesn’t make me want to be. ‘Atheism’ tells me what I am not, and I yearn to know what I am. What I am has a spine, it’s a thing I must be true to, because otherwise it evaporates into the air, dirt and water of the hard world. Maybe I — we — need to start small, rebuilding gods that we talk to, and who talk back. Or just one whom we can plausibly imagine, our invisible friend. Maybe part of our problem is that we don’t actually want to talk to the voice of Everything, because Everything has gotten so unfathomably huge. 
George Fox, the founder of Quakerism, didn’t have to think about light years, let alone billions of light years. The stars now are too far away to be our friends or speak to us in our need. Maybe we could talk to a god whom we imagined in our house. Maybe we could ask what is wanted, and hear what is needed. Maybe that god would tell us not to tramp over the earth in armies, pretending we are bigger than we are, and that dying is OK, because it’s just something that happens when your life is over. Maybe we would ask for help and comfort from unexpected places, and often enough receive it and be thankful for it. Maybe we need to name that little god something other than God, because maybe our God has a boss who has a boss whose boss runs the universe. Maybe we name this god Ethel, or Larry, or Murgatroyd. Maybe there is no god but God… or maybe there just is no God. And maybe it doesn’t matter. Maybe we just tell stories that ring true to us and say up-front that we know they are fiction. We can let people love these stories or hate them. Maybe imagining impossible things — such as flying, the land where sagas come from, God — is what is needed. Maybe we don’t need the gods to be real. Maybe all we need is to trust more leaps of the imagination. | Nat Case | https://aeon.co//essays/im-an-atheist-so-how-did-i-end-up-such-a-committed-quaker | |
Film and visual culture | Schizophrenics used to see demons and spirits. Now they talk about actors and hidden cameras – and make a lot of sense | Clinical psychiatry papers rarely make much of a splash in the wider media, but it seems appropriate that a paper entitled ‘The Truman Show Delusion: Psychosis in the Global Village’, published in the May 2012 issue of Cognitive Neuropsychiatry, should have caused a global sensation. Its authors, the brothers Joel and Ian Gold, presented a striking series of cases in which individuals had become convinced that they were secretly being filmed for a reality TV show. In one case, the subject travelled to New York, demanding to see the ‘director’ of the film of his life, and wishing to check whether the World Trade Centre had been destroyed in reality or merely in the movie that was being assembled for his benefit. In another, a journalist who had been hospitalised during a manic episode became convinced that the medical scenario was fake and that he would be awarded a prize for covering the story once the truth was revealed. Another subject was actually working on a reality TV series but came to believe that his fellow crew members were secretly filming him, and was constantly expecting the This-Is-Your-Life moment when the cameras would flip and reveal that he was the true star of the show. Few commentators were able to resist the idea that these cases — all diagnosed with schizophrenia or bipolar disorder, and treated with antipsychotic medication — were in some sense the tip of the iceberg, exposing a pathology in our culture as a whole. They were taken as extreme examples of a wider modern malaise: an obsession with celebrity turning us all into narcissistic stars of our own lives, or a media-saturated culture warping our sense of reality and blurring the line between fact and fiction. They seemed to capture the zeitgeist perfectly: cautionary tales for an age in which our experience of reality is manicured and customised in subtle and insidious ways, and everything from our junk mail to our online searches discreetly encourages us in the assumption that we are the centre of the universe. But part of the reason that the Truman Show delusion seems so uncannily in tune with the times is that Hollywood blockbusters now regularly present narratives that, until recently, were confined to psychiatrists’ case notes and the clinical literature on paranoid psychosis. Popular culture hums with stories about technology that secretly observes and controls our thoughts, or in which reality is simulated with virtual constructs or implanted memories, and where the truth can be glimpsed only in distorted dream sequences or chance moments when the mask slips. A couple of decades ago, such beliefs would mark out fictional characters as crazy, more often than not homicidal maniacs. Today, they are more likely to identify a protagonist who, like Jim Carrey’s Truman Burbank, genuinely has stumbled onto a carefully orchestrated secret of which those around him are blandly unaware. These stories obviously resonate with our technology-saturated modernity. What’s less clear is why they so readily adopt a perspective that was, until recently, a hallmark of radical estrangement from reality. Does this suggest that media technologies are making us all paranoid? Or that paranoid delusions suddenly make more sense than they used to? 
The first person to examine the curiously symbiotic relationship between new technologies and the symptoms of psychosis was Victor Tausk, an early disciple of Sigmund Freud. In 1919, he published a paper on a phenomenon he called ‘the influencing machine’. Tausk had noticed that it was common for patients with the recently coined diagnosis of schizophrenia to be convinced that their minds and bodies were being controlled by advanced technologies invisible to everyone but them. These ‘influencing machines’ were often elaborately conceived and predicated on the new devices that were transforming modern life. Patients reported that they were receiving messages transmitted by hidden batteries, coils and electrical apparatus; voices in their heads were relayed by advanced forms of telephone or phonograph, and visual hallucinations by the covert operation of ‘a magic lantern or cinematograph’. Tausk’s most detailed case study was of a patient named ‘Natalija A’, who believed that her thoughts were being controlled and her body manipulated by an electrical apparatus secretly operated by doctors in Berlin. The device was shaped like her own body, its stomach a velvet-lined lid that could be opened to reveal batteries corresponding to her internal organs. Although these beliefs were wildly delusional, Tausk detected a method in their madness: a reflection of the dreams and nightmares of a rapidly evolving world. Electric dynamos were flooding Europe’s cities with power and light, their branching networks echoing the filigree structures seen in laboratory slides of the human nervous system. New discoveries such as X-rays and radio were exposing hitherto invisible worlds and mysterious powers that were daily discussed in popular science journals, extrapolated in pulp fiction magazines and claimed by spiritualists as evidence for the ‘other side’. But all this novelty was not, in Tausk’s view, creating new forms of mental illness. Rather, modern developments were providing his patients with a new language to describe their condition. At the core of schizophrenia, he argued, was a ‘loss of ego-boundaries’ that made it impossible for subjects to impose their will on reality, or to form a coherent idea of the self. Without a will of their own, it seemed to them that the thoughts and words of others were being forced into their heads and issued from their mouths, and their bodies were manipulated like puppets, subjected to tortures or arranged in mysterious postures. These experiences made no rational sense, but those who suffered them were nevertheless subject to what Tausk called ‘the need for causality that is inherent in man’. They felt themselves at the mercy of malign external forces, and their unconscious minds fashioned an explanation from the material to hand, often with striking ingenuity. Unable to impose meaning on the world, they became empty vessels for the cultural artefacts and assumptions that swirled around them. By the early 20th century, many found themselves gripped by the conviction that some hidden operator was tormenting them with advanced technology. Tausk’s theory was radical in its implication that the utterances of psychosis were not random gibberish but a bricolage, often artfully constructed, of collective beliefs and preoccupations. 
Throughout history up to this point, the explanatory frame for such experiences had been essentially religious: they were seen as possession by evil spirits, divine visitations, witchcraft, or snares of the devil. In the modern age, these beliefs remained common, but alternative explanations were now available. The hallucinations experienced by psychotic patients, Tausk observed, are not typically three-dimensional objects but projections ‘seen on a single plane, on walls or windowpanes’. The new technology of cinema replicated this sensation precisely and was in many respects a rational explanation of it: one that ‘does not reveal any error of judgment beyond the fact of its non-existence’. In their instinctive grasp of technology’s implicit powers and threats, influencing machines can be convincingly futuristic and even astonishingly prescient. The very first recorded case, from 1810, was a Bedlam inmate named James Tilly Matthews who drew exquisite technical drawings of the machine that was controlling his mind. The ‘Air Loom’, as he called it, used the advanced science of his day — artificial gases and mesmeric rays — to direct invisible currents into his brain, where a magnet had been implanted to receive them. Matthews’s world of electrically charged beams and currents, sheer lunacy to his contemporaries, is now part of our cultural furniture. A quick internet search reveals dozens of online communities devoted to discussing magnetic brain implants, both real and imagined. The Gold brothers’ interpretation of the Truman Show delusion runs along similar lines. It might appear to be a new phenomenon that has emerged in response to our hypermodern media culture, but is in fact a familiar condition given a modern makeover. They make a primary distinction between the content of delusions, which is spectacularly varied and imaginative, and the basic forms of delusion, which they characterise as ‘both universal and rather small in number’. Persecutory delusions, for example, can be found throughout history and across cultures; but within this category a desert nomad is more likely to believe that he is being buried alive in sand by a djinn, and an urban American that he has been implanted with a microchip and is being monitored by the CIA. ‘For an illness that is often characterised as a break with reality,’ they observe, ‘psychosis keeps remarkably up to date.’ Rather than being estranged from the culture around them, psychotic subjects can be seen as consumed by it: unable to establish the boundaries of the self, they are at the mercy of their often heightened sensitivity to social threats. In this interpretation, the Truman Show delusion is a contemporary expression of a common form of delusion: the grandiose. Those experiencing the onset of psychosis often become convinced that the world has undergone a subtle shift, placing them at centre-stage in a drama of universal proportions. Everything is suddenly pregnant with meaning, every tiny detail charged with personal significance. The people around you are often complicit: playing pre-assigned roles, testing you or preparing you for an imminent moment of revelation. Such experiences have typically been interpreted as a divine visitation, a magical transformation or an initiation into a higher level of reality. 
It is easy to imagine how, if they descended on us without warning today, we might jump to the conclusion that the explanation was some contrivance of TV or social media: that, for some deliberately concealed reason, the attention of the world had suddenly focused on us, and an invisible public was watching with fascination to see how we would respond. The Truman Show delusion, then, needn’t imply that reality TV is either a cause or a symptom of mental illness; it might simply be that the pervasive presence of reality TV in our culture offers a plausible explanation for otherwise inexplicable sensations and events. Although the formation of delusions is unconscious and often a response to profound trauma, the need to construct plausible scenarios gives it many commonalities with the process of writing fiction. On rare occasions the two overlap. In 1954, the English novelist Evelyn Waugh suffered a psychotic episode during which he thought he was persecuted by a cast of disembodied voices who were discussing his personality defects and spreading malicious rumours about him. He became convinced that the voices were being orchestrated by the producers of a recent BBC radio interview, whose questions he had found impertinent; he explained their ability to follow him wherever he went by invoking some hidden technology along the lines of a radionics ‘black box’, an enthusiasm of one of his neighbours. His delusions became increasingly florid but, as Waugh described it later, ‘it was not in the least like losing one’s reason… I was rationalising all the time, it was simply one’s reason working hard on the wrong premises.’ Waugh turned the experience into a brilliant comic novel, The Ordeal of Gilbert Pinfold (1957). Its protagonist is a pompous but brittle writer in late middle age, whose paranoia about the modern world is fed by an escalating regime of liqueurs and sedatives until it erupts in full-blown persecution mania (a familiar companion for Waugh, who abbreviated it discreetly to ‘pm’ in letters to his wife). Although the novel smoothes the edges of Waugh’s bizarre associations and winks knowingly at Pinfold’s surreal predicament, the fictionalisation blurs into the narrative that emerged during Waugh’s psychosis: even for his close friends, it was impossible to tell exactly where the first ended and the second began. By the time that Gilbert Pinfold was published, narratives of paranoia and psychosis were starting to migrate from psychiatry into popular culture, and first-person memoirs of mental illness were appearing as mass-market paperbacks. The memoir Operators and Things: The Inner Life of a Schizophrenic (1958), written under the pseudonym of Barbara O’Brien, told the remarkable story of a young woman pursued across America on Greyhound buses by a shadowy gang of ‘operators’ with a mind-controlling ‘stroboscope’, but was presented and packaged like a sci-fi thriller. Conversely, thrillers were incorporating plot lines that assumed the reality of mind-controlling technologies. Richard Condon’s best-selling novel The Manchurian Candidate (1959) turned on the premise that a hypnotised subject might be programmed to respond unconsciously to pre-arranged cues. In the book’s memorable and, with hindsight, eerily prescient climax, an unwitting agent is triggered to assassinate the US president. 
Condon’s deadpan satire was informed by Cold War anxieties about brainwashing and communist infiltration, but it also drew upon recent popular exposés of the ‘subliminal’ techniques of advertising, such as The Hidden Persuaders (1958) by Vance Packard. It was expertly pitched into the disputed territory of psychology’s black arts: a paranoid tale for paranoid times, which still informs a thriving netherworld of internet-driven conspiracy theories. Perhaps the emergence of the influencing machine into modern fiction can be most clearly traced through the career and afterlife of Philip K Dick, who combined the profession of prolific pulp novelist with an intense hypochondriacal fascination with psychotic disorders. He diagnosed himself as both paranoid and schizophrenic at various times, and included schizophrenic characters in his fiction; many of his novels and short stories have a closer kinship with memoirs of mental illness than with the robots-and-spaceships tales of his sci-fi contemporaries. They play out restless iterations of the idea that consensus reality is in fact the construct of some form of influencing machine: a simulation designed to test our behaviour, a set of memories generated artificially to maintain us in our daily routines, a consumer fantasy sold to us by power-hungry corporations or obligingly furnished by mind-reading extraterrestrials. Dick’s novel Time Out of Joint came out the same year as The Manchurian Candidate and was a clear ancestor of The Truman Show. Its protagonist, Ragle Gumm, inhabits a bland suburban world that is gradually revealed to be a military simulation; the sole purpose of the set-up is to keep Gumm happily playing what he believes to be a battleship puzzle in the daily paper, while in reality his solutions are directing missile strikes in a war of which he is kept unaware. Throughout his lifetime, Dick remained a cult author. His devoted but limited fan base prized his work for its uncompromising weirdness, never imagining that it might be assimilated into the popular mainstream. Indeed, after a series of visionary episodes in 1974, which he elaborated into a complex personal theology, Dick’s work became still more hermetic, remote even to his core sci-fi readership. He died in 1982, just as his novel Do Androids Dream of Electric Sheep? (1968) was being adapted into Ridley Scott’s Blade Runner, its storyline soft-pedalled by a studio that believed audiences would reject the climactic revelation that its protagonist was himself an android. Subsequent film adaptations of Dick’s work, such as Paul Verhoeven’s Total Recall (1990), also toned down the radical reality switches of the source, limiting them to an opening set-up before settling into a final reel of uncomplicated action. In 1999, however, The Matrix struck box-office gold with a script that presented a classic Dickian influencing machine in stark and undiluted form. An inquisitive hacker stumbles onto the ultimate secret: the so-called ‘real world’ is a simulation, concealing a reality in which all humanity has been enslaved and harvested by machines for centuries. Buttressed by reams of dialogue exploring the scenario’s existential implications, here was precisely what Hollywood executives previously assumed audiences hated: filmmakers playing smart with their audiences, pulling the narrative rug from under their feet, even toying with the fourth wall of the drama. 
And yet it was a sensational success, resonating far beyond the multiplex and inserting its memes deep into a wider culture that was now hosted by the internet. As the American screenwriter William Goldman observed in his memoir Adventures in the Screen Trade (1983), in the movie business, nobody knows anything. It might be that a similarly bold metafiction could have been successful years earlier, but it feels more likely that the cultural impact of The Matrix reflected the ubiquity that interactive and digital media had achieved by the end of the 20th century. This was the moment at which the networked society reached critical mass: the futuristic ideas that, a decade before, were the preserve of a vanguard who read William Gibson’s cyberspace novels or followed the bleeding-edge speculations of the cyberculture magazine Mondo 2000, now became part of the texture of daily life for a global and digital generation. The head-spinning pretzel logic that had confined Philip K Dick’s appeal to the cult fringes a generation earlier was now accessible to a mass audience. Suddenly, there was a public appetite for convoluted allegories that dissolved the boundaries between the virtual and the real. When James Tilly Matthews drew the invisible beams and rays of the Air Loom in his Bedlam cell, he was describing a world that existed only in his head. But his world is now ours: we can no longer count all the invisible rays, beams and signals that are passing through our bodies at any moment. Victor Tausk argued that the influencing machine emerged from a confusion between the outside world and private mental events, a confusion resolved when the patient invented an external cause to make sense of his thoughts, dreams and hallucinations. But the modern world of television and computers, the virtual and the interactive, blurs traditional distinctions between perception and reality. When we watch live sporting events on giant public screens or follow breaking news stories in our living rooms, we are only receiving flickering images, yet our hearts beat in synchrony with millions of unseen others. We Skype with two-dimensional facsimiles of our friends, and model idealised versions of ourselves for our social profiles. Avatars and aliases allow us to commune at once intimately and anonymously. Multiplayer games and online worlds allow us to create customised realities as all-embracing as The Truman Show. Leaks and exposés continually undermine our assumptions about what we are revealing and to whom, how far our actions are being monitored and our thoughts being transmitted. We manipulate our identities and are manipulated by unknown others. We cannot reliably distinguish the real from the fake, or the private from the public. In the 21st century, the influencing machine has escaped from the shuttered wards of the mental hospital to become a distinctive myth for our times. It is compelling not because we all have schizophrenia, but because reality has become a grey scale between the external world and our imaginations. The world is now mediated partly by technologies that fabricate it and partly by our own minds, whose pattern-recognition routines work ceaselessly to stitch digital illusions into the private cinema of our consciousness. The classical myths of metamorphosis explored the boundaries between humanity and nature and our relationship to the animals and the gods.
Likewise, the fantastical technologies that were once the hallmarks of insanity enable us to articulate the possibilities, threats and limits of the tools that are extending our minds into unfamiliar dimensions, both seductive and terrifying. | Mike Jay | https://aeon.co//essays/a-culture-of-hyper-reality-made-paranoid-delusions-true | |
Consciousness and altered states | Consciousness is the ‘hard problem’, the one that confounds science and philosophy. Has a new theory cracked it? | Scientific talks can get a little dry, so I try to mix it up. I take out my giant hairy orangutan puppet, do some ventriloquism and quickly become entangled in an argument. I’ll be explaining my theory about how the brain — a biological machine — generates consciousness. Kevin, the orangutan, starts heckling me. ‘Yeah, well, I don’t have a brain. But I’m still conscious. What does that do to your theory?’ Kevin is the perfect introduction. Intellectually, nobody is fooled: we all know that there’s nothing inside. But everyone in the audience experiences an illusion of sentience emanating from his hairy head. The effect is automatic: being social animals, we project awareness onto the puppet. Indeed, part of the fun of ventriloquism is experiencing the illusion while knowing, on an intellectual level, that it isn’t real. Many thinkers have approached consciousness from a first-person vantage point, the kind of philosophical perspective according to which other people’s minds seem essentially unknowable. And yet, as Kevin shows, we spend a lot of mental energy attributing consciousness to other things. We can’t help it, and the fact that we can’t help it ought to tell us something about what consciousness is and what it might be used for. If we evolved to recognise it in others – and to mistakenly attribute it to puppets, characters in stories, and cartoons on a screen — then, despite appearances, it really can’t be sealed up within the privacy of our own heads. Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem. I believe that the easy and the hard problems have gotten switched around. The sheer scale and complexity of the brain’s vast computations makes the easy problem monumentally hard to figure out. How the brain attributes the property of awareness to itself is, by contrast, much easier. If nothing else, it would appear to be a more limited set of computations. In my laboratory at Princeton University, we are working on a specific theory of awareness and its basis in the brain. Our theory explains both the apparent awareness that we can attribute to Kevin and the direct, first-person perspective that we have on our own experience. And the easiest way to introduce it is to travel about half a billion years back in time. In a period of rapid evolutionary expansion called the Cambrian Explosion, animal nervous systems acquired the ability to boost the most urgent incoming signals. Too much information comes in from the outside world to process it all equally, and it is useful to select the most salient data for deeper processing. Even insects and crustaceans have a basic version of this ability to focus on certain signals. Over time, though, it came under a more sophisticated kind of control — what is now called attention. 
Attention is a data-handling method, the brain’s way of rationing its processing resources. It has been found and studied in a lot of different animals. Mammals and birds both have it, and they diverged from a common ancestor about 350 million years ago, so attention is probably at least that old. Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention. I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations. And then what? Just as fins evolved into limbs and then into wings, the capacity for awareness probably changed and took on new functions over time. For example, the attention schema might have allowed the brain to integrate information on a massive new scale. If you are attending to an apple, a decent model of that state would require representations of yourself, the apple, and the complicated process of attention that links the two. An internal model of attention therefore collates data from many separate domains. In so doing, it unlocks enormous potential for integrating information, for seeing larger patterns, and even for understanding the relationship between oneself and the outside world. Such a model also helps to simulate the minds of other people. We humans are continually ascribing complex mental states — emotions, ideas, beliefs, action plans — to one another. But it is hard to credit John with a fear of something, or a belief in something, or an intention to do something, unless we can first ascribe an awareness of something to him. Awareness, especially an ability to attribute awareness to others, seems fundamental to any sort of social capability. It is not clear when awareness became part of the animal kingdom’s social toolkit. Perhaps birds, with their well-developed social intelligence, have some ability to attribute awareness to each other. Perhaps the social use of awareness expanded much later, with the evolution of primates about 65 million years ago, or even later, with our own genus Homo, a little over two million years ago. Whenever it arose, it clearly plays a major role in the social capability of modern humans.
We paint the world with perceived consciousness. Family, friends, pets, spirits, gods and ventriloquist’s puppets — all appear before us suffused with sentience. But what about the inside view, that mysterious light of awareness accessible only to our innermost selves? A friend of mine, a psychiatrist, once told me about one of his patients. This patient was delusional: he thought that he had a squirrel in his head. Odd delusions of this nature do occur, and this patient was adamant about the squirrel. When told that a cranial rodent was illogical and incompatible with physics, he agreed, but then went on to note that logic and physics cannot account for everything in the universe. When asked whether he could feel the squirrel — that is to say, whether he suffered from a sensory hallucination — he denied any particular feeling about it. He simply knew that he had a squirrel in his head. We can ask two types of questions. The first is rather foolish but I will spell it out here. How does that man’s brain produce an actual squirrel? How can neurons secrete the claws and the tail? Why doesn’t the squirrel show up on an MRI scan? Does the squirrel belong to a different, non-physical world that can’t be measured with scientific equipment? This line of thought is, of course, nonsensical. It has no answer because it is incoherent. The second type of question goes something like this. How does that man’s brain process information so as to attribute a squirrel to his head? What brain regions are involved in the computations? What history led to that strange informational model? Is it entirely pathological or does it in fact do something useful? So far, most brain-based theories of consciousness have focused on the first type of question. How do neurons produce a magic internal experience? How does the magic emerge from the neurons? The theory that I am proposing dispenses with all of that. It concerns itself instead with the second type of question: how, and for what survival advantage, does a brain attribute subjective experience to itself? This question is scientifically approachable, and the attention schema theory supplies the outlines of an answer. One way to think about the relationship between brain and consciousness is to break it down into two mysteries. I call them Arrow A and Arrow B. Arrow A is the mysterious route from neurons to consciousness. If I am looking at a blue sky, my brain doesn’t merely register blue as if I were a wavelength detector from Radio Shack. I am aware of the blue. Did my neurons create that feeling? Arrow B is the mysterious route from consciousness back to the neurons. Arrow B attracts much less scholarly attention than Arrow A, but it is just as important. The most basic, measurable, quantifiable truth about consciousness is simply this: we humans can say that we have it. We can conclude that we have it, couch that conclusion into language and then report it to someone else. Speech is controlled by muscles, which are controlled by neurons. Whatever consciousness is, it must have a specific, physical effect on neurons, or else we wouldn’t be able to communicate anything about it. Consciousness cannot be what is sometimes called an epiphenomenon — a floating side-product with no physical consequences — or else I wouldn’t have been able to write this article about it. Any workable theory of consciousness must be able to account for both Arrow A and Arrow B.
Most accounts, however, fail miserably at both. Suppose that consciousness is a non-physical feeling, an aura, an inner essence that arises somehow from a brain or from a special circuit in the brain. The ‘emergent consciousness’ theory is the most common assumption in the literature. But how does a brain produce the emergent, non-physical essence? And even more puzzling, once you have that essence, how can it physically alter the behaviour of neurons, such that you can say that you have it? ‘Emergent consciousness’ theories generally stake everything on Arrow A and ignore Arrow B completely. The attention schema theory does not suffer from these difficulties. It can handle both Arrow A and Arrow B. Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. There is no need for anything to be transmuted into ghost material, thought about, and then transmuted back to the world of cause and effect. Some people might feel disturbed by the attention schema theory. It says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information. I admit that the theory does not feel satisfying; but a theory does not need to be satisfying to be true. And indeed, the theory might be able to explain a few other common myths that brains tell themselves. What about out-of-body experiences? The belief that awareness can emanate from a person’s eyes and touch someone else? That you can push on objects with your mind? That the soul lives on after the death of the body? One of the more interesting aspects of the attention schema theory is that it does not need to turn its back on such persistent beliefs. It might even explain their origin. The heart of the theory, remember, is that awareness is a model of attention, like the general’s model of his army laid out on a map. The real army isn’t made of plastic, of course. It isn’t quite so small, and has rather more moving parts. In these respects, the model is totally unrealistic. And yet, without such simplifications, it would be impractical to use. If awareness is a model of attention, how is it simplified? How is it inaccurate? Well, one easy way to keep track of attention is to give it a spatial structure — to treat it like a substance that flows from a source to a target. In reality, attention is a data-handling method used by neurons. It isn’t a substance and it doesn’t flow. But it is a neat accounting trick to model attention in that way; it helps to keep track of who is attending to what. 
And so the intuition of ghost material — of ectoplasm, mind stuff that is generated inside us, that flows out of the eyes and makes contact with things in the world — makes some sense. Science commonly regards ghost-ish intuitions as the result of ignorance, superstition, or faulty intelligence. In the attention schema theory, however, they are not simply ignorant mistakes. Those intuitions are ubiquitous among cultures because we humans come equipped with a handy, simplified model of attention. That model informs our intuitions. What are out-of-body experiences then? One view might be that no such things exist, that charlatans invented them to fool us. Yet such experiences can be induced in the lab, as a number of scientists have now shown. A person can genuinely be made to feel that her centre of awareness is disconnected from her body. The very existence of the out-of-body experience suggests that awareness is a computation and that the computation can be disrupted. Systems in the brain not only compute the information that I am aware, but also compute a spatial framework for it, a location, and a perspective. Screw up the computations, and I screw up my understanding of my own awareness. And here is yet another example: why do so many people believe that we see by means of rays that come out of the eyes? The optical principle of vision is well understood and is taught in elementary school. Nevertheless, developmental psychologists have known for decades that children have a predisposition to the opposite idea, the so-called ‘extramission theory’ of vision. And not only children: a study by the psychologist Gerald Winer and colleagues at Ohio State University in 2002 found that about half of American college students also think that we see because of rays that come out of the eyes. Our culture, too, is riddled with the extramission theory. Superman has X-ray vision that emanates from his eyes toward objects. The Terminator has red glowing eyes. Many people believe that they can feel a subtle heat when someone is staring at them. Why should a physically inaccurate description of vision be so persistent? Perhaps because the brain constructs a simplified, handy model of attention in which there is such a thing as awareness, an invisible, intangible stuff that flows from inside a person out to some target object. We come pre-equipped with that intuition, not because it is physically accurate but because it is a useful model. Many of our superstitions — our beliefs in souls and spirits and mental magic — might emerge naturally from the simplifications and shortcuts the brain takes when representing itself and its world. This is not to say that humans are necessarily trapped in a set of false beliefs. We are not forced by the built-in wiring of the brain to be superstitious, because there remains a distinction between intuition and intellectual belief. In the case of ventriloquism, you might have an unavoidable gut feeling that consciousness is emanating from the puppet’s head, but you can still understand that the puppet is in fact inanimate. We have the ability to rise above our immediate intuitions and predispositions. Let’s turn now to a final — alleged — myth. One of the long-standing questions about consciousness is whether it really does anything. Is it merely an epiphenomenon, floating uselessly in our heads like the heat that rises up from the circuitry of a computer?
Most of us intuitively understand it to be an active thing: it helps us to decide what to do and when. And yet, at least some of the scientific work on consciousness has proposed the opposite, counter-intuitive view: that it doesn’t really do anything at all; that it is the brain’s after-the-fact story to explain itself. We act reflexively and then make up a rationalisation. There is some evidence for this post-hoc notion. In countless psychology experiments, people are secretly manipulated into making certain choices — picking green over red, pointing left instead of right. When asked why they made the choice, they confabulate. They make up reasons that have nothing to do with the truth, known only to the experimenter, and they express great confidence in their bogus explanations. It seems, therefore, that at least some of our conscious choices are rationalisations after the fact. But if consciousness is a story we tell ourselves, why do we need it? Why are we aware of anything at all? Why not just be skilful automata, without the overlay of subjectivity? Some philosophers think we are automata and just don’t know it. This idea that consciousness has no leverage in the world, that it’s just a rationalisation to make us feel better about ourselves, is terribly bleak. It runs against most people’s intuitions. Some people might confuse the attention schema theory with that nihilistic view. But the theory is almost exactly the opposite. It is not a theory about the uselessness or non-being of consciousness, but about its central importance. Why did an awareness of stuff evolve in the first place? Because it had a practical benefit. The purpose of the general’s plastic model army is to help direct the real troops. Likewise, according to the theory, the function of awareness is to model one’s own attentional focus and control one’s behaviour. In this respect, the attention schema theory is in agreement with the common intuition: consciousness plays an active role in guiding our behaviour. It is not merely an aura that floats uselessly in our heads. It is a part of the executive control system. In fact, the theory suggests that even more crucial and complex functions of consciousness emerged through evolution, and that they are especially well-developed in humans. To attribute awareness to oneself, to have that computational ability, is the first step towards attributing it to others. That, in turn, leads to a remarkable evolutionary transition to social intelligence. We live embedded in a matrix of perceived consciousness. Most people experience a world crowded with other minds, constantly thinking and feeling and choosing. We intuit what might be going on inside those other minds. This allows us to work together: it gives us our culture and meaning, and makes us successful as a species. We are not, despite certain appearances, trapped alone inside our own heads. And so, whether or not the attention schema theory turns out to be the correct scientific formulation, a successful account of consciousness will have to tell us more than how brains become aware. It will also have to show us how awareness changes us, shapes our behaviour, interconnects us, and makes us human. | Michael Graziano | https://aeon.co//essays/how-consciousness-works-and-why-we-believe-in-ghosts | |
Architecture | It might be a piece of paradise, a refuge from urban mayhem – but can a garden embody something deeper and wilder? | I am not a gardener. A few square metres of paved back yard offered me little scope for horticulture as a London child. Faced with a larger square of neglected lawn that came with a later house, my parents were helpless and I absorbed their bafflement. Yet I am passionate about gardens. They promise escape and comfort, a place to read and heady pleasures for the senses. In however small a space, gardens offer a connection to the larger natural world for which we hanker. As a non-gardener in a garden, you take your ease; but you also journey back to some more demanding, instinctive self who requires a glimpse of a lost paradise — ‘the flowery lap of some irriguous valley’ — as the poet John Milton had it, when Adam and Eve walked with God in the evening. The British, of course, are mad for gardens — a hunger that is fed by countless gardening centres, gardening books, television programmes and newspaper columns, besides BBC Radio 4’s peerless Gardeners’ Question Time. So it is bewildering to note quite how disappointing many prize-winning gardens are. I’ve felt brutally let down by the annual Chelsea Flower Show in London, to which I’ve traipsed several times, only to find myself surrounded by bursting blooms, overcrowded and overripe, governed by a rebarbative design aesthetic. Once I stumbled upon Tom Stuart-Smith’s evocation of a hazel coppice in spring, a grove of flowering dogwoods with woodland plants beneath — phlox, tiarellas, pale-blue cranesbills and others I cannot name. But that was a daring exception to the dreary rule. Chelsea is not an appropriate forum for the kind of planting and landscaping that you find at large country houses — like Rosemary Verey’s garden at Barnsley House in the Cotswolds, with its blend of formal lawns and ancient meadows; or Christopher Lloyd’s extended cottage garden, with its immense mixed borders and flowering meadows surrounding the manor at Great Dixter in East Sussex. But I’m puzzled that the poetry of gardening seems so elusive in formal contexts such as Chelsea. Gardening is an art, certainly, with rules and disciplines and many different tones, philosophies and styles. But there must be room for mystery, too, for what is artfully hidden as much as what is on display. It seems to me that the show gardens at Chelsea appeal to the botanist, the box-ticker, the admirer of individual species, rather than the lost souls of urban England looking for a solace provided by gardens that heal the rift between nature and man, or between the busy mind (eager to seize on geometries and concepts) and the dreaming subconscious. So what is the hidden hunger that drives us all, not just the professional botanist, into the garden? What fantasies are we playing out in our various idealisations and realisations of gardens? I’ve been to gardens where my heart quickens and I recognise what I am looking for.
A corner where old-fashioned rambling roses fly blowsily about a careful arch constructed from tangled sycamore, beech and ash; a meadow full of paint-bright wild flowers; a crooked line of worn stone steps fringed with ivy, running beside a silvery stream leading to a chilly pond with yellow flag irises and marsh marigolds; a colourful border with scented stocks and potentilla, delphiniums, peonies and cistus flaring for the day; parades of purple lavender blowing a French summer towards you. There’s the majesty of a classical 18th-century garden such as Stourhead in the autumn, and the nostalgic floribundance of Sissinghurst, created in the 1930s by the writer Vita Sackville-West. There is a delight in a neat vegetable garden leading from cabbages to courgettes to the crossed bamboo canes training runner beans, and beyond, under nets, the raspberries. There is the beguiling allure of wayward profusions of nasturtiums spilling from a window box, or a sequence of square box hedges in the mist, the dew at their ankles hinting at the mysteries of all plant growth — from water and earth to these silent sentinels. Such gardens hint at stories, summon memories, suggest without hectoring; and in all of them the landscaping, the curves of shrubbery in a lawn, the terracing or layout of paths, the walls and gateways are as significant as the planting. These seem spaces entirely created to serve an inner hankering. In his book Arcadia: England and the Dream of Perfection (2009), Adam Nicolson notes that this kind of gardening was born in Mesopotamia around 4000BC, alongside the earliest urban civilisations. Besides towns, temples and ziggurats, the ancient Sumerians built irrigation channels which brought water from lush marshy areas to the dry flatlands where cities grew. Succeeded by the Akkadian (c. 2380BC), Babylonian (c. 1900BC) and Assyrian (c. 1400BC) empires, the Sumerian vision was preserved in the evocations of gardens and groves found in the Akkadian Epic of Gilgamish. These enclosed outdoor spaces became images of watered perfection within the imperfect hubbub of the city. Indeed the Persian word for ‘an enclosure’, pairidaeza, is the source for our word ‘paradise’. We use gardens to cultivate food and medicine, and to extend interior design into the ‘outside room’, but for more than 2,000 years gardens have also allowed us to pursue a notion of abundance, ease and sensual beauty, where the cares of the world can be temporarily forgotten — even if the garden is as far removed from the realities of raw wilderness as from the human jungle of civic society. Nicolson is a grandson of Sackville-West and the husband of the gardener Sarah Raven; together they live at Sissinghurst Castle. In his thoughtful book he explains how the idea of Arcadia fused a Greek vision of bucolic, pipe-playing innocence, with the evocation, by the Roman poet Virgil, of a beneficent, sensuous pastoral landscape, available to his leisured shepherds and shepherdesses as a setting for love and poetry. In England, the idea of Arcadia reached its ripest expression in the Renaissance. It is there in the poetry of Edmund Spenser and Sir Philip Sidney, whose prose work Arcadia (1580) was written for his sister, Mary Herbert, the Countess of Pembroke, while he was staying at her idyllic country estate, Wilton in Wiltshire. 
Here the idea of Arcadia was embodied as much in the landscape and gardens of the estate as in the conduct of life — the apparently timeless feudal relations between the lord of the manor and his tenants. The idea persists in Shakespeare’s play As You Like It, as in the writings of Ben Jonson. And, at the tail end of its brilliant trajectory, Andrew Marvell writes in his poem ‘The Garden’ (c. 1650) how ‘the mind, from pleasures less,/Withdraws into its happiness … Annihilating all that’s made/To a green thought in a green shade.’ Marvell wrote the poem in the seclusion of his patron’s house at Nun Appleton, in Yorkshire — where the Commonwealth-supporting Lord General Fairfax had retired temporarily from the bitter political fray. Marvell’s garden is thus a place of refuge and healing from the psychological trauma of civil war. By the time Milton published Paradise Lost in 1667, our mortal link with this Arcadian vision had been broken. This ‘happy rural seat’ in Eden — where ‘Flowers worthy of Paradise, which not nice Art/In beds and curious knots, but Nature boon/Poured forth profuse on hill, and dale, and plain’ — was irrecoverable. This highly wrought imaginative ideal was always fragile. And that, Nicolson notes, was part of its charm. The French Baroque painter Nicolas Poussin arranged his handsome shepherds in their golden landscape around a tomb with the enigmatic words inscribed, Et in Arcadia Ego, reminding us that even in Arcadia there is death. Gardens are at their most delicious at sunset, just as the light turns fiery and fades. Peonies reach globed perfection just before they fall apart. As we lie in a flower-bordered dell, the midges come to bite. It is no wonder that it is so hard to find the garden that matches our imagination. As soon as you think you have it, reality — often dark, sometimes predatory — kicks in. In 1819, John Keats, sitting in a Hampstead garden, recorded that it was only while the nightingale sang that he could sustain the mental picture: ‘I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs,/But, in embalmed darkness, guess each sweet/Wherewith the seasonable month endows/The grass, the thicket, and the fruit-tree wild;/White hawthorn, and the pastoral eglantine;/Fast fading violets cover’d up in leaves;/And mid-May’s eldest child,/The coming musk-rose, full of dewy wine,/The murmurous haunt of flies on summer eves.’ For gardeners wrestling with the brute circumstances of rough earth, weeds, insects, and weather, it is impossible to compete with the images conjured in poetry. Yet these images are largely responsible for shaping our hopes of what gardens might be. Even Keats, whose brother was dying, knew only too well that Arcadia is not possible on any patch of our Earth: gardening, like poetry, offers a brief respite from ‘the fever, and the fret’ but no permanent escape. In the writings of the psychoanalyst Carl Jung, the Garden of Eden represents childhood — a period of life when nature and instinct give unstintingly. Gradually, however, through childhood, adolescence and early adulthood, nature gives us over to consciousness and culture, effectively turning us out of the garden in a secular echo of the biblical fall. As Jung described the process in his 1930 essay ‘The Stages of Life’: ‘we are forced to resort to conscious decisions and solutions where formerly we trusted ourselves to natural happenings.
Every problem, therefore, brings the possibility of a widening of consciousness, but also the necessity of saying goodbye to childlike unconsciousness and trust in nature.’ Jung believed that we are culturally haunted by this loss: ‘The biblical fall of man presents the dawn of consciousness as a curse.’ And so we long to return to the garden: ‘Something in us wishes to remain a child, to be unconscious or, at most, conscious only of the ego; to reject everything strange, or else subject it to our will; to do nothing, or else indulge our own craving for pleasure or power.’ Undoubtedly, gardens offer just such an opportunity for some people, where gardening is a kind of play — a chance to fashion an entire kingdom to suit your own unfettered fancy. For others, it is about a darker kind of subjugation. For the Victorian critic John Ruskin, gardening was certainly about the desire to return to his childhood and recreate the experience of Edenic bliss. In Modern Painters, Volume III (1860), Ruskin wrote: ‘The first thing which I remember as an event in my life was being taken by my nurse to the brow of Friar’s Crag on Derwentwater.’ This was the childhood memory to which he clung when, in 1871, he bought the paradisiacal, but dilapidated, Brantwood estate on Coniston Water in the Lake District — ‘near the lake-beach on which I used to play when I was seven years old’. Ruskin set about restoring the house and gardens. There had been formative visits to the Lake District with his parents when very young, which burned that landscape on his imagination, but otherwise he had grown up in south London, at Herne Hill and Denmark Hill, in houses with substantial gardens that he and his mother tended. Years later, he reported that, aged four, this ‘little domain answered every purpose of Paradise to me’. At Brantwood, Ruskin set about the combined strategy of creating a wild flower and rock garden (William Robinson had just published his hugely influential book, The Wild Garden, in 1870) and importing the apple trees and cherry trees that he had loved in Herne Hill. As he succumbed increasingly to mental breakdown, Brantwood became his sanctuary. But gardens can also be a place for discovering the dark and the strange, the ‘not-I’ that must become the ‘also-I’. Think of such classic novels for children as The Secret Garden (1911) by Frances Hodgson Burnett, or Tom’s Midnight Garden (1958) by Philippa Pearce, where gardens are places for cultivating adult resourcefulness as well as roses, and confronting demons as much as those dreaded weeds. As Andrew Marvell noted, the garden is like Eden because it also contains the serpent. Now I live surrounded by the garden my husband grew up in: without a car or much alternative entertainment, this garden was his world. As he works now on the paths and hedges, on reinstating beds and lines of view, he knows that he is digging down into those memories. He laughs that he spent his childhood being nagged by his mother to help her; yet now that he is free to make the garden entirely to his own pleasing, he has unconsciously set about recreating a more pristine version of hers. This is very far from the square of paving I grew up with, but my response to it is governed by the yearnings first kindled back then. If not quite Arcadia, it has in muddy patches hints of bliss that go far beyond the sterile perfectionism of the show garden. | Emma Crichton Miller | https://aeon.co//essays/any-garden-i-love-must-be-wild | |
Architecture | At bookstores it’s easy to confuse the kids section for the nature section. Why are so many children’s books about animals? | There are times when I’m Skyping with my father that, for a moment or so, I confuse his image on the screen with mine. We are both grey-haired and bearded now, and though his facial wrinkles are more deltaic than mine, the resemblance between us is close enough to fool me briefly. After all, in my first memories of him, he was fully eight times my age. Now that gap has shrunk, and he is less than twice as old as me. But for the saving graces of some sort of Zeno’s paradox of ageing, I might catch up with him soon. My first memories of my father are of him reading to me. Or rather, they are of him reading to all of us, his children, seven pages each, in turn. In the earliest memories, there were three of us, later six. We would be in my sisters’ room, tucked in, with me at the tail end of my sister Anne’s bed. Clare, the eldest, was first, then Anne, and then me, each of us indifferent to the stories read to the others. Clare pulled on his earlobe, sucked her thumb, and listened. By the time my turn came, he was often sleepy but, if he nodded off, we prodded him back to his duties. I can still recall some of those early reads. There was Ben Ross Berenberg’s The Churkendoose (1946), an unfortunate creature, ambiguously part chicken, turkey, duck, and goose. There was Noel Barr’s Ned the Lonely Donkey (1952), the farmyard beast that does his best to make friends. There were also rhyming stories, bird books, and evocative tales of prehistoric giants. Some years later, my teacher, Mr O’Leary, would read J R R Tolkien’s The Hobbit (1937) in school as a reward for good behaviour. I was enchanted by the story, so my father bought me a copy, and it became the first to give me that distinctive pride that comes from possessing a special book. From my reading of The Hobbit I date my love of woodlands, a love that has shaped much of my life. Two decades later, I read to my eldest child from that same special copy. Those bedtime stories, read in the crevices of the day’s end, were meant to prepare us for a night of that twitching repose that passes for childhood sleep. But looking back on them now, the nightly stories also irrigated our imaginations, preparing us for the day that followed. They steadied us for the small tribulations of school, and primed us for expeditions to the outdoors of garden and neighbourhood and, during the weekends at least, our visits to the beaches of Dublin. Though we began with lonely donkeys, confused wildfowl, and rhymes about dinosaurs, some years later my father and I scrutinised nature guides together, learning the names of actual creatures and their habits. My father was always deeply interested in nature. As an amateur malacologist, or student of molluscs, he used to bring us to the beaches of the eastern Irish seaboard on Saturday mornings to search for shells. We would pile into his old Morris Oxford and spend the morning scouring the Dublin sands, looking for surf-heaved treasure, our guidebooks at the ready. Of course, mornings after storms were best, meaning that these tended to be wintry expeditions. We patrolled deserted strands under grey skies, just beyond the reach of the apocalyptic fingerlike chimneys of Poolbeg power station, which dominated Dublin Bay. 
I don’t recall that we were especially scientific in our collections, though, in addition to guidebooks, my father had weighty monographs on the topic about the house, monographs whose wonderful illustrations we would pore over with him. To this day, I know the Latin binomials of most molluscs of the Irish coastline. My father kept a saltwater aquarium, which I am told is difficult to maintain. The hermit crabs, my favourites, mostly kept to themselves — for such are the ways of crabs — though they would come out to devour morsels of ham. Once a year, the family bath was repurposed as a tank for raising tadpoles. The first truly scientific text I read, in fact, was on the life cycle of the frog. As their legs emerged, we would provide lollipop sticks as floating islands and they would crawl out of the shallow water upon them, recapitulating the first moments of terrestrial life. They, too, had a taste for ham. I once saw a frog emerge from our back garden and look at me as if trying to place a memory, before leaping into the street beneath an oncoming car. We kept pets and studied books on their maintenance. We had budgies, rabbits, and a tortoise named Bert. Bobby the budgie had the somewhat understandable, though wholly unforgivable, habit of clinging fiercely to his perch on one’s finger while taking a shit. His little pink feet would burn with the evacuatory strain. I don’t recall this problem mentioned in Enid Blyton’s Adventure series, books that I had devoured years earlier, where Jack’s parrot Kiki was more a helpful conversationalist than a muttering mess-maker. Bert the tortoise was a special favourite of my mother’s. He would scamper, to the best of his ability, to meet her, his nails clicking on the pavement like a nervous lover tapping on a window pane. Bert liked to have his throat scratched: he would extend his head upon the improbable stalk of his neck towards her, and my mother would oblige. He went missing one late autumn. We assumed, based on our reading of a book on tortoise care, that he had hibernated in the garden. When we found him much later, our hearts were broken, as it was clear he had upturned himself and died on his back, beyond the help of the family that cared for him. Bobby’s passing was also a torment. He was slaughtered, in his birdcage, by a cat. I did have one defence against these dismal existential events. By the time of the flattened frog, and Bert and Bobby’s sad demise, I’d been mainlined dozens of nature books that filled my head with their many details about the sterner aspects of ecological life. I expected to find death in nature. When people ask me what experiences made me want to be an environmental scientist, I usually think first of adventures with pets, shell-collecting along Dublin’s strands, maintaining the aquarium with my father, and much later, the college summers I spent collecting insects in Ireland’s national parks. But it seems clear to me now that time spent indoors, reading and being read to, had an equally powerful effect on me. Reading introduced me to nature — the sort of ordinary but wholly involving nature I encountered right outside my door. I’ve been thinking about the environmentally salutary implications of children’s books a lot lately, and not only for their value in minting the next generation of naturalists.
Richard Louv’s Last Child in the Woods (2005) has launched a movement encouraging this more sedentary generation of children to get out of doors, but I wonder if there might also be an environmental benefit to be gained by fortifying those intimate indoor moments when parents read to their children. Are there special books that parents should choose for the Great Indoors? Are there special ways to read them? Having now spent some time examining the content of contemporary children’s bookshelves — visiting the local library, compiling and analysing lists of children’s classics, chatting with friends and neighbours who have small children — I have come to the conclusion that reading about nature might be simply unavoidable, since it is hard to find kids’ books that are not about our furred and feathered friends, or their prehistoric ancestors. Of course, in an important sense, every book ever written is about nature. Even a writer as arcane and minimalist as Samuel Beckett knew he was reflecting on the environment. In Beckett’s novel The Unnamable (1953), the eponymous narrator is alone, despite his promise that ‘I shall not be alone, in the beginning.’ He goes on: ‘I am of course alone. Alone. Things have to be soon said. And how can one be sure, in the darkness?’ Beckett’s story could not be more spare, more replete with loneliness, hopelessness, emptiness and despair. But despite the stripped-down nature of the story, The Unnamable is essentially a meditation on nature: the human body and its physical and social needs, and the natural world as conjured up by mere utterance. ‘[I]n the world of nature, the world of man,’ the narrator asks, ‘where is nature, where is man, where are you, what are you seeking’. To those readers who find this work uninterpretable, John Calder, Beckett’s publisher, asks them to consider ‘how well they understand not only their own lives, but what they see when they look out at the world; how they interpret what they see, little of which could be understood anyway’. The Unnamable is thus not only about nature but is itself like an object of nature, simultaneously presenting itself and receding from human meaning. If this is the case, then the canon of nature writing could be broadened. In the end, it might be more difficult to decide which great novels aren’t environmental classics. I recently examined the lists of the top novels of the previous century to contrast the prevalence of nature as a theme in adult literature and in books for children. And indeed, given time and ingenuity, one can make the case for most novels on the adult list being a little tinged by green. But, I must confess, reach too far down the Modern list and an attempt to find the environmental significance becomes strenuous. In the end, one must concede that there aren’t that many adult novels that qualify as explicitly environmentally themed. One that certainly qualifies is Jack London’s The Call of the Wild (1903): the hero is a dog named Buck, who initially lives comfortably in California, but who is sold as an Alaskan sled dog, and adaptively sheds his domesticated traits. George Orwell’s Animal Farm (1945) qualifies too, though presumably we readers know that his primary purpose is to tutor us on fascism, not induct us into the ways of the barnyard. Yet there are no great strains of interpretation needed to find nature in children’s literature.
I performed the same analysis of comparable lists of the best children’s literature, including an especially helpful catalogue provided by TeachersFirst, a free online resource for educators in the US. I reviewed the titles in every age category and scored them for their environmental relevance. Being the father of two children, I knew many of them already, but I also reviewed those titles that were new to me. I found that a full 100 per cent of books recommended for preschoolers are environmentally themed. And not in the way that Beckett’s The Unnamable or Max Beerbohm’s Zuleika Dobson (1911) are environmental. No, these titles include The Very Hungry Caterpillar (1969) by Eric Carle, which is quite simply about a very hungry caterpillar. Brown Bear, Brown Bear, What Do You See? (1967), written by Bill Martin Jr and illustrated by Carle, is about what the said bear and other animals see. The Rainbow Fish (1992) by Marcus Pfister is about the development of social behaviour in a very colourful fish. And The Runaway Bunny (1942), written by Margaret Wise Brown and illustrated by Clement Hurd, is about a rabbit tempted to bolt from home and his mother who is determined to follow him. Nature is everywhere in the preschooler canon. The proportion seems to slip steadily as children get older. Sixty per cent of the 36 books recommended for four- to eight-year-olds feature animals, or are in other ways concerned with nature. For the nine-to-12 age group, it’s just over 50 per cent. However, it’s fair to say that all of the (admittedly much smaller selection of) books for young adults could be described as promoting environmental sensibilities in their readers. These include the aforementioned ecologically rich The Hobbit; Summer of the Monkeys (1976) by Wilson Rawls, in which a young boy attempts to return chimps to a travelling circus; and The Cay (1969) by Theodore Taylor, a survival tale set in the Caribbean Sea. When I look back at it now, my father’s choice of reading material for us was not simply an expression of his own inclinations. Sticking to the classics, it would have been impossible for him to avoid reading to us about nature. Although children’s books are emphatically nature-themed, the animals in them are anthropomorphised. Only rarely do popular books written for the youngest children provide accurate natural history information. The caterpillar, brown bear, fish, and bunny in children’s books do not behave in species-appropriate ways. One doubts, for instance, that in ordinary circumstances a colourful fish would be overly concerned with the hurt feelings of friends, nor would that fish, I suppose, engage an octopus as a life coach. It would seem that animals give voice to an adult world that wants to inculcate children with commendable virtues. Therefore, might animals play a starring role in children’s books independent of children’s particular interest in animals? After all, how better to socialise the young human animal than with tales of other well-behaved animals? In this model, as the child gets older, and becomes more successfully acculturated, there is less of a need to invoke our animal pals, which explains why we see nature fading out of children’s literature as children grow older. There is now, however, compelling evidence that children’s interest in animals might reflect innate desires of their own, rather than some adult indoctrination scheme.
The ecopsychologist Olin Eugene Myers from Western Washington University has written that, for children, ‘the animal emerges … as a truly subjective other whose immediate presence is compelling’. Vanessa LoBue and her colleagues at Rutgers University and the University of Virginia published a research paper in 2011 showing that children under four responded preferentially to live animals — fish, hamsters, snakes, and spiders — rather than to ‘interesting’ toys. The children gestured more frequently to the animals, talked more to them, and asked more questions about them, and parents encouraged this interest. Whether or not children’s identification with animals is artificially manufactured by parents, has some innate basis, or — as seems most likely — is a combination of both nature and nurture, the inclusion of animals in tales written for the purposes of instruction is an old habit. Animals perform their roles as moral educators in contemporary children’s books much in the way that they did in Aesop’s time. One of the first children’s books, John Newbery’s A Little Pretty Pocket-Book (1744), provides several fables in which animals feature. By the turn of the 20th century, anthropomorphic animals had become very popular. The Tale of Peter Rabbit (1902) by Beatrix Potter featured animals wearing clothing, and The Wind in the Willows (1908) by Kenneth Grahame continued the trend, although Potter balked at Toad combing his hair, complaining that it was a ‘mistake to fly in the face of nature’. And in today’s world of Pixar and Disney, the pattern of anthropomorphising animals for purposes of moral instruction continues unabated. There is strong evidence that having animals in a child’s life is important for that child’s moral development. A connection with live animals can increase a child’s empathy and has been the basis for a number of important children’s programmes, and even used as part of therapies for troubled children. But, so far, there has been much less attention given to the significance of fictional creatures in a child’s life. At the very least, it seems highly likely that reading stories to children about nature provides an opportunity to foster an environmental ethic. Researchers who examine reading aloud to children have shown that having open-ended discussions about the material is crucial to the effectiveness of bedtime stories. According to a 2011 paper by the early education researchers Xenia Hadjioannou of Penn State University and Eleni Loizou of the University of Cyprus: ‘True booktalks involved interactions that were in many ways reminiscent of the kinds of conversations groups of adult readers have when talking about a book: all participants work together in thinking and trying to make sense of the book through explorations, wonderings, connections, and affective responses.’ But even if all parents read stories to their children, and discuss them patiently, not every parent is environmentally literate. As what is now the US National Environmental Education Foundation stated in its report ‘Environmental Literacy in America’ (2005): ‘Most Americans believe they know more about the environment than they actually do.’ They reported that about 80 per cent of Americans rely upon incorrect or outdated myths about the environment. Often, parents might not know enough to answer the questions that arise from even the simplest of children’s books.
Since most of the books we read to our children are environmentally themed, it is clear that, without improvement in environmental literacy, parents are squandering one of the greatest opportunities they have to cultivate in their children a love for the natural world that we all depend upon. Sometimes I spot my children walking in our hometown of Evanston, Illinois, or slightly further afield, for example, ambling down a Chicago street, and for a moment or two I see them merely as young men out among the people of the world. Seeing them grown up makes me proud. The older of the two is a mathematics and philosophy major, the younger is a senior in high school and a writerly sort. Just as their grandfather did with me, I read to them almost every night of their childhood. In fact, they were read to until they were adolescents. After all, we had all of the Lord of the Rings to complete (my father had left me to my own devices before we got that far). Though I adopted some of my father’s interests in the natural world and turned this into my profession, the fact that my boys are not likely to carry on this vocation is unimportant to me. What is important is that the stories we read, and the conversations my boys had about these books with their mother and me, seem to have had some bearing on their growth as the interesting, informed, ethically sound young men that they are. They are both tender with animals and with the natural world that surrounds them. Though they share their generation’s partiality for being indoors, nevertheless they also incline towards sustainable living, though they might not use that exact term. The youngest, named Oisín, recently walked three miles in a tuxedo to his junior prom, resisting my offer to put him in a cab. All parents have their own reasons for reading to their children, but surely most suppose that it will make them smarter people, better communicators, and more ethically inclined. Many will hope, no doubt, that their children will care for both the people and the creatures of the world that surrounds them. But to achieve these objectives, parents might need to do some reading themselves, so that their children could learn something more than good manners from the enchanted beasts of bedtime. | Liam Heneghan | https://aeon.co//essays/what-do-our-children-learn-from-the-very-hungry-caterpillar | |
Ethics | Can America face up to the terrible reality of slavery in the way that Germany has faced up to the Holocaust? | On 22 December 2012, the distinguished African-American film director Spike Lee tweeted: ‘American Slavery Was Not A Sergio Leone Spaghetti Western. It Was A Holocaust.’ He ignited a small storm in the US media, saying that he would not see Quentin Tarantino’s new film Django Unchained because it was an insult to his ancestors. Less than a month later, at the Berlin premiere of the movie, Tarantino himself declared that American slavery was a Holocaust. The German media chided him for his ‘provocative’ and ‘exaggerated’ remarks, but concluded that it was the sort of thing audiences in Germany — where he is extremely popular — have come to expect from Tarantino. If someone had predicted a year ago that I would find myself writing about a Tarantino film, I would have bet a large sum against it. I didn’t even want to see one. Because it was the subject of intense discussion over issues I care about, I had dragged myself to see his previous film, Inglourious Basterds (2009), and found it bearable, but I had no interest in his other work. Tarantino seemed to suggest that you can revel in every form of violence and exploitation so long as they are depicted with skill and plenty of irony; you can take gun-dealing — arguably the lowest form of human occupation — and make it seem hip and sexy. Aren’t moral objections to this sort of thing simply uncool? But my view of Tarantino changed profoundly as I watched Django Unchained the night it opened at my local cinema in Berlin. For the film is spiked with complex references that made clear how profoundly Tarantino had been influenced by German attempts to come to terms with the shame of its criminal past. Since those attempts are not well-known to a wider American or British public, it is important to sketch them — not only to understand Tarantino’s latest work, which is barely intelligible without that background, but to address the broader questions of what other nations can learn from Germany’s struggles to address its own historical guilt. Germans have been wrestling with the question of history and guilt for more than 60 years now. Their example makes clear just how many moral questions a serious contemplation of guilt must raise for America. These include what constitutes guilt, what constitutes responsibility, and how these are connected. A common slogan of second-generation Germans has been: ‘Collective guilt, no! Collective responsibility, yes!’ But the question of what responsibility entails has been politically fraught. Does taking responsibility for a violent history demand an eternal commitment to pacifism? Or to supporting the government of Israel whatever it does, as some argue? Or rather to supporting the Palestinian people whatever they do, as others have claimed? Working through Germany’s criminal past involved confronting one’s own parents and teachers and calling their authority rotten Contemporary Germans understand collective responsibility as meaning a commitment to avoiding in the future the sins their fathers and grandfathers committed in the past — but this raises fresh moral tangles. Hitler, Himmler, Goebbels and a few others offer clear cases of guilt and responsibility: they planned and carried out crimes with malice and forethought. What about those who didn’t plan them, but merely carried them out without much thought of any kind? 
Were those who signed orders at their desks more guilty (because further up in the hierarchy) than the guards who herded naked Jews to their deaths? Or is a human being who is capable of doing that to another human being more depraved than a bureaucrat such as Eichmann, who claimed that watching a mass execution made him sick? And what about the voters who put the Nazis in power, hoping it would stop the inflation, streetfighting, and general chaos that threatened to engulf the Weimar Republic? Of course, there are those who say they worked with the Nazis in order to prevent worse things that might have happened had less scrupulous people done their jobs. There were many of these, ranging from the Jewish councils who helped prepare the lists for deportation to the State secretary of the Foreign Office, Ernst von Weizsäcker, who was instrumental in a host of crimes, but successfully argued at his trial that anyone else would have done worse. These are just a few of the moral questions that cannot be avoided when you begin to examine real, historical cases of evil. Hannah Arendt tried to tackle them, with the result that her book Eichmann in Jerusalem (1963) was surely the most vilified work of 20th-century moral philosophy. Her careful attempt to understand forms of responsibility and to disentangle responsibility from intention was misunderstood by nearly everyone, and created furore and fury even among those who had been her close friends. Perhaps it’s not surprising that so many moral philosophers since have preferred to stick to trolley problems. The German language has a word for coming to terms with past atrocities. Vergangenheitsbewältigung came into use in the 1960s to mean ‘we have to do something about our Nazi past’. Germany has spent much of the past 50 years in the excruciating process of dealing with the country’s national crimes. What does it mean to come to terms with the fact that your father, even if not a passionate Nazi, did nothing whatever to stop them, watched silently as his Jewish doctor or neighbour was deported, and shed blood in the name of their army? With very few exceptions, this was the fate of most Germans born between 1930 and 1960, and it isn’t a fate to be envied. Working through Germany’s criminal past was not an abstract exercise; it involved confronting one’s own parents and teachers and calling their authority rotten. The 1960s in Germany were more turbulent than the 1960s in Paris or Prague — not to mention Berkeley — because they were focused not on crimes committed by someone in far-off Vietnam but considerably closer to home, by the people from whom one had learnt life’s earliest lessons. The process of Vergangenheitsbewältigung functioned quite differently in the Eastern and Western zones. Nazi propaganda had been far more interested in stirring fear of the ‘Bolshevist Jewish menace’ than of any other enemies, so when the Red Army advanced towards victory in Berlin in 1945, millions of Germans moved west to escape it. Those who had been committed Nazis, or simply knew something of the 20 million Soviet citizens that the German troops had killed, were understandably afraid of becoming targets of revenge. All of this meant that, when the guns stopped firing on 8 May 1945, more Nazi sympathisers lived in the West than in the East. 
The country was divided into occupied zones, each ruled by a particular Allied military, while the Allies considered what to do with 74 million people who had committed, condoned, or ignored some of the worst crimes in human history. The Soviet and western Allies managed to co-operate long enough to carry out the Nuremberg Trials, which convicted a few of the most prominent war criminals. Both also instituted plans for re-education, which came to be known as ‘denazification’. Generally, the Soviets looked to German high culture as a source of inspiration, promoting theatre productions of the 18th-century philosemitic play Nathan der Weise (‘Nathan the Wise’) by the Enlightenment dramatist Gotthold Ephraim Lessing, while Americans leant towards lectures about freedom and democracy. The generation that had fought the war refused to talk about the past While neither effort was particularly effective, the process of denazification was promoted in the East thanks to the fact that hundreds of German communists were ready to return from exile to form the country’s leadership. The new German Democratic Republic of East Germany, created from the Soviet-governed zone in 1949, considered itself anti-Nazi. It expressed this by symbolically renaming streets, reshaping the city’s architecture along with its lesson plans, and commissioning a new national anthem, Auferstanden aus Ruinen (‘Risen from the Ruins’). Without a leadership that was committed to confronting the nation’s crimes, the population of the Federal Republic of West Germany was even less willing to take up the mantle. With its cities still in ruins, its citizens — still reeling from the loss of sons and husbands on the front — were inclined to think of themselves as the war’s biggest victims. Not enough that the devastation of the war was evident on every street corner; on top of that, the occupying armies insisted that it was the Germans’ own fault! A few young intellectuals and artists agreed with the Allied perspective, and produced important works such as the film The Murderers Are Among Us (1946) and books by authors from the literary association Gruppe 47, whose members included the later Nobel laureates Heinrich Böll and Günter Grass. But neither the majority of German citizens nor their American overseers desired critical engagement with the Nazi period, nor sought to address the fact that the schools, courtrooms and police stations of the Western zone were still largely staffed by former Nazis. For the Cold War began before the Second World War ended, and the US president Harry S Truman’s administration was far more interested in undermining the Soviet Union than in rooting out former Nazis. The 1950s and early ’60s offered little change. With all energies focused on rebuilding the economy, and most traditional authoritarian structures left intact, the generation that had fought the war refused to talk about the past. Accounts differ about when the silence began to break. Was it the series of radio programmes on anti-Semitism produced by the philosopher Margherita von Brentano? Or Rolf Hochhuth’s 1963 play Der Stellvertreter (‘The Deputy’) about the Pope’s complicity in the Holocaust? Or was it the Eichmann trial of 1961 together with the Auschwitz trial of 1963, which drew public attention and left major writing in their wakes? 
What’s undisputed is that, by 1968, young Germans including the future foreign minister Joschka Fischer were throwing rocks at police whom they considered to be not only the agents of present evils, but standing in a direct line with those responsible for evils past. By 1968, few of those most directly responsible for Nazi crimes were running the show. But anyone who was staffing positions in the army, the police force, the intelligence service and the foreign ministry, among others, had been, at the very least, trained by former Nazi officials. It’s still sometimes thought that the Nazis appealed to illiterate mobs, a view unfortunately suggested by Bernhard Schlink’s dreadful book Der Vorleser (1995) and the subsequent movie The Reader (2008). In fact, the highest proportion of Nazi party members came from the educated classes. Without the sort of denazification that neither the Federal Republic nor its occupier were willing to undertake during the Cold War, there was no one initially available to staff leading institutions but old Nazis. An old joke illustrates the problem. A former émigré arrives at Frankfurt airport and asks the first stranger he meets if he had been a Nazi. ‘Not me,’ says the stranger. The émigré asks the next man. ‘Heaven forbid!’ he replies. ‘I was always inwardly opposed to them.’ Finally, the émigré meets a man who admits to having been a Nazi. ‘Thank heavens!’ says the émigré. ‘An honest man. Would you mind watching my bags while I go to the toilet?’ For the next generation it was clear that Germany’s institutions needed to be overhauled from top to bottom. The American television miniseries Holocaust (1978), though schlocky and little-noticed in the US, caused waves in Germany by exploring the ordinary human lives that lay behind the cold number of 6 million. The 50th anniversary of Hitler’s takeover was marked in 1983 in Berlin by a year’s worth of exhibits on topics as various as ‘Women in the Third Reich’, ‘Gays and Fascism’ and ‘The Architecture of Destroyed Synagogues’. Neighbourhoods vied with each other to explore their own local history. In Berlin, a play entitled It Wasn’t Me, Hitler Did It opened in 1977 and ran for 35 years. When in 1986, the right-leaning historian Ernst Nolte suggested that Hitler had learnt most of his lessons from Stalin, he was accused by the philosopher Jürgen Habermas of trying to excuse German crimes. The ensuing ‘Historians’ Debate’ raged for three years — not in academic journals but in newspaper, television and radio discussions. The prominence of the Holocaust in American culture serves a crucial function: we know what evil is, and we know the Germans did it The mid-1990s brought a fresh shock when a Hamburg research institute decided to commemorate the 50th anniversary of the war’s end with an exhibit proving that not only the SS but many ordinary Wehrmacht soldiers participated in the perpetration of war crimes. To the rest of the world this was hardly news, but the exhibit prompted unexpected protest, and was even firebombed by those who claimed it dishonoured the memory of their fallen comrades or fathers; eventually a special session of parliament was convened to discuss it. Nor has the need to rake through the Nazi period shown many signs of diminishing as the years go by. Just this spring, German viewers were offered an excellent television miniseries Unsere Mütter, unsere Väter (‘Our Mothers, Our Fathers’), depicting the ways in which four well-intentioned young people became slowly implicated in Nazi crimes. 
In 1999 the German parliament voted, after years of public debate, to build the official Holocaust memorial in the most prominent piece of empty space in Berlin. I prefer a more unsettling monument to the past — the thousands of Stolperstein or ‘Stumbling Stones’ that the German artist Gunter Demnig has hammered into sidewalks in front of buildings where Jews lived before the war, listing their names, and birth and deportation dates. As some opponents predicted, the uses to which the Holocaust Monument has been put are anything but appropriate. But given that the centre of Berlin has been rebuilt with bombast, a bombastic Holocaust memorial, sticking out like a stylised sore thumb amid the triumphalist architecture of the Brandenburg Gate and its surrounding embassies and institutions seems just about right. By comparison: can you imagine a monument to the genocide of Native Americans or the Middle Passage at the heart of the Washington Mall? Suppose you could walk down the street and step on a reminder that this building was constructed with slave labour, or that the site was the home of a Native American tribe before it was ethnically cleansed? What we have, instead, are national museums of Native American and African American culture, the latter scheduled to open in 2015. The Smithsonian’s National Museum of the American Indian boasts exhibits showing superbly crafted Pueblo dolls, the influence of the horse in Native American culture, and Native American athletes who made it to the Olympics. The website of the Smithsonian’s anticipated National Museum of African American History and Culture does show a shackle that was presumably used on a slave ship, but it is far more interested in collecting hats worn by Pullman porters or pews from the African Methodist Episcopal church. A fashion collection is in the making, as well as a collection of artefacts belonging to the African American abolitionist Harriet Tubman; 39 objects, including her lace shawl and her prayer book, are already available. Don’t get me wrong: it is deeply important to learn about, and validate, cultures that have been persecuted and oppressed. Without such learning, we are in danger of viewing members of such cultures as permanent victims — objects instead of subjects of history. The Jewish Museum Berlin is explicit about not reducing German Jewish history to the Holocaust. One section of the museum is devoted to it, but the rest of the permanent collection features things such as a portrait of the philosopher Moses Mendelssohn, filmed interviews with Hannah Arendt, a Jewish Christmas tree, and a giant moveable head of garlic. (Don’t ask.) The exhibit is awful, but presumably useful for those visitors whose only association with the word ‘Jew’ is a mass of gaunt prisoners in striped uniforms. In the same way, some Americans, no doubt, still need to see more than savage Hollywood Indians or caricatured Stepin Fetchit black people in order to get a more accurate picture of the cultures many of our ancestors tried to destroy. But more importantly, America’s museums of Native American and African American history embody a quintessentially American quality: we have always been inclined to look to the future instead of the past, and our museums follow suit. It’s impossible to compare what’s on display in our national showcase with what you can find in Germany without feeling that America’s national history retains its whitewash — and that a sane and sound future requires a more direct confrontation with our past. 
We do have one place on the National Mall that focuses on unremitting negativity: the United States Holocaust Memorial Museum. I am not the first person to ask why an event that took place in Europe should assume such a prominent place in our national symbolism — particularly when our government did little to save Jews before and during the Holocaust, yet ensured that former Nazi scientists could be smuggled into the US afterwards. The idea that it is evil to round up people and send them to gas chambers is about as close to a universal moral consensus as we get. And having a symbol of absolute evil unconsciously gives us a sort of gold standard against which most other evil actions measure only as common coin. Nazis function conveniently as place-holders of a paradigm of evil, useful to discredit opponents as varied as Saddam Hussein, Karl Rove, and Barack Obama. (It is truly terrifying to see how many pictures of Obama with a tiny moustache exist on the web.) The prominence of the Holocaust in American culture serves a crucial function: we know what evil is, and we know the Germans did it. There is, of course, a large and growing body of work done by historians, cultural critics, and others that examines more specifically American forms of evil. Few of them, however, receive the same widespread public attention or sales figures as the latest book, film or memoir about yet another aspect of the Holocaust, which lets us have our cake and eat it, too. We can spend our time pondering serious matters, give appropriate expression to our horror, and lean back in the confidence that it all happened over there, in another country. We no longer believe in bad seed or bad blood. Still, the idea that we are tainted by the sins of our fathers has a long and profound history. According to traditional Christianity, nothing we can do is enough to expiate them: we are all doomed to die for the fall of Adam and Eve, and salvation can come only after death. According to the Old Testament, we must serve some time for the sins of our fathers, unto the third and fourth generation. These traditions run deep even for those who might have rejected them, for they have a reasonable core. We all of us benefit from inheritances we did not choose and cannot change. Growing up involves deciding which part of the inheritance you want to claim as your own, and how much you have to pay for the rest of it. This is as true for nations as it is for individuals. A Vergangenheitsaufarbeitung must force emotional confrontation with the crimes it concerns, not just a rational assessment of them. This confrontation was notably missing in the first decades of the Federal Republic of West Germany, which used reparations payments to the State of Israel as a substitute for facing up to what it meant to have caused the murder of millions. The country thus assumed forms of legal responsibility without really assuming moral responsibility until the slow, fitful turmoil of the 1960s. Mutatis mutandis, something similar has happened in America. Affirmative action measures are a way of taking collective responsibility for slavery and the blackface minstrel Jim Crow, but few white Americans have been forced to face just how awful slavery was. (And few of us know just how long it continued, in one form or another. I discovered by accident, when reading a biography of Albert Einstein, that he supported a group of clergymen who visited Truman’s White House in 1946 to push Truman to make lynching a federal offence. Truman refused.) 
Some degree of traumatisation must take place. Facts are insufficient, and numbers often make them worse This is why the violent scenes in Django Unchained were absolutely necessary. As both Tarantino and his black stars have said, real slavery was a thousand times worse than what they showed in the film. Tarantino edited out parts of the two most brutal scenes, in which men are torn apart by fellow slaves or packs of dogs, because early audiences found them too traumatic. Still, what he left in was brutal enough; as he said in an interview with the African American historian Henry Louis Gates Jr for The Root magazine: ‘People in general have so put slavery at an arm’s distance that just the information is enough for them — it’s just intellectual. They want to keep it intellectual. These are the facts, and that’s it. And I don’t even stare at the facts that much.’ To borrow a distinction from the philosopher Stanley Cavell: if we are to acknowledge, and not merely know, the extent of our nation’s crimes, some degree of traumatisation must take place. Facts are insufficient, and numbers often make them worse. As Gates notes, many of his students have become inured to the horror of slavery. Scenes such as Tarantino’s, which stay in our imaginations longer than any argument or historical description, offer a taste of immediacy with which we must linger before we go on. To appreciate Tarantino’s intent, you have to take Inglourious Basterds seriously, which I initially did not. On first viewing, it’s easy to agree with The New Yorker’s description of the film as a baseball bat ‘applied to the head of anyone who has ever taken the Nazis, the war, or the Resistance seriously … too silly to be enjoyed, even as a joke’. However, one thing Tarantino does know well is the history of film. In one interview, he spoke of being influenced by Hollywood films of the 1940s, ‘when the Nazis weren’t a theoretical, evil boogieman from the past but were actually a threat’. Film directors of that period — often European refugees such as Fritz Lang or Billy Wilder, as Tarantino pointed out — had no compunction about making war movies that were also exciting, entertaining, and even funny. Tarantino did the same. And if his fantasy that film could rewrite history might seem a tad extravagant (‘narcissistic’ is a cruder word for it), it’s a fantasy to which many Germans thrilled. A friend who has written a series of deep, complex and nuanced books on the subject of his nation’s criminal past told me he cheered like a child when Tarantino’s Nazis burst into flames. For all his erudition, the film had tapped into buried emotions that had moved and motivated him for decades. In a recent interview for Die Zeit, Tarantino said he was always being asked what Germans thought of Inglourious Basterds. ‘If anyone in the world dreams of killing Adolf Hitler,’ he answered, ‘aside from the Jews, it’s the last three generations of Germans.’ American history, German imagination: Tarantino got both of them right. Tarantino is not the first American director to follow a major film about the Nazis with a film about American slavery. Steven Spielberg did the same when he followed Schindler’s List (1993) with Amistad (1997). Both are means for sending a message that Nazism should not be used to end discussions about evil but to begin them, and that American crimes deserve as hard a look as any other. 
In Django Unchained, Tarantino took it one step further than Spielberg in Amistad, by making the only decent white person in the film a German. The good guy could have been any old European, but Tarantino underlines his character’s German identity with constant references to it. And he rubs our noses in our own prejudices by using Christoph Waltz, the actor he cast to play the most memorable SS officer in film history, to be the only white person in Django who is viscerally revolted by American slavery. The German presence in Django reveals the influence of German Vergangenheitsaufarbeitung on Tarantino’s films. While filming Inglourious Basterds, he lived in Berlin for half a year, which is long enough to get a sense of how Germans keep their awful past firmly in the present consciousness. His interview with Gates in The Root reveals just how conscious the influence was: ‘I think America is one of the only countries that has not been forced, sometimes by the rest of the world, to look their own past sins completely in the face. And it’s only by looking them in the face that you can possibly work past them.’ Nor does he shy away from the most direct comparisons. If there were a Nuremberg trial, he says in the same interview, then D W Griffith, the director of The Birth of a Nation (1915) — the silent film that inspired the rebirth of the Ku Klux Klan — would be judged guilty of war crimes. And The Clansman (1905) — the book by Thomas Dixon on which that film was based — can for Tarantino ‘only stand next to Mein Kampf when it comes to its ugly imagery… it is evil. And I don’t use that word lightly.’ Some critics have questioned the appropriateness of a white director making a film about slavery, but that’s precisely the point of Vergangenheitsaufarbeitung. Tarantino has claimed that his great-grandfather was a Confederate general, which suggests that in making this film he was following in the footsteps of two generations of Germans and confronting ancestral evil. The German reviews of Django whose headlines asked ‘Dare we compare American slavery to the Holocaust?’ generally answered ‘No.’ In an inimitable blend of pedantry and cynicism, they explained the differences between slaveholding, which had an economic purpose, and the Holocaust, which had none. They then concluded that Tarantino had used the word provocatively to promote his film. As several commentators pointed out, the deliberately inflammatory use of the word ‘Holocaust’ is music to the ears of right-wing groups and should therefore be avoided at all costs. These reviews might show the wisdom of Tzvetan Todorov’s remark that Germans should talk about the particularity of the Holocaust, the Jews about its universality (applying Kant’s idea that if everybody worried about their own virtue and their neighbour’s happiness instead of the opposite, we would come close to a moral world). But I am a little surprised that the American discussion of the film has focused more on counting the number of times the word ‘nigger’ is used than on the questions Django Unchained was meant to raise. Were Americans guilty of crimes that were as evil as those of the Nazis — and if so, what should we do about it today? In a long attack on Django Unchained, the political scientist Adolph Reed argues that it represents ‘the generic story of individual triumph over adversity… neoliberalism’s version of an ideal of social justice’. 
While I applaud Reed’s attempt to call our attention to the pervasiveness of neoliberal ideology, I’m appalled by the idea that attending to individual stories is an invalid historical approach. The insistence that every human being has his or her own story is a statement about human freedom that is lost when we assume that ‘real’ history is only a matter of political economy and social relations. After we’ve confronted the depths to which our history sank, we can — and we must — idealise those who moved it forwards. Tarantino’s heroes are as delightful as they are unbelievable; interestingly enough, his strength lies in depicting villains. Inglourious Basterds features two Nazis who are appealing, and very differently so. This is as it should be, if we are ever to understand how all kinds of ordinary, and even appealing, people commit murder, whether in Majdanek or in Mississippi. But it is equally crucial that we get our heroes right, too. Heroes close the gap between the ought and the is. They show us that it is not only possible to use our freedom to stand against injustice, but that some people have actually done so. Yet, without some cultural experience of the violence that was a part of building this country, we risk the sort of liberal triumphalist narrative we would deplore if used elsewhere. There is much to be said for the American tendency to accentuate the positive. Rather than looking at the history of Jim Crow, we turn Martin Luther King’s birthday into a national holiday and put his statue on the Mall. Yet we would be disturbed by a German lesson plan that mentioned the Holocaust as a terrible thing, and then went on too quickly to describe those heroes — Willy Brandt, Sophie Scholl, Claus von Stauffenberg — who opposed it. With far too few exceptions, America’s telling of its history of freedom-fighting — from Seneca Falls to Selma to Stonewall — does just that. It works for inaugural speeches, so long as you emphasise, as President Obama did, that we as a nation are on a journey in which there’s still a long way to go. Meanwhile, Tarantino’s approach is an antidote to triumphalism that’s all the more effective for being a roaring good film. | Susan Neiman | https://aeon.co//essays/dare-we-compare-american-slavery-to-the-holocaust | |
Demography and migration | Romany and Traveller songs inspired a century of popular music. Now Sam Lee is searching for the last of them | The first time the London-born folk singer Sam Lee met Freda Black, he thought she was dying. It was December 2011 and he had finally secured an introduction, three years after hearing about her through a chance encounter with some of her relatives. Lee was keen to hear the traditional songs that had been passed down through generations of her Romany family, but Black wasn’t eager to sing. ‘She was lying on a sofa under a blanket,’ Lee says. ‘She was very reluctant, and then slowly she started to open out. Since I’ve known her, she’s blossomed and become stronger and stronger. Every time I come, the songs get bigger and bigger.’ One humid June afternoon, Lee fills his green Renault Kangoo with curious friends — the singer-songwriter Lisa Knapp, a music teacher named Josh Geffin, and Anna Ling, a choirleader — for another visit to the 86-year-old singer’s home. We’re headed for the small Hampshire village of Headley, two hours south-west of London, and it’s clear that there is nothing Lee would rather be doing. ‘I can’t wait to be sitting on her carpet with a cup of tea in my hand, eyes half-closed, listening to her sing,’ he says with a boyish grin. Lee is a tall, dynamic, inexhaustibly enthusiastic 33-year-old who has swept through Britain’s folk scene like a summer storm. Last year saw the release of his debut album, Ground of Its Own, which set ancient songs to innovative new arrangements. It made the shortlist of the prestigious Mercury Music Prize, and Lee has received blessings from folk elders such as Shirley Collins and the producer Joe Boyd. En route to Headley, we take a surreal detour to the country home of Pink Floyd’s Roger Waters, where Lee has to collect a bag that he left after performing at the wedding of Waters’s daughter, India. Singing folk songs takes up an increasing amount of his time, but Lee is equally committed to the much less glamorous and lucrative work of collecting them. Whenever he can, he drives into the countryside to visit Traveller encampments and ask to hear any old songs they might know. ‘That’s my community, my religion,’ he insists. The transport and recording technology might have improved but the process is fundamentally the same as it was when the folklorist Cecil Sharp cycled around Somerset with his notebooks in the early 1900s, or when father-and-son team of John and Alan Lomax traversed the American South with their primitive tape recorder in the 1930s, gathering the corpus of folk and blues without which rock music as we know it would not exist. You can hear their legacy, both directly and indirectly, in the work of Bob Dylan and Joan Baez, Led Zeppelin and the Rolling Stones, the White Stripes and Moby. These men were the bottlenecks through which the prehistory of popular music flowed. Song-collecting still takes place in other corners of the globe but in Britain it has dwindled almost to nothing, as traditional singing has waned. ‘There really isn’t anyone of my generation doing it,’ says Lee as we wind through the damp green countryside. ‘There are a few PhD students studying folk music but there’s nobody else doing the great romantic down-the-byways knocking-on-doors thing. I’d like to spend the rest of my life doing this if I could. Fuck fame and fortune. This is far more important.’ I first met Lee in January at a conference organised by his new Song Collectors Collective. 
We were in a freezing cold church hall near his home in Dalston, east London. The day’s penultimate session was Lee’s interview with Freda, a small but imposing figure with full brown hair, hoop earrings and a matriarchal air. She talked about how she was born in Somerset on Christmas Day 1927 (‘the same hour as our Lord’); about long days picking hops and strawberries in the summer, and selling lace and brooches in the winter; about hearing old songs from her father-in-law around the bonfire at night. ‘It was a beautiful life,’ she said, ‘a lovely life. You could go anywhere.’ When Lee queried some of her more fanciful tales, drawn from Romany lore, she replied: ‘It’s all true, my love, all of it. The truth shames the devil.’ Unaccustomed to public performance, she sang in a soft, tremulous voice, eyes closed, touching her brow as if to coax out each verse from her memory. Lee looked rapt throughout: protective and solicitous but never condescending. He asked her how many songs she knew. ‘I ain’t counted them,’ she replied with a shrug. ‘You ’ave!’ (He has indeed: so far there are 90.) He was dancing in a burlesque show in London’s West End when an album by the Watersons, the first family of British folk, caught his ear and made him hungry to learn the songs’ origins As to how a middle-class Jewish boy from north London became the foremost song collector of his generation, Lee has some ideas. He remembers being enthralled by ritual communal singing in the synagogue as a child, and being sent to visit Holocaust survivors and hear their testimonies. ‘Stories, always stories,’ he says, sitting on the breezy balcony of his flat three weeks after our trip to Headley. ‘I felt that I was from a community that really revered the elders and the history. Every time one of them died, I wish I’d listened harder.’ Lee didn’t enjoy school but he took solace in the Forest School Camps he attended during summer holidays, where children learnt outdoor skills and sang folk songs around the campfire. He was prone to musical obsessions — first Michael Jackson, then Joni Mitchell — but his love of folk took root slowly. After school, he studied art, dance and anthropology, and taught wilderness living. In 2004, he was dancing in a burlesque show in London’s West End when an album by the Watersons, the first family of British folk, caught his ear and made him hungry to learn the songs’ origins. ‘I was trying to go back, back, back,’ he says. ‘Then I found the field recordings and didn’t listen to anything else.’ While continuing to dance at night, he spent his days volunteering at Cecil Sharp House, the north London headquarters of the English Folk Dance and Song Society (EFDSS). Cecil Sharp was not Britain’s first song collector but he was the most tireless, messianic and influential. On Boxing Day 1899, this sickly, 40-year-old composer and music teacher witnessed his first traditional Morris dance and was immediately fascinated by the ritual, the music and the island history buried within. Throughout the Victorian era, printed broadsides and the rise of music hall (a variety format similar to American vaudeville) had overshadowed the oral folk tradition. And yet, like some thorny, tenacious weed, it thrived through neglect. You just had to go out and look for it. The desk-bound academic Francis James Child, had recently compiled the 10-volume folk song canon known as the Child Ballads (1882-98) without hearing the songs performed. Sharp, however, wanted to get his boots dirty. 
Between 1903 and 1907, he cycled around Somerset’s informally educated rural communities, finding, transcribing and preserving traditional songs. He recorded his discoveries in the pioneering book English Folk Song: Some Conclusions (1907), performed equally important work overseas in Appalachia, and founded the English Folk Dance Society (which became the EFDSS). When he died in 1924, he had collected almost 5,000 songs. By Edwardian standards, the vegetarian, socialist Sharp was progressive. By the 1970s, left-wing scholars such as Dave Harker were accusing him of making the songs serve ‘nationalistic sentiments and bourgeois values’ instead of the people themselves. It’s true that in his eagerness to revive and promote ‘the song created by the common people’, Sharp travestied their oral roots with self-penned piano accompaniments, sanitised the more risqué lyrics to suit Edwardian mores, and discounted songs that were too hybridised for his purposes. Most controversially, he ignored African-American music in Appalachia. But, as his defenders insist, without Sharp’s missionary zeal, these songs might never have been collected at all. Similar arguments have raged around song-collecting ever since: invaluable work depends on imperfect individuals. The songs don’t exist inside you, they’re around you, and when you sing you breathe the song in from the ghosts that surround you While Sharp was scouring Somerset, the slightly younger Harvard scholar John Lomax was pursuing cowboy ballads in Texas. In 1933, after a long break from collecting, he secured a book deal and returned to the road with his teenage son Alan. Unlike Sharp, the Lomaxes had the technology to make field recordings. ‘For the first time, America could hear itself,’ said Alan. One of their discoveries even became a minor celebrity: a convicted murderer and one-man treasure chest of song named Lead Belly, who gave the canon such standards as ‘Black Betty’, ‘The Midnight Special’ and ‘Where Did You Sleep Last Night?’ John Lomax’s racial preconceptions led him to celebrate the ‘primitive’ emotions in African-American songs and overlook the rest, including, for the most part, protest songs. But his son Alan, much more progressive and idealistic, grew into the greatest collector of all time. He championed folk song from around the world as a promoter, producer, scholar, author, lecturer, oral historian, radio DJ, documentary-maker and political campaigner: John Szwed subtitled his biography of Alan Lomax The Man Who Recorded the World (2010). During Alan’s time in Britain in the 1950s, the satirical magazine Punch published a cartoon in which a singer complained: ‘I’ve got those Alan-Lomax-ain’t-been-round-to-record-me blues.’ In 1964, when the folk revival he enabled was in full swing, Alan wrote in a festival programme: ‘[Folk] is perhaps our most important, serious and original contribution to world musical culture. These performers are its only carriers and they deserve to be listened to with respect and love and delight.’ In 2005, having done his homework, Lee sought out mentors, starting with an octogenarian song collector even more divisive than Sharp. Peter Kennedy had been a crucial figure in Britain’s own folk revival, working closely with Alan Lomax, but he was also a notorious copyright grabber. ‘I think he upset every person in his entire life because he was a very difficult and odd character,’ says Lee. 
‘Yet I kind of identified with him as a driven, single-minded renegade: if this needs to happen, I need to do whatever I can. It was while hearing his stories that I thought, OK, this is the journey I want to take.’ A year later, Lee encountered his first authentic folk singer in suitably dramatic circumstances. After seeing the celebrated Scottish balladeer Stanley Robertson perform at a festival in the northern coastal town of Whitby, he followed him out of the venue, down to the harbour, and up 199 steps to the top of a windswept cliff. There he found the formidable 66-year-old clutching his weak heart beneath a floodlit whalebone arch. Lee introduced himself. ‘I know a thousand ballads!’ Robertson roared. Lee, awestruck, said that he wanted to learn them all. A few months after this meeting, Robertson suffered a serious heart attack, coming very close to death. ‘I think he travelled back to his elders and retrieved the songs he heard as a child,’ Lee says. ‘That was when he said, OK Samuel, this is how we’re going to do it.’ Robertson took Lee to a fabled highway outside Aberdeen where he ceremonially inducted him into the song-carrying tradition, handing him a ring, a drinking vessel and a talismanic pebble, the last of which Lee still carries with him. ‘Stanley was on another level of spiritual and cultural enlightenment,’ Lee marvels. ‘He spoke Romany and Traveller cant [dialect]. He had psychic abilities that were unparalleled. He knew about astral travelling.’ I sound a note of atheist scepticism and Lee smiles. ‘I was exactly like you,’ he says. ‘For Jews, death is death: there’s no afterlife. But the psychic behaviour was irrefutable. Everything that’s happened to me, he predicted.’ After Robertson’s death in 2009, Lee heard how his spirit continued to visit surviving relatives, who would tell him: ‘Fuck off, Stanley, you’re deid! I cannae sleep!’ For Robertson, singing itself was a kind of psychic communion with the dead. ‘Stanley called it the maizie,’ Lee says. ‘This quality of bringing in the ancestors. The songs don’t exist inside you, they’re around you, and when you sing you breathe the song in from the ghosts that surround you.’ Robertson was Lee’s introduction to Traveller culture. He felt instantly at home. ‘They’re kind of the American Indians in Britain: indigenous people with their own language, customs, music, spirituality, relationship to the land. I immediately had this affinity. I love the danger, the risk, the tempestuousness, and the way they wear their hearts on their sleeves.’ It’s not about taking it and selling it on to a London folk crowd, but the restoration and conservation of a community Indigenous Travellers are ethnically distinct from the Romany, whose ancestors hailed from India. Each group has its own languages and religious beliefs, but they are often jointly referred to as Travellers or Gipsies. In modern Britain, regardless of their roots, all nomadic communities live a life under siege. Since the 1980s, consecutive governments have regarded them as an archaic nuisance to be corralled and neutered. Local councils have withdrawn legal stopping places for Travellers, while rarely granting planning permission for them to build their own homes. The resulting squeeze traps Travellers on reservations or forces them into council houses, estranged from their roots and plagued by ill health. ‘In every possible way, the government has crushed the culture,’ Lee says angrily. 
‘The Gipsies fought in both wars, they worked the land, and then, when they weren’t needed, they were basically told to fuck off.’ On some of his song-collecting visits, Lee has been told to fuck off too, by suspicious young male Travellers. More often, people are surprised that anybody cares. Traditional singing has faded from the culture. On occasion, he finds himself ‘repatriating’ songs. He has visited Traveller homes where children hear their grandmother sing for the very first time. He has met non-singing families and sung their own heritage back to them. He says he records songs as much for these communities as for musicians, scholars or folk fans. ‘The actual record itself is sometimes insignificant but the act of recording for that person is so important,’ he says. ‘It’s about listening. Someone’s paying attention to something that’s important in their culture but has disappeared.’ Song collectors have often been accused of fetishising the past and valuing living singers mostly as conduits to the dead. But modernity thunders on regardless, and collecting serves as an important act of remembrance. In marginal communities, preserving this heritage even qualifies as a political act: the past might not have been better but it was there, and should not be forgotten. Folk songs don’t have titles, so each English-language example is assigned a number in the Roud Folk Song Index. Begun in 1993 by the former librarian Steve Roud, the index now contains around 200,000 references to nearly 25,000 distinct songs. The first is a Child Ballad called ‘The Gypsy Laddie’, which has been sung in countless ways by countless singers, including Woody Guthrie, Bob Dylan and the White Stripes. Taxonomically, historically and spiritually, it all starts with the Travellers. Collectors are partly driven by a sense that time is running out. Cecil Sharp complained that ‘the 20th-century collector is a hundred years too late’, and another hundred have passed since then. Lee is haunted by the thought of songs that have been lost forever. Well aware that earlier collectors often rewrote or ignored songs they didn’t feel were worthy of preservation, he records everything, whether he likes it or not. Who knows what future generations will value? He thought one of Freda Black’s bawdier songs was a ‘naff, twee little thing’ until fellow collectors told him it was his most important discovery because it had never been recorded before. ‘You get these rogue species — like dinosaurs who existed in just one valley for thousands of years,’ says Lee with wonder. It’s hard to find ‘new’ old songs but Lee maintains that the process itself — the travelling, the talking, the singing — matters more. Unlike some of his predecessors, he is sensitive to the ethical complexities of collecting, and is conscientious about crediting and celebrating his sources. ‘It’s not about taking it and selling it on to a London folk crowd, but the restoration and conservation of a community,’ he says. ‘The music is owned by them. They’re the custodians and forever should be.’ As a musician, however, Lee is increasingly interested in putting his personal experience into his own versions. Cecil Sharp maintained that there was no such thing as a ‘pure’ English folk song: they cross centuries and continents and each rendering has its own authenticity. Why shouldn’t Lee continue that process? ‘I have to say, I don’t like much folk music,’ he announces. 
‘I get fed up with guitar and fiddle work: this bland replication of what folk music should sound like. Folk music is raw material. Lyrics and melody, that’s all you’ve got, and even those are up for negotiation. After that, you can do what the hell you like.’ He knows that the more artistic liberties he takes, the more feathers he will ruffle. ‘A lot of people probably have a really bad impression of me as this opinionated upstart who came in with an attitude and didn’t pay respect,’ he says cheerfully. ‘And they’re probably right.’ When we arrive at Freda Black’s small, tidy house on a boxy estate in Headley, she is sitting by the rain-lashed window in a purple, floral-patterned dress, watching an afternoon quiz show. She has been living here since 1978, when her husband had a heart attack at the wheel of his car while she sat next to him, leaving her with just £8 in the world. She still misses the road. ‘I love her,’ says Lee as he parks the car. ‘I’d love her whether she sang or not. In some ways, I don’t ever need to see her again. I’ve got everything I need. But it’s not about that. I love learning about her relationship with the songs and how it changes.’ This time, he comes bearing gifts. ‘I have two CDs of this very curious singer,’ he says. ‘You may have heard of her. She sings lots of old songs. She’s called Freda Black.’ Black laughs delightedly. ‘The name rings a bell.’ At first, she is coy around her new visitors, insisting that ‘I’m not so good a singer now.’ Once she starts singing, however, she can’t stop. For three hours, out pour brutal, beautiful tales of war, betrayal, suicide, murder and doomed love. She sings songs as well-known as the Child ballad ‘Barbara Allen’, the most collected song in the English language, and as eerie and precise as ‘The Hindhead Murder’, about the killing of a sailor in 1786. Lee and his fellow musicians tease out anecdotes, call out requests, and compare different versions. Black seems surprised and a little annoyed that some of the songs she thinks of as hers are actually widely known. When Lee plays an alternative version on his laptop, she frowns, as if the other singer has got it wrong. ‘My people made these songs themselves based on true facts,’ she says proudly. ‘If someone died in the family, they’d make a song about it.’ She begins singing ‘The Bonny Bunch of Roses’, a broadside about the fall of Napoleon, starting midway through. Lee asks her to start from the beginning. ‘I shan’t sing at all if you keep butting in,’ she scolds. But she obliges, closing her eyes and rocking back and forth as she breathes life into a story 200 years old. Lee leans forward on his knees, silently mouthing the words. The only other sounds are the rain on the window and the ticking of a carriage clock. Lee asks her if she’d still be singing these songs if he hadn’t turned up at her door 18 months ago. ‘I never took no notice of them,’ she says. ‘It’s nice to hear them again.’ And she begins to sing once more, summoning ghosts. | Dorian Lynskey | https://aeon.co//essays/for-britain-s-travellers-songs-can-still-summon-ghosts | |
Cognition and intelligence | Our instincts for privacy evolved in tribal societies where walls didn’t exist. No wonder we are hopeless oversharers | In October 2012 a woman from Massachusetts called Lindsey Stone went on a work trip to Washington DC, and paid a visit to Arlington National Cemetery, where American war heroes are buried. Crouching next to a sign that said ‘Silence and Respect’, she raised a middle finger and pretended to shout while a colleague took her photo. It was the kind of puerile clowning that most of us (well me, anyway) have indulged in at some point, and once upon a time, the resulting image would have been noticed only by the few friends or family to whom the owner of the camera showed it. However, this being the era of sharing, Stone posted the photo to her Facebook profile. Within weeks, a ‘Fire Lindsey Stone’ page had materialised, populated by commentators frothing with outrage at a desecration of hallowed ground. Anger rained down on Stone’s employer, a non-profit that helps adults with special needs. Her employers decided, reluctantly, that Stone and her colleague would have to leave. More recently, Edward Snowden’s revelations about the panoptic scope of government surveillance have raised the hoary spectre of ‘Big Brother’. But what Prism’s fancy PowerPoint decks and self-aggrandising logo suggest to me is not so much an implacable, omniscient overseer as a bunch of suits in shabby cubicles trying to persuade each other they’re still relevant. After all, there’s little need for state surveillance when we’re doing such a good job of spying on ourselves. Big Brother isn’t watching us; he’s taking selfies and posting them on Instagram like everyone else. And he probably hasn’t given a second thought to what might happen to that picture of him posing with a joint. Walls are a relatively recent innovation. Members of pre-modern societies happily coexisted while carrying out almost all of their lives in public view Stone’s story is hardly unique. Earlier this year, an Aeroflot air hostess was fired from her job after a picture she had taken of herself giving the finger to a cabin full of passengers circulated on Twitter. She had originally posted it to her profile on a Russian social networking site without, presumably, envisaging it becoming a global news story. Every day, embarrassments are endured, jobs lost and individuals endangered because of unforeseen consequences triggered by a tweet or a status update. Despite the many anxious articles about the latest change to Facebook’s privacy settings, we just don’t seem to be able to get our heads around the idea that when we post our private life, we publish it. At the beginning of this year, Facebook launched the drably named ‘Graph Search’, a search engine that allows you to crawl through the data in everyone else’s profiles. Days after it went live, a tech-savvy Londoner called Tom Scott started a blog in which he posted details of searches that he had performed using the new service. By putting together imaginative combinations of ‘likes’ and profile settings he managed to turn up ‘Married people who like prostitutes’, ‘Single women nearby who like to get drunk’, and ‘Islamic men who are interested in other men and live in Tehran’ (where homosexuality is illegal). Scott was careful to erase names from the screenshots he posted online: he didn’t want to land anyone in trouble with employers, or predatory sociopaths, or agents of repressive regimes, or all three at once. 
But his findings served as a reminder that many Facebook users are standing in their bedroom naked without realising there’s a crowd outside the window. Facebook says that as long as users are given the full range of privacy options, they can be relied on to figure them out. Privacy campaigners want Facebook and others to be clearer and more upfront with users about who can view their personal data. Both agree that users deserve to be given control over their choices. But what if the problem isn’t Facebook’s privacy settings, but our own? A few years ago George Loewenstein, professor of behavioural economics at Carnegie Mellon University in Pittsburgh, set out to investigate how people think about the consequences of their privacy choices on the internet. He soon concluded that they don’t. In one study, Loewenstein and his collaborators asked two groups of students to fill out an online survey about their lives. Everyone received the same questions, ranging from the innocuous to the embarrassing or potentially incriminating. One group was presented with an official-looking website that bore the imprimatur of their university, and were assured that their answers would remain anonymous. The other group filled out the questions on a garishly coloured website on which the question ‘How BAD Are U???’ was accompanied by a grinning devil. It featured no assurance of anonymity. Bizarrely, the ‘How BAD Are U???’ website was much more likely to elicit revealing confessions, like whether a student had copied someone else’s homework or tried cocaine. The first set of respondents reacted cautiously to the institutional feel of the first website and its obscurely concerning assurances about anonymity. The second group fell under the sway of the perennial youthful imperative to be cool, and opened up, in a way that could have got them into serious trouble in the real world. The students were using their instincts about privacy, and their instincts proved to be deeply wayward. ‘Thinking about online privacy doesn’t come naturally to us,’ Loewenstein told me when I spoke to him on the phone. ‘Nothing in our evolution or culture has equipped us to deal with it.’ When a boy hit puberty, he disappeared into the jungle, returning a man. In today’s digital culture this is precisely the stage at which we make our lives most exposed to the public gaze We might be particularly prone to disclosing private information to a well-designed digital interface, making an unconscious and often unwise association between ease-of-use and safety. For example, a now-defunct website called Grouphug.us solicited anonymous confessions. The original format of the site was a masterpiece of bad font design: it used light grey text on a dark grey background, making it very hard to read. Then, in 2008, the site had a revamp, and a new, easier-to-read black font against a white background was adopted. The cognitive scientists Adam Alter and Danny Oppenheimer gathered a random sample of 500 confessions from either side of the change. They found that the confessions submitted after the redesign were generally far more revealing than those submitted before: instead of minor peccadilloes, people admitted to major crimes. (Facebook employs some of the best web designers in the world.) This is not the only way our deeply embedded real-world instincts can backfire online. Take our rather noble instinct for reciprocity: returning a favour. If I reveal personal information to you, you’re more likely to reveal something to me. 
This works reasonably well when you can see my face and make a judgment about how likely I am to betray your confidence, but on Facebook it’s harder to tell if I’m trustworthy. Loewenstein found that people were much readier to answer probing questions if they were told that others had already answered them. This kind of rule-of-thumb — when in doubt, do what everyone else is doing — works pretty well when it comes to things such as what foods to avoid, but it’s not so reliable on the internet. As James Grimmelmann, director of the intellectual property programme at the University of Maryland, puts it in his article ‘Facebook and the Social Dynamics of Privacy’ (2008): ‘When our friends all jump off the Facebook privacy bridge, we do too.’ Giving people more control over their privacy choices won’t solve these deeper problems. Indeed, Loewenstein found evidence for a ‘control paradox’. Just as many people mistakenly think that driving is safer than flying because they feel they have more control over it, so giving people more privacy settings to fiddle with makes them worry less about what they actually divulge. Then again, perhaps none of this matters. Facebook’s founder Mark Zuckerberg is not the only tech person to suggest that privacy is an anachronistic social convention about which younger generations care little. And it’s certainly true that for most of human existence, most people have got by with very little private space, as I found when I spoke to John L Locke, professor of linguistics at Ohio University and the author of Eavesdropping: An Intimate History (2010). Locke told me that internal walls are a relatively recent innovation. There are many anthropological reports of pre-modern societies whose members happily coexisted while carrying out almost all of their lives in public view. You might argue, then, that the internet is simply taking us back to something like a state of nature. However, hunter-gatherer societies never had to worry about invisible strangers; not to mention nosy governments, rapacious corporations or HR bosses. And even in the most open cultures, there are usually rituals of withdrawal from the arena. ‘People have always sought refuge from the public gaze,’ Locke said, citing the work of Paul Fejos, a Hungarian-born anthropologist who, in the 1940s, studied the Yagua people of Northern Peru, who lived in houses of up to 50 people. There were no partitions, but inhabitants could achieve privacy any time they wanted by simply turning away. ‘No one in the house,’ wrote Fejos, ‘will look upon, or observe, one who is in private facing the wall, no matter how urgently he may wish to talk to him.’ From the 1960s onwards, Thomas Gregor, professor of anthropology at Vanderbilt University in Nashville, studied an indigenous Brazilian tribe called the Mehinaku, who lived in oval huts with no internal walls, each housing a family of 10 or 12. Mehinaku villagers were expected to remove themselves altogether from the life of the village at important stages of life, such as adolescence. When a boy hit puberty, he disappeared into the jungle, returning a man. In today’s digital culture, of course, this is precisely the stage at which we make our lives most exposed to the public gaze. 
Grimmelmann thinks the suggestion that we are voluntarily waving goodbye to privacy is nonsense: ‘The way we think about privacy might change, but the instinct for it runs deep.’ He points out that today’s teenagers retain as fierce a sense of their own private space as previous generations. But it’s much easier to shut the bedroom door than it is to prevent the spread of your texts or photos through an online network. The need for privacy remains, but the means to meet it — our privacy instincts — are no longer fit for purpose. Over time, we will probably get smarter about online sharing. But right now, we’re pretty stupid about it. Perhaps this is because, at some primal level, we don’t really believe in the internet. Humans evolved their instinct for privacy in a world where words and acts disappeared the moment they were spoken or made. Our brains are barely getting used to the idea that our thoughts or actions can be written down or photographed, let alone take on a free-floating, indestructible life of their own. Until we catch up, we’ll continue to overshare. A long-serving New York Times journalist who recently left his post was clearing his desk when he came across an internal memo from 1983 on computer policy. It said that while computers could be used to communicate, they should never be used for indiscreet or potentially embarrassing messages: ‘We have typewriters for that.’ Thirty years later, and the Kremlin’s security agency has concluded that The New York Times IT department was on to something: it recently put in an order for electric typewriters. An agency source told Russia’s Izvestiya newspaper that, following the WikiLeaks and Snowden scandals, and the bugging of the Russian prime minister Dmitry Medvedev at the G20 summit in London, ‘it has been decided to expand the practice of creating paper documents’. Its invention enabled us to capture and store our thoughts and memories but, today, the best thing about paper is that it can be shredded. | Ian Leslie | https://aeon.co//essays/facebooks-privacy-settings-arent-the-problem-ours-are | |
Human reproduction | Many promising male contraceptives are in development, but none have come to market. What’s taking so long? | ‘I am too young to die,’ said my patient. ‘My kids are small. They still need me.’ I explained that having a heart attack did not mean that she would die. We had caught it early. Within the next minutes, she would have an angiogram to remove the blood clot in her coronary artery, and we had already started her on a nitroglycerine drip and an infusion with a blood thinner. But I understood her fear. Women in their early 40s and in good health are not supposed to have heart attacks. She was not overweight, she never smoked, she had no history of high blood pressure or high cholesterol, and nobody in her family had ever had any heart problems. As we wheeled her away, she said that her chest pain was easing up. She was trying to be brave, but I could still see tears in her eyes. Twenty minutes later, the interventional cardiologist and I stared at the angiogram images. My eyes kept flicking back to the electrocardiogram that had prompted our diagnosis. It had indicated the presence of a clot blocking an artery of the heart, but the angiogram showed completely pristine arteries: no clot, no plaque. I gave my patient the good news and saw her smile and utter a prayer. Over the course of the next 24 hours, blood tests confirmed that she’d suffered a heart attack. Perhaps she had an exceedingly rare condition in which transient spasms of coronary arteries can cause a heart attack. Perhaps there had been a clot in her artery but it had dissolved before the angiogram, thanks to the intravenous blood thinner. The blood tests and ultrasound images showed that the damage to the heart was minimal, probably because the blood flow had normalised so quickly. I was happy for her and her family, but I was bothered by a nagging question. Why would a young, healthy woman with normal coronary arteries suffer a major heart attack? She was taking one regular prescription medication: an oral contraceptive. One of its rare but significant side effects is the increased rate of blood-clot formation. The risk varies from one pill to another, but can be twice, threefold or even higher for oral contraceptive users when compared with women who do not use them. The risks are still small: a recent study monitored 1.6 million women in Denmark (ages 15-49) over a 15-year duration, and found that only 3,311 women had a stroke related to a blood-clot formation, and 1,725 women had a heart attack. My patient was using an oral contraceptive for which multiple studies had confirmed an association with blood-clot formation. In clinical practice, it is often very difficult to prove cause and effect. Diagnoses are derived from the recognition of correlations or patterns, and frequently based on educated guesses instead of definitive scientific evidence. In this case, we advised our patient to stop using oral contraceptives. She recovered completely, and has had no recurrence of a blood clot or other major health problems. We will probably never know for sure whether contraceptive pills contributed to her heart attack, but her case serves as an important reminder that these risks are real. The arrival of the birth control pill in the 1960s was hailed as a social revolution that decoupled sexuality from reproduction. It empowered women by giving them true reproductive control, because it allowed for reliable and reversible contraception. 
Women could delay or prevent reproduction without having to abstain from sex, and they could discontinue usage if they wanted to have a child. Over the years, many additional female contraceptives have been developed so that women today can choose from pills, injections, patches or intrauterine devices — many of which are even more reliable than those of the 1960s. By contrast, the choices for male contraception are far more limited: it’s either sterilisation (a vasectomy) or condoms. Vasectomy has been used since the late 19th century, while the condom has an even longer lineage. In the 16th century, the Italian anatomist Gabriello Fallopio described a condom made out of a linen sheath, used to prevent the transmission of syphilis. By the 18th century, condoms were prized as male contraceptives, and were even mentioned by the Italian adventurer Giacomo Casanova, who described them as ‘English Overcoats’. Condoms can prevent the spread of sexually transmitted diseases, but, as a reproductive control strategy, they are not as reliable as their packaging suggests. Unintended pregnancies occur in up to 18 per cent of couples who rely on condoms for contraception. Vasectomies are very effective, with less than a one per cent ‘failure rate’, but they are extremely difficult to reverse. A change of mind means complex microsurgery with uncertain results. In a society that increasingly recognises that men and women should share responsibilities and opportunities equitably, the lack of adequate reproductive control methods for men is striking — and puzzling — especially since many newer methods for male contraception have been developed during the past decades, yet none has become available for general use. Newer approaches to male contraception can be divided into two groups: hormonal and non-hormonal. Hormonal male contraceptives act by reducing testosterone levels in the testicles, which drive the production of sperm cells. Most hormonal male contraceptives that have been studied in clinical trials involve the administration of testosterone, either as an injection, implant, oral pill or patch. It might seem counterintuitive to supply extra testosterone in order to suppress sperm-cell production, but this approach works by taking advantage of an internal brake in the male reproductive system. Testosterone production in the testicles is activated by hormones released from the pituitary gland of the brain — the follicle-stimulating hormone (FSH) and the luteinising hormone (LH). Some of this testosterone seeps out from the testicles into the bloodstream and signals to the brain: ‘Stop releasing FSH and LH, there is more than enough testosterone to go around!’ A testosterone-containing contraceptive mimics this function by increasing testosterone levels in the bloodstream, thus activating the shutdown signal. The brain responds by turning off the FSH and LH production; the testicles stop making their own testosterone. Testosterone levels in the testicles start to ebb, eventually dropping below the threshold required for adequate sperm-cell production. The levels of testosterone in the bloodstream supplied by the contraceptive are sufficient to maintain masculine features and male libido, but cannot compensate for the loss of testosterone in the testicles, and thus cannot restore sperm-cell production. 
Newer-generation male hormonal contraceptives combine testosterone with another class of synthetic hormones called progestins, also used in female contraceptives and extremely effective at activating the FSH/LH shutdown. So far, clinical trials have shown that it takes time — six weeks or longer — until sperm counts drop low enough to stop fertilisation. Short-term studies of the side effects of male contraceptives have not revealed anything major: acne, weight gain, increased libido. Most male contraceptive trials have been small, often recruiting only 10-100 men, and the measured ‘success’ was based on achieving undetectable or minimal sperm counts. Yet the ultimate test of efficacy is not a drop in sperm counts but the prevention of unintended pregnancies in couples who rely on these contraceptives as their primary method of reproductive control. Such efficacy trials require the recruitment of a large number of volunteers and their costly long-term monitoring. Large-scale studies are also needed to ascertain the long-term safety profile of male hormonal contraceptives. ‘None of the big companies will touch hormonal male contraception again.’ One of the largest male contraceptive efficacy trials ever conducted was sponsored by the World Health Organisation (WHO) and CONRAD, the US-based reproductive health research organisation. Called Phase II TU/NET-EN, this landmark multicentre study was designed to answer key questions about the long-term safety and efficacy of male hormonal contraception, and enrolled more than 200 couples between 2008 and 2010. The contraceptive used was a long-acting formulation of testosterone (testosterone undecanoate, or TU) combined with a long-acting progestin (norethisterone enanthate or NET-EN), administered via injections every two months. The trial included an initial treatment phase to suppress sperm production, and a subsequent ‘efficacy phase’ that required couples to rely exclusively on this form of birth control for one year. However, in April 2011, the trial was terminated prematurely when the advisory board noticed a higher than expected rate of depression, mood changes and increased sexual desire in the study volunteers. By the trial’s end, only 110 couples had completed the one-year efficacy phase; their efficacy results should be released in the near future. The trial did not include a placebo control group, so the investigators could not determine whether the observed side effects were due to the hormone combination or to the frequent injections themselves. Just as we do not perform ‘placebo’ or sham surgeries on patients, we cannot in good conscience enrol people in a placebo group of a contraception efficacy trial, because most of the couples in the placebo group would end up with an unintended pregnancy. The discontinuation of the WHO/CONRAD trial was a major setback in bringing male contraceptives to the market. It also raised difficult ethical questions about how to evaluate side effects in male contraceptive trials. Since all medications are bound to exhibit some side effects, what side effects should be sufficient to halt a trial? Female contraceptives have been associated with breakthrough bleeding, mood changes, increased risk of blood-clot formation, as well as other side effects. Why should we set a different bar for male contraceptives? The twist here is that female contraceptives prevent unintended pregnancies in the person actually taking the contraceptive. 
Since a pregnancy can cause some women significant health problems, the risk of contraceptive side effects can be offset by the benefit of avoiding an unintended pregnancy. However, men do not directly experience any of the health risks of pregnancy — their female partners do. Thus it becomes more difficult, ethically, to justify the side effects of hormonal contraceptives in men. What of non-hormonal contraceptives for men? Instead of targeting the hormonal axis that connects the brain and the testicles, non-hormonal contraceptives act directly on the production, activity or movement of sperm cells. One such approach is known as a ‘chemical vasectomy’. Developed by Dr Sujoy Guha in India, RISUG (which stands for ‘reversible inhibition of sperm under guidance’) has already entered Phase III clinical trials. In a standard vasectomy, the vas deferens (the natural transport channels for sperm cells in the testicles) are cut and sealed so that sperm are unable to enter the seminal fluid. In RISUG, a synthetic polymer is injected into the vas deferens with the same effect, but with the dramatic benefit that this polymer can be removed during a further, simple procedure that should restore normal movement of the sperm cells. RISUG is not without caveats. Unlike taking a pill or receiving an injection, it requires a small surgical procedure. And the data on RISUG reversibility is based on animal experiments. We do not yet know whether reversing it in humans would restore male fertility. The clinical trial data obtained in India has been very encouraging so far, both in terms of safety and efficacy. The non-profit Parsemus Foundation has obtained the rights to use and market the RISUG method in the US as Vasalgel, and intends to initiate the first US-based clinical trials this year or next. RISUG is probably unable to generate the kind of profits that would attract the attention of a pharmaceutical company: the synthetic polymer is inexpensive, and a single polymer injection is sufficient to suppress fertility for years. So the only hope for general availability is support from the non-profit or government sector. Other non-hormonal contraceptive methods are currently under investigation in animal studies. Dolores Mruk and Chuen-yan Cheng, scientists at the Population Council in New York, have shown that the chemical Adjudin causes reversible infertility in animals by inducing the release of immature sperm cells. A collaboration between laboratories at Baylor College of Medicine in Houston and the Dana-Farber Cancer Institute in Boston showed that JQ-1, a small molecule that targets the epigenetic enzyme BRDT, was able to reversibly inhibit sperm-cell production and fertility in male mice. But these are a long way from availability. It might take five years or longer for additional safety and efficacy studies in animals before even a small pilot human study can be conducted. If these hormonal and non-hormonal male contraceptives have been developed during the past decades, and some have been shown to be reasonably efficacious in small clinical trials, why are they not yet available for general use? Eberhard Nieschlag, professor at the University of Münster in Germany and a leading researcher in male fertility, recently described the impact of the suspension of the WHO/CONRAD efficacy trial on his field: For male contraceptives to become approved for general use, more safety and efficacy data from larger clinical trials are required. 
The suspension of the WHO/CONRAD trial should have prompted an investigation into the scientific basis of the side effects, and could have led to a new trial with improved male hormonal contraceptives. But the pharmaceutical industry was not prepared to make that investment. There is a multitude of reasons for the pharmaceutical industry’s reticence when it comes to male contraception. The efficacy and acceptability of female contraceptives set a high competitive bar. The ethical problem of justifying potential side effects without any direct health benefits for men is another deterrent. And recent controversies related to the health insurance coverage of female contraceptives in the US underscore the even greater uncertainty of who would pay for male contraceptives if they were brought to market. These investments make sense only if there is a large market for male contraceptives. Preliminary surveys seem promising: in 2000, Anna Glasier, professor of obstetrics and gynaecology at the University of Edinburgh, and her collaborators published an international survey of 1,894 women attending family planning clinics in Scotland, South Africa and China. Most women supported the idea of a ‘male pill’, and suggested that their partners would use it. In 2005, a follow-up study by Klaas Heinemann at the Centre for Epidemiology and Health Research in Berlin surveyed more than 9,000 men in Europe, Asia, North America and South America. The willingness of respondents to use newer male contraceptives was highest in Spain (71 per cent), Germany (69 per cent), Mexico (65 per cent), Brazil (63 per cent) and Sweden (58 per cent). Nearly half of the men in the US (49 per cent) and France (47 per cent) expressed an interest. On the other hand, disapproval of newer male contraceptives was highest in Indonesia (34 per cent) and Argentina (42 per cent). These surveys reveal that there is a broad, international willingness among men and women to use male contraceptives, but such an endorsement of a hypothetical ‘male pill’ is a far cry from implementing it. To use newer male contraceptives would require a very significant shift in the responsibility and burden of contraception between men and women. We won’t know how that will work in practice until male contraceptives become widely available. Scientific and cultural challenges might also explain the lacklustre involvement of the pharmaceutical industry. The efficacy data from small clinical trials has shown that there is significant biological heterogeneity in terms of how men respond to hormonal contraception. Suppression of sperm-cell production seems to be far more effective in Asian men, for example, than in Caucasian men. It is very likely that even within each ethnic group, there is a significant variability in the response to contraceptives. Instead of a ‘one pill fits all’ approach, male contraceptives might only be effective if individually tailored. The cultural challenge of introducing male contraception is also formidable. The Catholic Church strongly opposes all forms of contraception and, as the surveys have revealed, there is significant variation in men’s attitudes to male contraceptives. It is thus not surprising that pharmaceutical companies are reluctant to invest millions in large-scale clinical trials. 
But without such trials, male contraceptives will not receive the regulatory approvals needed to bring them to market. We have reached an impasse. As a society, we recognise the importance of providing options for reproductive control, yet the responsibilities (and side effects) of effective contraception are carried largely by women. Men might never be able to share the physical burden of pregnancy, but they can share the responsibilities of child-rearing and contraception. If the market cannot support this, we need to find an alternative route. Pharmaceutical companies make investment decisions based on profits, while non-profit organisations and government agencies have the luxury of supporting research that leads to equitable sharing — something that does not carry a defined monetary value. Non-profit organisations and government agencies are not in the business of manufacturing pharmaceuticals, but if they could conduct the larger clinical trials required and identify efficacious and safe male contraceptives, then the investment risk will be minimal for any interested pharmaceutical manufacturers, who could capitalise on the research to mass-produce and market the contraceptives. Jump-starting the listless development of new male contraceptives will require a substantial amount of education and support. The Parsemus Foundation, the non-profit behind Vasalgel, and the Male Contraception Information Project, in San Francisco, have made a good start on the challenge by providing information about research on male contraceptives. Politicians need to be lobbied to ensure the adequate funding of government research agencies that specifically pursue the development of male contraceptives. There is also a place for public support: research studies are always looking for male volunteers, and non-profit organisations studying male contraception rely on donations. Male contraception is an excellent example of an opportunity for crowd-funding. There has been a societal failure to produce a contraceptive method for men beyond the condom or the vasectomy, but now we have the chance to rectify that. Who will take the next step? References and links to the research mentioned above are available here. | Jalees Rehman | https://aeon.co//essays/why-is-there-still-no-contraceptive-pill-for-men | |
Ecology and environmental sciences | History tells us that plant diseases cause famines, pestilence and war. Now one is coming for our chocolate | A few degrees north of the equator, in the hot, humid rainforests of Ghana, two groups of farmers are vying for dominance over the world’s most productive chocolate-growing region. Chocolate is made from cocoa beans found in the large, rugby-ball-shaped pods of cocoa trees. People have been planting these trees in Ghana since the late 19th century and the crop is a mainstay of the country’s economy. But the trees have recently proved an inviting target for a wily group of rival agriculturalists, whose practices threaten the long-term survival of cocoa in Ghana. The trouble is, these competitors aren’t humans. They’re ants. Ants have been farming for millions of years longer than humans. These particular ants herd mealybugs — small, sap-sucking insects that look like woodlice dipped in flour. The ants shepherd and protect the mealybugs so they can ‘milk’ the sugary nutritious fluids in their waste. The bugs used to drink primarily from local rainforest trees, but when humans started clearing the forest to make way for cocoa, the ants adapted by driving their livestock into the fresh cocoa pastures. This strategy shift entangled the cocoa trees in a web of pests and pestilence. When mealybugs drink from trees, they inject them with a pathogen called cacao swollen shoot virus (CSSV). In local rainforest trees, the effects of CSSV are mild, but cocoa — a newcomer to these forests — hasn’t had a chance to evolve countermeasures. As a result, the virus pummels the trees, swelling their shoots and roots well beyond their usual size while draining the colour from their leaves. Before long, often only a few years, the trees die. The trees’ woes don’t end there, for these ants are builders as well as farmers. They strip cocoa pods to build tents for themselves and their mealybugs, protecting them from predators and pesticides. But the pods don’t have to be fresh. The ants are happy to harvest building materials from pods that have blackened with rot, thanks to two funguslike parasites — Phytophthora megakarya and Phytophthora palmivora. As they do so, they move spores from the parasites into uninfected trees, spreading black-pod disease in their wake. A ‘tent’ of soil containing the Phytophthora pathogen arches over a colony of plant-feeding bugs on the cocoa plant. These parasites and viruses make for a dangerous mix. Just as HIV makes humans more susceptible to other infections, CSSV reduces pressure inside a tree, making it easier for Phytophthora to infiltrate its tissues and create a permanent reservoir of infection. Even when the diseased pods are cleared and the ants and bugs are removed, Phytophthora can still trigger fresh bouts of black pod from within the tree. Thanks to ants, the cocoa trees are now vulnerable to parasites from three separate branches on the tree of life. It’s not easy for contemporary scientists to piece together these deadly ecologies. The details of this story are the arcana of natural history, scattered through old or obscure corners of the scientific literature and only fully assembled in the heads of a few knowledgeable scientists. One of those scientists is David Hughes, an ant-loving evolutionary biologist from Pennsylvania State University. I recently met him for lunch at a London pub, where he told me, in a deep Irish brogue, all about ants and parasites. ‘Ants are as wonderful and advanced as us,’ he said. 
‘These are two societies fighting over the same plant — two households, both alike in dignity.’ Hughes recently visited Ghana to do fieldwork on ants, and couldn’t help but notice the signatures of this peculiar ant-driven web of contagion. He saw cocoa trees planted messily among rainforest natives, and blackened pods discarded on the humid floor, left to seed the soil with Phytophthora spores. Meanwhile, white mealybugs were sucking from the trees, with trails of ants patrolling up and down alongside them. ‘When you go to a tropical rainforest, you don’t see such things,’ Hughes told me. Hughes is looking for ways to stop this entourage of disease from further ruining the fortunes of Ghana’s human farmers. And his efforts have opened his eyes to an even larger problem: the woeful neglect of plant diseases, by biomedical scientists and funding agencies alike. Laboratories bustle with research on malaria, flu, HIV, Ebola and other pathogens that destroy the human body, but there isn’t much research into those that target the plants we depend upon. If anything, investment is decreasing. ‘Our lab is half the number it used to be,’ said Amy Rossman, who studies plant diseases at the United States Department of Agriculture. ‘I think the same thing is happening worldwide.’ That could be a costly mistake, because plant pathogens are some of the most destructive plagues in the world. And it’s not just cash crops in the crosshairs; plant pathogens can also rot staple foods such as rice, wheat and cassava. ‘We’re set up for catastrophe, and we’re not talking about it,’ Hughes told me. He blames our urbanised culture. With so much time spent away from the natural world, and so many steps between our farms and our plates, we have lost a tangible connection with what we eat. ‘We live in a land of milk and honey. We’re so divorced from our food that we’re not even knowledgeable enough to be scared about the problems in getting it. We’re not thinking about the next AIDS of plants.’ The word ‘parasite’ comes from the Greek for ‘person who eats at someone else’s table’. It’s a fitting etymology, given that we lose 40 per cent of the plants destined for our dinner tables to parasites — including viruses, bacteria, fungi, worms and insects. The fungi alone are capable of catastrophic damage. Writing in Nature last year, the Imperial College epidemiologist Matthew Fisher calculated that if severe fungal epidemics simultaneously struck the five most important crops — rice, maize, wheat, potatoes and soybean — they would leave enough food to feed only 39 per cent of the world’s population. The chance that all five crops would be hit at once is unlikely, but even now these diseases consume enough food to feed 9 per cent of the globe. And the problem is hardly confined to food production; history tells us that when pestilence brings famine, then war and death follow shortly behind. Plant diseases offer all four horsemen rolled into one. Consider the positively apocalyptic effects of Phytophthora infestans, or potato blight, as it’s commonly known. P infestans, whose astonishingly accurate Latin name means ‘infectious plant destroyer’, kills potatoes and tomatoes, while its relatives — there are more than one hundred species of Phytophthora — attack oak trees, rhododendrons, soybeans, cocoa, and more. They look like fungi, grow like fungi, succumb to anti-fungal poisons, and are studied by mycologists, but their DNA reveals them to be close relatives of brown algae and kelp. 
Mycologists suspect that P infestans originally evolved to attack the wild potatoes of Central America. When Europeans brought the potato to their farmlands in the 16th century, they left the staple crop’s ancient enemy behind. But two centuries later, the boom in international transport allowed it to catch up. The parasite hopped a ship and made landfall in continental Europe, where it wreaked havoc on potato farms. And its work there was just a warm-up act, compared with the devastation it visited on Ireland — a country whose poor were wholly dependent upon potatoes. From 1845 to 1847, the plant destroyer turned potato fields to rotting mush, starving more than a million people. As a result, Ireland’s economy tanked, tensions with England mounted, and the United States, Canada and Australia gained sizeable new populations of Irish migrants. In only a few years’ time, an alga in fungal clothing changed the fate of the English-speaking world. It also changed the course of science. When the potatoes started dying, people thought they were reacting to heavy rainfall, or some other change in the environment. They thought the strange mould growing upon them was merely decay setting in on the already dying tubers. It was the English mycologist Miles Joseph Berkeley who, in 1846, first suggested that the mould was the cause of the blight and not its consequence. That seems obvious now, but it was a revolutionary suggestion at a time when most people still believed that microbes were spontaneously generated from inanimate matter. Though Berkeley was ridiculed by his peers, his insight marked the ‘birth of plant pathology’, according to David Cooke, who studies P infestans at the James Hutton Institute in Dundee, Scotland. P infestans is only one of the many plant pathogens that changed the world. Back in the 19th century, Britain’s drink of choice was coffee, and its colony Ceylon (now Sri Lanka) was the world’s greatest coffee producer. That all changed with the arrival of the East African coffee rust fungus, which found a ready-made feast among Ceylon’s dense, back-to-back plantations. The British government sent Harry Marshall Ward, another pioneering plant pathologist, to deal with the problem. He issued a now-familiar warning: planting crops in vast monocultures is an invitation for virulent epidemics. No one listened. Within two decades, the fungus had slashed Ceylon’s coffee production by 95 per cent, forcing the industry to relocate to Indonesia and the Americas. The crippled plantations were replaced by tea bushes, and tea displaced coffee as the quintessential British beverage. If Marshall Ward were around today, he would probably be disappointed that his advice still goes unheeded. We still tie the fortunes of entire regions to single staples. Around 90 per cent of the world’s calories come from just 15 types of crops, most of which are highly inbred monocultures planted over sprawling acreage. These monocultures skew the evolutionary arms race in favour of pathogens, and create the conditions wherein old threats can easily evolve into new virulent strains. Phytophthora infestans, or potato blight. Photo courtesy Wikimedia Commons. P infestans is also changing. The strain behind the Irish potato famine is long extinct, but existing lineages still cause losses of $5 billion every year. 
Two of these strains recently met and exchanged their genetic material, creating a new strain called Blue-13 that has rapidly usurped all others in Western Europe. It is extremely aggressive, completely resistant to key fungicides, and can bypass potato defences that have been specifically bred to resist P infestans. ‘It has the perfect collection of traits that allow it to spread and be evolutionarily successful at the cost of the growers,’ said Cooke, who discovered it. Will Blue-13 or something like it cause the next Irish potato famine? No one knows. The only certainty, according to Hughes, is that ‘we can always expect these diseases to get worse’. They will always catch up with escaped hosts, and they will always evolve around resistance. He added: ‘We can never expect them to get better.’ Hughes knows full well what plant diseases can do. ‘As an Irishman, I think a lot about people starving,’ he told me. Hughes grew up in a poor family from Dublin and, as a youth, had a penchant for getting into trouble. After being kicked out of school at the age of 15, he earned a living through odd jobs, from construction to cycle delivery. At one point, he worked on a pony farm, guiding tourists on pony-back into the hills of western Ireland to see the ghosts of the great famine, the rows of empty soil mounds where potatoes had rotted to mush. With so many people dead or emigrated, the land was never reclaimed. ‘You can still see the scars on the landscape 150 years later,’ he said. In his free time, Hughes nursed a deep love of biology by watching David Attenborough documentaries and climbing into abandoned roofs to look for bird nests. ‘We lived in a concrete jungle,’ he told me. ‘We didn’t have nature but inside I was a naturalist.’ That was what drew him back into education. He enrolled at the University of Glasgow and, after another brief dropout (‘I was drinking too much’) became a graduate student at the University of Oxford. At Oxford, Hughes fell in love with parasites. He was especially smitten by Ophiocordyceps unilateralis, the ‘zombie-ant fungus’. Once an ant becomes infected, the fungus not only consumes it from the inside out but also steers its behaviour, propelling it to a zone on the forest floor where the temperature and humidity are just right. Once the ant is in the right spot, the fungus erodes its jaw-opening muscles, forcing it to irreversibly bite down on a leaf. It stays fixed in this death-grip while a capsule erupts through its head, ready to rain spores down upon other ants passing below. Hughes saw parallels with the famine fields of his youth. Just as a quasi-fungus brought Ireland low, a true fungus can cripple an ant civilisation. ‘I became interested in how natural selection designed organisms to break societies,’ he told me. That was how he met Harry Evans. A British plant pathologist with shoulder-length hair, a thick moustache and a Liverpudlian accent, Evans is what colleagues cheerfully describe as ‘a character’. He did a lot of the seminal research on zombie-ant fungi; over the course of his career, he has collected samples of infected ants from around the world. Evans joined us in the pub for lunch, and he and Hughes began telling me about their partnership. ‘I wrote to him and we started working together,’ Hughes said. ‘He badgered me!’ Evans corrected. 
‘I said I was going to Brazil and he was over the next day.’ The duo formed a quick camaraderie. They notched 102 days in the field together, collecting zombified ants in a world tour that encompassed Brazil, Ghana and Australia. But this was just a side project for Evans. Hughes is the ant guru of the pair, while Evans is more of a globe-trotting botanical fixer. For 40 years, he has worked for CABI (the Centre for Agricultural Bioscience International), a non-profit organisation dedicated to solving agricultural problems. He has travelled round the world, using his extensive knowledge of fungi to save diseased crops, while helping to avert ecological disasters. After travelling with Evans, Hughes became infected by plant pathology, too. ‘Every time we walked out of a tropical forest, we’d walk into a farm and I’d get this masterclass in plant diseases,’ Hughes told me. ‘This happened for two years, and it eventually percolated into my slow-working brain.’ Their time in Ghana was especially instructive. Evans had been studying cocoa there since 1969. He was the one who showed that Phytophthora is carried into cocoa trees by ants, and infiltrates more easily thanks to CSSV. Hughes, with his knowledge of ants and his burgeoning interest in plant plagues, knew where he wanted to work. The story of plant pathology is about connections between countries as well as organisms. It has all the hallmarks of an epic — voyages across oceans, chance meetings, dashing getaways and fateful reunions — and the history of cocoa is a perfect exemplar. The genes of modern cocoa strains tell us that the plant originated in the upper Amazon. Thanks to chemical traces in Mesoamerican pottery fragments, we know that people have been making drinks from fermented cocoa beans as far back as 1900BC. That practice was still going strong millennia later, when European explorers such as the Spanish Conquistador Hernán Cortés arrived in the New World in 1504. Enchanted by chocolate drinks, they soon started cultivating cocoa in their own territories. In the mid-19th century, cocoa journeyed from Brazil to west Africa. It was introduced into Ghana in 1878, by a Ghanaian blacksmith named Tetteh Quarshie who worked on an island plantation off Ghana’s coast. Quarshie established a nursery, and started selling trees to local farmers. The industry was a slow-burner at first, but once it took off, Ghana would never be the same. In 1891, it exported its first, measly 80lb batch of cocoa; two decades later, it was the world’s largest cocoa exporter, shipping out 40,000 tons a year. As cocoa farms intruded into the rainforests, new enemies emerged to greet them. In 1936, cocoa trees started dying of a mysterious affliction, characterised by swollen stems and diseased leaves. It took four years before a young British plant pathologist named Peter Posnette identified the cause: a new virus, later named CSSV. Others soon showed that mealybugs were spreading the disease. Farms fell into neglect, millions of trees became infected, and Ghana’s cocoa industry threatened to collapse. It took serious work from Posnette, but Ghana’s cocoa plantations managed to avoid the fate of Ireland’s potato fields. At his newly formed cocoa research institute, Posnette and a team of researchers crossed existing cocoa varieties with newcomers brought in from the Upper Amazon, producing hybrids that not only grew more vigorously but tolerated CSSV. 
Over the next five decades, scientists at the institute — now the Cocoa Research Institute of Ghana (CRIG) — continued to develop more resistant and more productive hybrids. These efforts helped Ghana’s faltering cocoa industry to hold on until 1984, when economic reforms lifted it out of a 20-year funk. ‘They revitalised the cocoa industry in Ghana, which is partly why Ghana is now one of the most stable of African countries,’ said Evans, who joined the CRIG team in 1969 to study black pod disease. ‘Science-based agriculture lifted Ghana to its current position,’ added Hughes. ‘It’s as good a message as any for how good public-funded science can be in bringing countries out of the poverty trap.’ The Phytophthora-infected cocoa pod alongside a healthy (green) pod. But the battle is not won. At best, it has reached a stalemate. CSSV and black pod disease still destroy much of Ghana’s cocoa and the solution, so far, has largely been to just plant more trees. ‘It’s a brute-force approach,’ says Hughes. ‘The cocoa industry has already factored in the losses. And that’s fine for multimillion dollar companies but, for the farmers, it’s the difference between sending a child to school or not.’ Across the ocean, more threats await. Two evocatively named fungal diseases — witches’ broom and frosty pod rot — pose potent threats to South America’s cocoa industry, and are potentially even more dangerous than their African counterparts. If they reach West Africa, they could devastate the industries of Ghana, the Ivory Coast, Nigeria and Cameroon, which together produce two-thirds of the world’s cocoa. ‘If these diseases get into Africa, you can forget cocoa,’ Evans said. Frosty pod and witches’ broom have thus far been contained in South America because international cocoa stocks pass through quarantine centres in Reading in the UK or Montpellier in France. Incoming stocks can be held for two years to check for latent infections. But as potato blight and countless other diseases have shown, pathogens have a way of catching up to their hosts. Accidental introductions — whether on a tourist’s shoes, a garden centre’s plants, or a scientist’s samples — are the single biggest cause of new plant diseases. ‘Quarantines work to some extent but they only delay the inevitable,’ said Mary Catherine Aime, a plant pathologist from Purdue University in Indiana. ‘We just can’t control the way these things move across borders.’ There’s also the chilling possibility of a deliberate introduction, because a plant pathogen is a perfect bioweapon. Unlike human diseases, plant pathogens are easy to handle and safe for people — and they can quickly destabilise nations by destroying staple crops or important exports. There are disturbing precedents for this. In 1989, witches’ broom disease hit the plantations of Brazil’s Bahia region — a last refuge for the country’s struggling cocoa industry. Long-standing estates crumpled, and the region’s production plummeted by 75 per cent. But how did the fragile fungal spores make it through the careful cordon that protected the area? Some suspect foul play. The disease had appeared suddenly, in the heart of the plantations as opposed to their fringes. There were claims that diseased branches were wired to trees. 
These rumours were apparently confirmed in 2006 when a man who worked for a government agency charged with eradicating witches’ broom actually confessed to deliberately introducing it into Bahia. The goal was to destabilise the local government, which was supported by cocoa barons. If true, the incident marks the first documented account of disease-based agroterrorism. Whether through accident or malice, we cannot count on plant diseases staying where they are. Their expansion is a matter of when, not if. We need to start preparing countermeasures. Education can help. ‘The majority of farmers in Ghana and most parts of Africa do very little or nothing at all to control diseases owing to lack of knowledge,’ wrote Emanuel Moses from Ghana’s Crops Research Institute in a 2010 book on plant pathology. Simple measures can help to control cocoa diseases, like planting trees more sparsely to improve air flow and reduce humidity, planting other crops to restrict the flow of pests, or removing blackened pods to stop ants from spreading spores to new trees. Several programmes exist to train farmers in the use of these techniques. Other scientists are mining the cocoa genome for variants that are resistant to major pathogens. It’s the modern take on Posnette’s approach, but supercharged by the power of modern genetics. Still, resistant strains would only ever be a temporary solution. As P infestans has shown, pathogens will always evolve around resistance, as surely as they skip through quarantines. ‘I think breeding resistant varieties is a slow response to the amount of movement. We have to come up with something better,’ Aime said. Hughes thinks the answer lies in ecology. Diseases don’t happen in a vacuum. They come from somewhere. They are spread by things. They interact with one another. Targeting a single cocoa pathogen is too reductionist — by studying all of them together, Hughes hopes to work out the weak link in their ensemble. Maybe it’s the ants? Working with Ghanaian scientists, Hughes has been experimenting with different ways of disrupting these insect farmers, through the killing of their queens, the use of insecticide or killer fungi, and even the crafting of genetic tools that can switch off ants’ genes. If any of these techniques works, farmers could keep the ants away from cocoa and prevent them from ferrying Phytophthora spores into the trees. Non-farming ants, and there are many around, would soon replace them. Without bodyguards or tents, the mealybugs would be vulnerable to parasites and pesticides, meaning fewer of them would survive to pump the trees with CSSV. ‘You want to shift the balance towards things that aren’t pathogenic to the plant,’ Hughes explained. ‘It’s a new approach to farming that’ll lead to changes over years. You’ll never have one solution that works in perpetuity. Every time we have a silver bullet, evolution jumps back at us with a response.’ As lunch was drawing to a close, Evans recounted the story of his greatest achievement — controlling rubber vine in Australia. Originally brought in to pretty up the coal mines with its pink flowers, this woody vine ran amok in Queensland. It exuded a poisonous latex, blocked animals from reaching waterways, and smothered eucalyptuses and other native plants. Neither fire nor pesticides could control it — but Evans was able to. 
He travelled to the vine’s centre of origin in Madagascar and brought back a rust fungus that specifically attacks rubber vine. In 1995, after years of testing, planes sprayed the fungus over the vine’s front lines… and completely halted its invasion. It was a rare success for Australia, a country known for its failed attempts at biological control. ‘The same bloody plant is in Brazil at the moment!’ said Evans, who cannot get funding to control it. ‘If you’re too successful, no one listens to you.’ It doesn’t help that the story is an obscure one. A different scientist might have milked a stream of high-profile publications from that kind of success, but Evans restricted himself to a few papers in backwater journals. He had more important things to do. ‘If I’d been cleverer, I’d have modelled the spread of the vine and got some papers in Nature and Science,’ he said. ‘But that wasn’t my brief. The brief was to control the spread of the invasive.’ Hughes toasted this do-first-publish-later approach, wishing more scientists shared it. ‘People are just interested in their CVs and keeping their labs running,’ he said. ‘We should assess people on whether they controlled problems, not where they publish. Did you control it or not? You did? Oh, well done.’ Indeed, scientists with Evans’s skills and mindset — the Yodas of plant pathology — are racing to extinction faster than the crops they study. Admittedly, ‘they’ve made a disastrous job of promoting themselves’, according to Hughes, but sexy modern sciences such as molecular biology have also drawn investment away from more traditional fields. In a recent audit, the British Society for Plant Pathology found that their subject is in free fall, relegated to a few lectures at a smattering of universities. Labs have halved in numbers, most scientists in the field are over 50, and new faces are rare. (The same is true across the pond.) ‘Molecular biology tells us what makes these pathogens tick, which is exciting,’ said Cooke. ‘But if we end up with a cadre of trained molecular biologists who can’t identify an oak tree, you have a problem.’ Hughes sees a deeper tragedy at play — the loss of a patient, contemplative approach to British natural history that allowed Charles Darwin and Alfred Russel Wallace to envision the theory of evolution by natural selection. ‘People like Harry [Evans] have spent 40 to 50 years working on groups of organisms, and know them deeply in the same way that Darwin or Wallace did,’ Hughes said. ‘We’re not replacing them, and that’s a lamentable shame.’ As the old guard retires sans apprentices, we lose the knowledge in their heads and we cripple our intellectual immune system. When Phytophthora ramorum started killing oak trees in the western US in the mid-1990s, it took a long time before anyone knew what it was, giving the disease a chance to establish a foothold. When ash dieback disease hit British trees in 2012, history repeated itself. ‘There were no taxonomists to identify the fungus,’ Evans said, ‘because we fired them all.’ The knowledge that remains is hardly accessible. ‘I just tweeted a CABI paper from 35 years ago on cocoa disease, and it costs $35 to access it,’ Hughes told me. 
And some of the field’s secrets, like Evans’s discovery that P megakarya can bounce back from inside cocoa trees, aren’t even in the literature — they’re noted in printed documents, hidden within the archives of organisations such as CABI or CRIG. Hughes wants to democratise that hidden knowledge to give farmers a way of helping themselves. ‘Farming has always been a community affair but, in the modern era, we’ve lost those connections and knowledge is held by a few,’ he said. To rebuild these links, he teamed up with his Penn State colleague Marcel Salathé, a computer scientist who studies the spread of behaviour through social networks. Earlier this year, the duo launched plantvillage.com, an open-access website where people can ask each other for help with agricultural problems. Users vote the answers up and down, and accumulate points depending on how helpful they are. It’s like Quora for gardeners. ‘We’ll never invest in people like Harry again,’ Hughes said. ‘The second-best solution is to rely on the crowd.’ Many start-ups with world-changing ambitions have died on the vine, but five months in and the site is gently blooming. Every question that’s been posed, from requests for gardening tips to gripes about pests, has been answered by the small but active community. Admittedly, they are largely middle-class Americans and Europeans — a far cry from Ghanaian cocoa farmers. But Hughes is realistic. He is trying to build a thriving network first, so ‘we can be ready for three years’ time when Africa is mobile-ready’. In his vision, people could snap photos of rotting leaves or blistering fruits, and receive diagnoses and tips from farmers and academics around the world. This has already started happening. ‘They get knowledge, and the world gets a sentinel system about spreading diseases,’ he explained. ‘We want to link everyone in the world growing food.’ We might never stop plant pathogens from criss-crossing the world, but our own webs of communication might be able to spot their advances early. We might never be able to replace the natural historians of the past, but we might be able to mobilise the village to compensate for the loss of its elders. It’s an approach that could have been lifted from an ant’s playbook. Individual ants are hardly great strategists, but through their interactions, they can achieve incredible feats of swarm intelligence. Some successfully rear bugs, and build tents to defend them from threats. Others grow a delectable fungus by feeding it chopped up leaves, while killing off other moulds with antibiotic-secreting bacteria. For millions of years, ants have raised crops, herded livestock and weeded their gardens, all by working together as a large connected society. Humans could learn a thing or two from that approach. | Ed Yong | https://aeon.co//essays/an-army-of-ants-is-besieging-the-world-s-chocolate-supply | |
Childhood and adolescence | How can scientists act ethically when they are studying the victims of a human tragedy, such as the Romanian orphans? | We drove to the orphanage on a pleasant December morning, under a sky that seemed too blue. It was a short ride through a residential neighbourhood of Bucharest, littered with posters of politicians’ heads for the upcoming elections. Nervous, I mentally recited the two rules the American professor had given me the night before: no picking up the kids, and no crying in front of them. We pulled up to a dingy pink building, lined on all sides by tall wire fencing, and parked at the curb. After passing through the checkpoint of a stoic security guard, we stepped into an empty hallway. It was cleaner than I had expected; old plaster walls and chipped steps, yes, but no obvious filth. There was an overpowering smell of institutional food, like burned meatloaf. Over the next hour or so, the manager of the place — a short and affable 24-year-old guy — gave us a tour. He didn’t speak much English, but Florin Ţibu, a Romanian who works with the professor, translated for us. About 50 children and teenagers lived there, boys and girls ranging in age from about six to 18, and I saw just six adults: our tour guide, three female caregivers, and two cleaning ladies in white coats. The children weren’t in school because of the big holiday: Romania’s National Day, a celebration of the country’s unification in 1918. Perhaps a typical day wouldn’t have been so chaotic. Then again, Ţibu said, the kids always flock to new visitors. And flock they did. A boy in a red T-shirt and sweats skipped up to me, grabbed my hand, and wouldn’t let go. His head didn’t reach my shoulders, so I figured he was eight or nine years old. He was 13, Ţibu said. The boy kept looking up at me with an open, sweet face, but I found it difficult to return his gaze. Like most of the other kids, he had crossed eyes — strabismus, the professor would explain later, a common symptom of children raised in institutions, possibly because as infants they had nothing to focus their eyes on. A couple of dozen kids gathered around us in a tight circle, chirping and giggling loudly as children do. At one point they broke into a laughing fit, and I asked Ţibu what happened. They were gawking at the whiteness of my teeth, he said. Two of the girls, somewhere in that gaggle, were pregnant. We saw the kids’ bedrooms. Each had half a dozen mattresses lying on the floor and one television set. All the TVs were blaring old cartoons, some of the same ones I remember watching in my own childhood 25 years ago. Kid after kid dragged me proudly to see their room. Once, we walked in on a cleaning lady frantically sweeping, embarrassed by the cigarette butts, grey dirt, and insect carcasses all over the floor. One of the rooms held three or four older boys, still sleeping. They were heroin addicts, I would learn, and sometimes shot up in front of the younger children. After about half an hour of holding the sweet boy’s hand, I suddenly, urgently, needed to let go. I wriggled my fingers free, only to have him clutch them again. We Americans drove back to St Catherine’s, a sprawling complex of late 19th-century stone buildings and desolate courtyards that was once the largest orphanage in Bucharest. Today, it’s mostly office space, with rooms along one long hallway occupied by the professor’s team. 
We sat in one of them to talk about the morning visit. The professor is Charles Nelson, a neuroscientist from Harvard University who studies early brain development. In 1999, he and several other American scientists launched the Bucharest Early Intervention Project, a now-famous study of Romanian children who were mostly ‘social orphans’, meaning that their biological parents had given them over to the state’s care. At the time, despite an international outcry over Romania’s orphan problem, many Romanian officials staunchly believed that the behavioural problems of institutionalised children were innate — the reason their parents had left them there, rather than the result of institutional life. And because of these inherent deficiencies, the children would fare better in orphanages than families. The scientists pitched their study as a way to find out for sure. They enrolled 136 institutionalised children, placed half of them in foster care, and tracked the physical, psychological, and neurological development of both groups for many years. They found, predictably, that kids are much better off in foster care than in orphanages. Nelson has visited Bucharest 30 to 40 times since his first trip in 1999. Some things have changed: in 2007, Romania joined the European Union (EU). It has greatly expanded its state-funded foster-care system. The number of children in institutions — or ‘placement centres’, to use the preferred bureaucratic euphemism — has dropped dramatically. But other things haven’t changed. Romania is still a post-communist country suffering from high levels of poverty and corruption. It still has a weak medical and scientific infrastructure. It still has some 9,000 children — more than half of all the children in its child protection system — living in orphanages, like the boy who took my hand. Nelson had warned me several times about the emotional toll of meeting these children. So I was surprised, during our debrief, to hear him say that our visit had upset him. Turns out it was the first time that he had been to an orphanage with older teenagers, not all that much younger than his own son. ‘I’m used to being really distressed when I see all the little babies, or the three- and four-year-olds,’ he said. ‘But here, I almost had to leave at one point, to get myself some air. Just the thought of these kids living like this, it was really depressing.’ How does he do this? I wondered. Nelson never expected to be an advocate for orphans, or for anybody really: he’s a neuroscientist. In 1986, he launched his first laboratory at the University of Minnesota, which specialised in using electroencephalography (EEG) — a harmless technique for measuring brain waves via a soft skullcap of electrodes — on babies and toddlers. His field of developmental neuroscience got a surge of attention in April 1997, when Bill and Hillary Clinton put on a one-day meeting of researchers called ‘The White House Conference on Early Childhood Development and Learning: What New Research on the Brain Tells Us About Our Youngest Children’. The First Lady gave the gist of the meeting in her opening remarks: the first three years of life, she said, ‘can determine whether children will grow up to be peaceful or violent citizens, focused or undisciplined workers, attentive or detached parents themselves’. The conference was covered widely in the media. 
In the wake of all the hoopla, the Chicago-based John D and Catherine T MacArthur Foundation asked Nelson to lead a small group of scientists to dive more deeply into these topics. The resulting Research Network on Early Experience and Brain Development, fully launched in 1998, included 12 researchers who shared a plush budget of about $1.3 million a year. Nelson held the purse strings. The Network’s first studies used animals: baby mice that were either frequently or infrequently handled by their human caretakers; baby barn owls whose brain wiring changed dramatically after wearing prisms over their eyes; and most striking, baby rhesus macaque monkeys that had been separated from their mothers. Researchers had isolated monkeys before. In the 1960s, the American psychologist Harry Harlow famously reared baby monkeys in complete isolation for up to two years. The animals showed severe and permanent social deficits, bolstering the then-controversial idea that the maternal-child bond is crucial to healthy development. The Network scientists wanted to know whether the timing of the maternal separation made any difference. Monkeys typically become independent around six months old. The Network studies found that when monkeys are separated very early, at just a week old, they develop severe symptoms of social withdrawal, just as Harlow had observed: rocking back and forth, hitting and biting themselves, and running away from any approaching monkey. In contrast, when the babies are separated at one month old, they show inappropriate attachment, grabbing hold of any nearby monkey. ‘We concluded from this that the four-week animal had an attachment with mom and then had that attachment ripped away,’ Nelson says. ‘The one-week animal never formed an attachment, so it didn’t know how to relate socially.’ As the monkey data rolled in, Nelson was hearing about human social deprivation from his Minnesota colleague Dana Johnson, a neonatologist who had long worked on international adoptions. Johnson treated orphans from all over the world, but was most disturbed by those from Romania. Nelson invited Johnson to talk at a Network meeting in January of 1998. In a conference room of the Claremont Hotel in Oakland, California, Johnson switched off the lights and played the Network scientists a few disturbing movies of children in Romanian orphanages. Some kids were rocking and flailing and socially withdrawn; others were clingy. ‘We were all very teary-eyed,’ Nelson recalls. Dr Charles ‘Chuck’ Nelson photographed outside the St Catherine’s orphanage in Romania. Photo courtesy Dr Charles Nelson. Immediately following Johnson’s presentation, Judy Cameron, leader of the monkey project, gave the group an update of her findings. ‘She starts showing videos of these monkeys, and they look just like the videos of Dana’s kids,’ Nelson told me. ‘It really freaked us all out.’ Romania has had orphanages for centuries. But its orphan crisis began in 1965, when the communist Nicolae Ceaușescu took over as the country’s leader. Over the course of his 24-year rule, Ceaușescu deliberately cultivated the orphan population in hopes of creating loyalty to — and dependency on — the state. In 1966, he made abortion illegal for the vast majority of women.
He later imposed taxes on families with fewer than five children and even sent out medically trained government agents — ‘The Menstrual Police’ — to examine women who weren’t producing their quota. But Ceaușescu’s draconian economic policies meant that most families were too poor to support multiple children. So, without other options, thousands of parents left their babies in government-run orphanages. By Christmas day in 1989, when revolutionaries executed Ceaușescu and his wife by firing squad, an estimated 170,000 children were living in more than 700 state orphanages. As the regime crumbled, journalists and humanitarians swept in. In most institutions, children were getting adequate food, hygiene and medical care, but had woefully few interactions with adults, leading to severe behavioural and emotional problems. A handful of orphanages were utterly abhorrent, depriving children of their basic needs. Soon photos of dirty, handicapped orphans lying in their own excrement were showing up in newspapers across the world. ‘I was very taken with the kids in orphanages,’ Johnson says. Their condition ‘was a stunning contrast to most of the kids we were seeing come for international adoption who had been raised in foster homes’. In his presentation, Johnson had mentioned that the head of Romania’s newly formed Department for Child Protection, Cristian Tabacaru, was keen on closing down his country’s institutions. After seeing the movies, Network scientist Charles Zeanah, a child psychiatrist from Tulane University who specialised in infant-parent relationships, was gung-ho about meeting Tabacaru and setting up a humanitarian project. Nelson was touched by the videos, too. And he couldn’t help but think of the scientific possibilities of studying these children. ‘The animal model could allow us to dig into brain biology and all of that but, at the same time, we’d be running a parallel human study.’ Eleven months after that emotional hotel meeting, Zeanah and his wife, a nurse and clinical psychologist, travelled to Romania and saw the orphans for themselves. During their first orphanage visit, the couple couldn’t help but start bawling in front of the kids. One child reached out to comfort them, saying: ‘It’s OK, it’s OK’. The Zeanahs also met with Tabacaru. He was eager to work with the MacArthur group because he thought that a rigorous scientific study could help his cause. ‘If there was scientific evidence to support the idea that foster care was better for kids, he thought he’d have more leverage with his political colleagues,’ Nelson told me. The data, in other words, could speak for the children. Two days before our visit to the orphanage, I accompanied Nelson to a homely green building that houses the psychology department of the University of Bucharest, where he holds an honorary doctorate. He had been invited by the Dean to give a talk on the ethics of human research. All reputable scientific institutions follow a few ethical principles to guide their human experiments: participants must give informed and unambiguous consent; researchers must thoroughly consider possible risks and benefits; the gains and burdens of research must be equally distributed to participants and society at large. These rules are largely unheard of in Romania, let alone enforced. In a packed auditorium, Nelson began his lecture by describing the fundamental moral dilemma facing all clinical studies. 
‘The real goal of research is to generate useful knowledge about health and illness, not necessarily to benefit those who participate in the research,’ he said. That means, he added, that participants are at risk of being exploited. Nelson outlined the sad history of human rights violations done in the name of science. There was Josef Mengele, the Nazi physician who performed medical experiments — radiating, sterilising, infecting, and freezing identical twins, among other atrocities — on Auschwitz prisoners. Mengele escaped capture after the war, but 20 other Nazi doctors were tried in a US military tribunal in Nuremberg, Germany. The judges at these trials created a list of 10 ethical tenets for human research, known as the Nuremberg Code. These included voluntary consent, avoidance of suffering, and the right of the subject to end the experiment at any time. The Nuremberg Code provided the intellectual basis for the 1964 Declaration of Helsinki, the first ethics text created by the medical community and one that’s still updated frequently. It’s not legally binding, but thousands of research institutions use the declaration to guide their formal regulations and ethical review committees. Today the importance of these rules is obvious, but it was decades before they were systematically enforced — and many ethically dubious experiments happened in the interim. In the 1950s and ‘60s, for example, researchers from New York University fed mixtures of fecal matter infected with hepatitis to mentally retarded children living at the Willowbrook State School in Staten Island. The researchers’ intent, as they would publish in the prestigious New England Journal of Medicine in 1958 and ‘59, was to track the course of the disease and the effect of new antibody treatments. (The researchers argued that since hepatitis was rampant in the institution anyway, they weren’t exposing the children to any additional harm.) Meanwhile, 1,000 miles south, other researchers were testing the natural history of untreated syphilis on hundreds of poor black men in Tuskegee, Alabama. The men were not only denied treatment for the disease, but had no idea they were sick. In 1972, the study’s 40th year, a whistle-blower scientist finally told the press about the effort, which had been sanctioned by the US Centers for Disease Control and Prevention. The Tuskegee syphilis experiment triggered a public uproar and a US Congressional investigation that ultimately shut down the research. ‘It became the standard bearer of unethical research,’ Nelson told the room of Romanian students. These were the ugly precedents confronting Nelson and his colleagues in 1999, when they began discussions of how to set up the early intervention project in Bucharest. They knew from the outset that the project would be ethically precarious: could there be a more vulnerable study population, after all, than orphans with physical and psychological disabilities living in an economically feeble and politically unstable country?
As the bioethicist Stuart Rennie later wrote of the Romanian orphans: ‘Researchers who choose them as study participants — in this age of intensified ethical scrutiny — would seem to have a career death-wish.’ The MacArthur Network scientists spent the better part of a year hammering out the ethical parameters and experimental design of the project. They wanted to use the gold standard of clinical research design: a randomised controlled trial. This would allow them to objectively compare children given one intervention (foster care) with those given another (institutional care). For most randomised controlled trials, if one intervention proves to be better toward the beginning of the trial, researchers will call off the study and put all participants on that treatment. But that wouldn’t be an option for this study. The problem was that, with a few exceptions, foster care didn’t exist in Romania. That meant the scientists would have to create their own system, leading to a slew of sticky complications. How would they choose families and adequately train them? What was appropriate payment? What if a particular foster-care family was abusive, or otherwise didn’t work out? Was it fair to leave half of the children languishing in orphanages? What would happen to the children if the study (and its funding) ended? The team answered these questions with the help of non-governmental organisations (NGOs) in Romania that specialised in orphan care. They would recruit foster families through newspaper advertisements and put them through a rigorous training programme for parenting skills. They would pay the families well — 250 Romanian Lei per month (about $96 at the time), which was almost twice the minimum wage in Romania. And after that initial placement, the Department for Child Protection would be in charge of the children’s whereabouts, just as they were before. So, for example, if a biological mother came forward and wanted her child back, the department could opt to ‘reintegrate’ the child. Or if more foster homes were to become available, then the department could move children from the institutions into families. And if the project were to stop for any reason, the Romanian government had agreed to take over the payment of the foster-care families. Three of the MacArthur scientists — Nelson, Zeanah, and the psychologist Nathan Fox of the University of Maryland — stepped up as leaders of the project. After getting approval from ethics committees at each of their universities, the study launched a feasibility phase in November of 2000, and officially began collecting data in April 2001. The plan was to end the study after 42 months. The researchers set up a satellite lab in St Catherine’s, which at the time was still operating as a placement centre for about 500 children. The researchers hired half a dozen Romanians to follow the participants’ personal cases and collect data — on physical growth, IQ, psychological development, and later, EEG and brain scans — every few months. These Romanian researchers, many of whom are still part of the project, were intimately familiar with their country’s orphan problem. Take Anca Radulescu, who is now the project’s manager in Bucharest and the team’s mother hen. Radulescu was born in 1968, two years after Ceaușescu’s abortion ban, Decree 770. People born around this time are known as Decreţei: children of the decree. As a young girl, Radulescu remembers, her mother told her that despite her birth year, she was not one of the Decreţei — she was wanted. 
In 1997 Radulescu had begun working at St Catherine’s as a psychologist. She was hired thanks to new legislation — passed because Romania was trying to get into the EU — that moved the administration of orphans from the Ministry of Health to the newly created Department for Child Protection. The law marked the beginning of a philosophical change in the government’s treatment of orphans, with a new focus on nurture over nature. The transition was ‘a nightmare’, Radulescu says, because of the doctors who managed the institutions. They were dismissive of psychology and social work, both of which had been banned during Ceaușescu’s reign, and believed that the orphans’ problems were medical. They gave the children a physical exam nearly every day, and prescribed them sedatives at night. Meanwhile, the children weren’t getting the social interaction they desperately needed. They lived in units of 40 or 50 kids, each with about six adult caretakers who were kept busy preparing food and doing laundry. The kids were left in big rooms to play by themselves. In late 2000, Radulescu started working for the brand-new Bucharest Early Intervention Project, in offices just a few corridors away from those residential units. She and the rest of the team, working closely with Romanian NGOs, used newspaper advertisements to find foster-care families and never-institutionalised children (who would serve as community controls). The team screened 187 orphans from six Bucharest institutions, eventually choosing 136 who did not have major medical problems. The children ranged from six to 31 months old. They were randomly assigned to either foster care or the orphanage, with siblings kept together. In the end, 69 children went into foster care and 67 stayed in institutions. For its first couple of years, the Bucharest project rolled along smoothly and quietly. This was remarkable given the constant political turnovers (including one in which Tabacaru, the researchers’ government ally, was booted out). Then, in June of 2002, a crisis. The Bucharest lab had an unannounced, and unwelcome, visitor: Baroness Emma Nicholson. Nicholson, hailing from the village of Winterbourne in England, was a member of the European Parliament and had been appointed to represent Romania’s application into the EU. This made her a powerful figure in Romania, which had been trying to join the EU since 1993. She also happened to be an outspoken opponent of international adoptions, which she felt were avenues for child trafficking. Thanks to her influence, in 2001 Romania placed a moratorium on international adoptions. After Nicholson’s visit to the lab, she was quoted in several Romanian newspapers making damning accusations against the Bucharest project. ‘She goes to the press and says that we’re doing a study, using high-tech American measures, to identify the smartest orphans so we can sell them on the black market,’ Nelson told me one night, practically sputtering. Nicholson would deny that she ever accused the scientists of trafficking, but she continued to describe the MacArthur project as illegal and unethical. Although the claims were patently false, and no formal charges were ever made, the story was quickly picked up by international newspapers, including Le Monde and The New York Times.
The scandal died down quickly after Nelson called Michael Guest, then US ambassador to Romania, who ran interference with the Romanian government. But the team learnt an important lesson about their public profile. ‘We never, ever took a position on international adoptions — it would have been suicide,’ Nelson said. ‘We took the model of, look, we’re scientists. Our job is to collect the data and give it to others who know how to do policy, not to take sides on an issue.’ The BEIP project stayed under the radar until 8 June 2004, when Nelson’s team held a press conference to announce some exciting data. In the Hilton Hotel in Bucharest, with representatives from several Romanian ministries and the US ambassador in attendance, the researchers reported that, as expected, the 136 children who started in institutions tended to have diminished growth and intellectual ability compared with controls who had never lived outside of a family. But there was a surprising silver lining. Children who had been placed in foster care before the age of two years showed significant gains in IQ, motor skills, and psychological development compared with those who stayed in the orphanages. The scientists published these findings in 2007, in the prestigious journal Science. That paper is the most famous to come out of project, but it’s just one of nearly 60. Others have shown, for example, that toddlers who never left institutions have more repetitive behaviours than those who went into foster care. Long-institutionalised toddlers also show different EEG brainwave patterns when looking at emotional faces. As the children got older, the researchers gave them brain scans (renting out time with a private clinic’s MRI machine, one of only a handful in the country). These scans showed that, at around the age of eight, the children who grew up in institutions have less white matter, the tissue that links up different brain regions, compared with those in foster care. The researchers looked at the children’s genomes, too, and found that those who lived the longest in orphanages tend to have the shortest telomeres, the caps on the end of chromosomes that are related to lifespan. The project is now funded not only by the MacArthur Foundation, but by grants from the US National Institutes of Health (NIH) and the Binder Family Foundation. After 14 years, the Bucharest project is well-known and well-respected in the scientific community. At first, though, many scientists had concerns about its ethical design. For example, when the researchers first submitted their data to Science, the journal’s editor didn’t know what to make of its ethics. So she sent it to bioethicists at the NIH for a thorough review. ‘Even if you study ethics all the time, it turns out this is a very interesting ethical case,’ said Joseph Millum, one of the NIH bioethicists who reviewed it. As Millum and his colleague Ezekiel Emanuel would explain in a commentary published in the same issue of Science, they did not find the work to be exploitative or unethical. The Bucharest project study differs from most randomised control trials done in disadvantaged countries, Millum explained. Those tend to be studies of a new treatment — an antiretroviral drug to treat HIV in Africans, say. It’s ethical to put people through those trials because the researchers don’t know from the outset whether the drug will work. ‘The hope is that the new knowledge you get out of the study is then going to be useful in informing practice,’ Millum said. 
However, in the Bucharest project’s case, the researchers already knew from a multitude of studies in Western countries that foster care is better for children than institutionalised care — that’s why Western countries have so few institutions. So although the study could potentially answer lots of new, open questions, the one that justified its existence had already been answered. Still, that older research had not influenced Romanian social policy; many government officials did not trust the idea of foster parents, and believed that institutions provided adequate care. That, plus the fact that the project had close connections with the government, lent credence to the argument that the study could change policy, Millum explained. ‘The answers to the question the study asked could have changed practice.’ But realistically, how likely was it that the study would change anything? And once the study was over, were the scientists supposed to then become advocates for those policy changes? It’s complicated, Millum said. ‘People have different views about whether there is an obligation to provide successful interventions after a study is complete.’ If a medical study is taking place in Western Europe, for example, where there is a relatively robust health care system, then those health institutions will probably be the ones integrating the new data into policy, he says. But in an African country, for example, with no health care to speak of, the researchers might share more of the burden. These are not easy waters to navigate. And there are limits, of course, to what even the most motivated scientists can do. ‘The idea that there is a single experiment that leads to a breakthrough, and then we solve the problem is, sadly, naive,’ Millum said. ‘They can’t control what happens in Romania.’ In late May this year, exactly six months after my Bucharest trip, I had lunch with Nelson in Boston to catch up. I asked him whether he thought Romania’s orphan situation had changed much since he first learnt about it 14 years ago. After all, I pointed out, some of Romania’s most destructive policies regarding orphans are still in place. The international adoption moratorium was made permanent in 2005. Domestic adoption exists, but comes with onerous regulations. A taxi driver in Bucharest told me a story about friends of his, native Romanians, who have been trying to adopt a Romanian orphan for years. The regulations seem ridiculous; for example, children can’t be adopted until the state has attempted to make contact with all of their fourth-degree relatives. ‘There are two things that have changed,’ Nelson said: one good and one bad. The good: Romania has seen a significant drop in the rate of child abandonment and institutionalisation. On 23 June 2004, 15 days after the Bucharest project’s first big press conference, Romania passed Law 272/2004, stating that children younger than two are not allowed to be placed into residential facilities. The law has loopholes — children with severe handicaps can still be institutionalised, and young babies can still be left in maternity hospitals for their first few years — but it signifies a major change in attitude, and seems to have reduced the overall number of institutionalised children. It’s impossible to know how much credit the Bucharest project deserves for that law. The project was by then well known among Romanian officials. 
But there was another powerful force at work: Romania badly wanted to get into the EU, and the EU (thanks in large part to Baroness Nicholson) had demanded that Romania deal with its orphan problem. The Bucharest project data and the EU pressure were ‘like a perfect storm’, Nelson said. The more depressing change that Nelson has noticed since 1999 is the global recession, which hit eastern Europe hard. A study last year by the European Commission found that Romania still has the continent’s highest rate of babies abandoned in maternity hospitals per year, at 8.6 per 1,000 live births. ‘Since the recession hit, Romania has cut back on foster care,’ Nelson said, ‘and parents with kids in foster care are putting the kids back in institutions.’ For all that they hope to change in Romania’s social policy, the researchers are more immediately concerned with the children in their study. These kids have known some of these researchers for as long as they can remember. Relationships have formed. Of the original 136 children the researchers recruited from institutions, 62 are now living with foster or adopted families, 31 were reintegrated with their biological parents, and just 17 are still living in institutions (of the rest, 10 live in ‘social apartments’, which are similar to group homes, and 16 dropped out of the study). All evidence suggests that these kids are no worse off today than they would have been had the study never existed. But that doesn’t mean they’re doing well. In the Bucharest lab, I met a 12-year-old project participant named Simona and her biological mother. Simona was the youngest of four children; when she was eight months old, her mother could no longer afford to keep her. So she dropped her off at St Catherine’s, where her older sister, an epileptic, had already been living for several years. Simona’s mother told me how difficult it was to give up her babies. She visited them every week, and was sad to see that they were often sick with a cold or a rash. When Simona was five years old, her mother was receiving enough financial assistance from the government to bring her back home. But those years in the institution took a toll: Simona has a sweet disposition, like her mother, but she’s very thin, and her IQ is 70. I next met 13-year-old Raluca, a strikingly pretty girl who went into foster care at 21 months old and has lived with the same family ever since. Raluca is stylish and intellectually sharp; her big eyes, unlike Simona’s, made frequent contact with mine. At first, I thought of Raluca as one of the lucky ones; she escaped the hell of the orphanage. But she has different problems. She’s defiant to her teachers and parents, and has started smoking and seeing older boys. Her foster mother has threatened to give her up. These two girls are doing relatively well. The Bucharest project’s staff is dealing with a handful of participants in more dire situations. While sitting in on a lab meeting, I heard a few examples: a girl who at age 10 was sexually attacked by her neighbour; a Roma girl who, at age 12, was returned to an orphanage because her foster-care mother said she was stealing, lying, and ‘had a gipsy smell’; another 12-year-old girl who was reintegrated with her grandparents and then, with their blessing, married a 12-year-old boy. The scientists worry that these sorts of horror stories will become more common as the children ride the rollercoaster of adolescence. And then there are the 17 participants who still live in orphanages. 
They’re slightly better off than the average institutionalised child, in that they get regular medical assessments and constant check-ins from the researchers. After doing a brain scan of one boy, for example, the researchers discovered a nasty infection hidden in the space behind his ear. These mastoid infections can be fatal, but the boy was fine after a round of antibiotics. Still, institutional life is undeniably miserable. During my visit to the orphanage, I chatted with a 14-year-old Bucharest project participant named Maria. Maria was abandoned at birth and spent her first four months in two different maternity hospitals. She’s been in orphanages ever since, moving every few years. She has a normal IQ, which means she’s far more resilient than others with her history. She was shy when we talked, and didn’t make much eye contact, but otherwise seemed like a normal girl. I asked Maria what she thought was the worst thing about living in the placement centre. She said it was the older boys who take drugs. And what about the best thing? I asked. She paused for about half a minute, looking down at her purple Crocs. The times we get to leave for a little while, when we can take the bus to the park, she said. When Nelson’s team first set up the Bucharest Early Intervention Project, the MacArthur Foundation gave them a separate pot of money to create a humanitarian institute in Bucharest. The goal of the so-called Institute of Child Development was to work with local officials and non-profits on orphan issues, as well as to train a new generation of Romanian researchers. The institute has put on several scientific and policy workshops, inviting hundreds of researchers from across the world. The last of these will probably take place in November. Then in December 2013, the MacArthur funding runs out, and it’s unclear whether the institute will continue under local direction. ‘We’re limited in our resources,’ says Elizabeth Furtado, who has been the Bucharest project’s manager since 2006 and visits the Bucharest lab about twice a year. Furtado has a four-year-old son. She copes with the job by compartmentalising; for example, she has very intentionally not read the full life histories of any of the participants. But sometimes the pain is unavoidable. She was with Nelson and me the day we visited the orphanage — her first time in an institution since becoming a mother. ‘It took me almost a month after coming back to get to a [point] where I could kind of let it go and focus on my relationship with my son,’ she told me. The last two years on the project have been somewhat defeating, Furtado says, because the adolescents’ behaviours are becoming more difficult to manage, and the foster-care parents are getting less and less support — financial, educational, emotional — from the government. ‘On the one hand, I know that we are doing a lot of good for a lot of these kids,’ she says. ‘But it makes me sad that legislation isn’t keeping up with enough of what we’re finding.’ Nelson, too, has felt his share of emotional tension over this project, though he tends to downplay it (he often refers to being sad, for example, as having an ‘activated amygdala’). Like the Zeanahs, he wept on his first visit to St Catherine’s, in 1999, where he saw a room full of babies lying in cribs and staring at white ceilings while their adult caretakers chatted and smoked cigarettes in the corner.
He remembers staring out the plane window for most of the long flight back. When he arrived at his house, he rushed to hug his bewildered teenage son. ‘You just feel grateful,’ he told me at our recent lunch. Over time, though, Nelson has become desensitised, holding on to the idea that scientific data will eventually pave the road for better social policy. This June, the researchers learnt that a grant they received from the NIH was renewed for another five years (a coup given recent cuts to the US federal budget). With that money, the team can track the children’s brain structures, cognitive skills, and emotional maturity from age 12 to 16 — a period that, despite major brain reorganisation, doesn’t get much attention in policy circles. And Nelson is in the process of setting up similar child intervention projects in other parts of the world, including Brazil and Chile. Nelson’s last trip to Bucharest was in April. Soon after he got home to Boston, his mother came to visit. She asked him how he could go over there all the time without being constantly upset. He told her, ‘Well, you couldn’t do what I do if you got upset all the time.’ But how do you avoid it? I pressed. ‘You just sort of learn to deal with it,’ he said. ‘You put on your scientist hat and detach.’ Some names have been changed to protect the identities of those mentioned in the article. | Virginia Hughes | https://aeon.co//essays/romanian-orphans-a-human-tragedy-a-scientific-opportunity | |
Stories and literature | We yearn for silence, yet the less sound there is, the more our thoughts deafen us. How can we still the noise within? | Years ago, in my novel Cleaver (2006), I imagined a media man who is used to frantic bustle and talk going in search of silence. He flees to the Alps, looking for a house above the tree line – above, as he begins to think of it, the noise line; a place so high, the air so thin, that he hopes there will be no noise at all. But even in the South Tirol 2,500 metres up, he finds the wind moaning on the rock face, his blood beating in his ears. Then, without any input from his family, his colleagues, the media, his thoughts chatter ever more loudly in his head. As so often happens, the less sound there is outside, the more our own thoughts deafen us. When we think of silence, because we yearn for it perhaps, or because we’re scared of it — or both — we’re forced to recognise that what we’re talking about is actually a mental state, a question of consciousness. Though the external world no doubt exists, our perception of it is always very much our perception, and tells us as much about ourselves as it does about the world. There are times when a noise out there is truly irritating and has us yearning for peace. Yet there are times when we don’t notice it at all. When a book is good, the drone of a distant lawnmower is just not there. When the book is bad but we must read it for an exam, or a review, the sound assaults us ferociously. If perception of sound depends on our state of mind, then conversely a state of mind can hardly exist without an external world with which it is in relation and that conditions it — either our immediate present environment, or something that happened in the past and that now echoes or goes on happening in our minds. There is never any state of mind that is not in some part, however small, in relation to the sounds around it — the bird singing and a television overheard as I write this now, for example. Silence, then, is always relative. Our experience of it is more interesting than the acoustic effect itself. And the most interesting kind of silence is that of a mind free of words, free of thoughts, free of language, a mental silence — the state of mind my character Cleaver failed to achieve despite his flight to the mountains. Arguably, when we have a perception of being tormented by noise, a lot of that noise is actually in our heads — the interminable fizz of anxious thoughts or the self-regarding monologue that for much of the time constitutes our consciousness. And it’s a noise in constant interaction with modern methods of so-called communication: the internet, the mobile phone, Google glasses. Our objection to noise in the outer world, very often, is that it makes it harder to focus on the buzz we produce for ourselves in our inner world. Yet all of us, at least occasionally, reach the point where the motor of thought feels out of control. Thoughts run away with themselves, go nowhere new, and are nevertheless destructive in their insistent revisiting of where we’ve been a thousand times before. So much of Modernist literature is about this buzz of consciousness, emphasising its poetic quality. One thinks of James Joyce, or Virginia Woolf.
Some, however, understood how exhausting and destructive it could be: a character who can’t still her thoughts was ‘destroyed into perfect consciousness’, writes D H Lawrence in his novel Women in Love (1920). By contrast, a certain genre of late 20th-century literature — from Samuel Beckett through Thomas Bernhard to Sandro Veronesi, David Foster Wallace and many others — is dominated by a voice constantly trying to explain the world, constantly denouncing the scandal of the world, constantly disappointed and frustrated, but also pleased with itself, pleased with its ability to be scandalised, a voice whose ceaseless questioning and criticising has long become a trap, from which consciousness seeks release in various forms of intoxication, or sleep, or suicide. There is, as it were, a catharsis of exhaustion, exhaustion with the dazzling, disturbing voice of the mind. Such a mental voice is also a source of self-regard. This is the catch that springs the trap. The mind is pleased with the sophistication of its thinking. It wishes the monologue to end, and yet, simultaneously not to end. If it did end, where would identity be? It yearns for silence and fears silence. The two emotions grow stronger together. The more one yearns for silence, the more one fears the loss of identity if the voice should quieten. For example, when a person contemplates a radical change in life — going to live alone in the moors of Galway perhaps, or to a 10-day silent Buddhist retreat — the more he or she fears it, too, fears the moment of change. So our ideas of silence are tied up with questions of self-loathing and self-regard. The end of the monologue is inviting but also frightening, the way children are frightened of going to sleep. Our desire for silence often has more to do with an inner silence than an outer. Or a combination of the two. Noise provokes our anger, or at least an engagement, and prevents inner silence. But absence of noise exposes us to the loud voice in our heads. This voice is constitutive of what we call self. If we want it to fall silent, aren’t we yearning for the end of self? For death, perhaps. So talk about silence becomes talk about consciousness, the nature of selfhood, and the modern dilemma in general: the desire to invest in the self and the desire for the end of the self. Of course, we have strategies for getting by. There are soft solutions such as listening to music, or reading. Consciousness is invited to follow someone else’s score or storyline. We temporarily hand over the controls to another director. But as soon as we stop reading or listening, the mental noise starts again. We haven’t resolved anything or learnt anything about ourselves. We haven’t changed the nature of the discomfort. More radical, and mortifying perhaps, are solutions involving ritual prayer, rosaries, or mantras. Such an approach feels like a full-scale assault on the self, with an acoustic weapon. Despite, or perhaps because of, my religious childhood, I have never tried this. I’ve never desired a mantra. I suspect, as with music, once the mantra is over, the chattering self would bounce back more loquacious and self-righteous than ever. Or one might try Vipassana — a form of mediation that goes to the heart of this conflict between yearning for silence and fearing it. Without being too specific about why I originally approached Vipassana — let’s just say that I had health problems, chronic pain — someone suggested that this discipline might help. 
I had become aware that though my pains were not, as they say, merely in the mind, my mental state had certainly contributed to the kind of physical tensions that, over many years, had begun to make my life a misery. The first Vipassana retreat I attended, some five years ago now, was in the mountains north of Milan where I live and work. There seemed no point in going further afield merely to sit on a cushion. In the opening session, I was asked to take a vow of silence for the full 10 days of my stay. So, for all this time, I lived in silence, ate in silence. Above all, I sat for many hours a day, as many as 10, in silence. But there were no chants or mantras to still the mind and get one through. Rather, I was encouraged to substitute, slowly and patiently, my normally talkative consciousness with an intense awareness of breathing and sensation; that is, of the present animal state of our being. It’s fairly easy to concentrate on the body in motion. If you’re running or swimming, it’s possible to move into a wordless or semi-wordless state that gives the impression of silence for long periods. In fact one of the refreshing, even addictive, things about sport is the feeling that the mind has been given a break from its duty of constantly building up our ego. But in Vipassana you concentrate on sensation in stillness, sitting down, not necessarily cross-legged, though most people do sit that way. And sitting without changing position, sitting still. As soon as you try to do this, you become aware of a connection between silence and stillness, noise and motion. No sooner are you sitting still than the body is eager to move, or at least to fidget. It grows uncomfortable. In the same way, no sooner is there silence than the mind is eager to talk. In fact we quickly appreciate that sound is movement: words move, music moves, through time. We use sound and movement to avoid the irksomeness of stasis. This is particularly true if you are in physical pain. You shift from foot to foot, you move from room to room. Sitting still, denying yourself physical movement, the mind’s instinctive reaction is to retreat into its normal buzzing monologue — hoping that focusing the mind elsewhere will relieve physical discomfort. This would normally be the case; normally, if ignored, the body would fidget and shift, to avoid accumulating tension. But on this occasion we are asking it to sit still while we think and, since it can’t fidget, it grows more and more tense and uncomfortable. Eventually, this discomfort forces the mind back from its chatter to the body. But finding only discomfort or even pain in the body, it again seeks to escape into language and thought. Back and forth from troubled mind to tormented body, things get worse and worse. Silence, then, combined with stillness — the two are intimately related — invites us to observe the relationship between consciousness and the body, in movement and moving thought. Much is said when people set off to meditation retreats about the importance of ‘finding themselves’. And there is much imagined drama. People expect old traumas to surface, as though in psychoanalysis. In fact, what you actually discover is less personal than you would suppose. You discover how the construct of consciousness and self, something we all share, normally gets through time, to a large extent by ignoring our physical being and existence in the present moment.
Some of the early names for meditation in the Pali language of the Buddhist scriptures, far from linking it to religion, referred only to ‘mental exercises’. This form of meditation alters the mind’s relationship with the body. It invites the meditator to focus attention on all parts of the body equally, without exception, to guide the consciousness through the body and to contemplate sensation as it ebbs and flows in the flesh, and this without reacting in any way — without aversion to pain, without attachment to pleasure. So we become aware that even when we are still, everything inside us is constantly moving and changing. Moreover, this ‘activity’ is not subordinated in the mind to any other. One renounces any objective beyond the contemplation itself. You are not meditating in order to relax, or to overcome pain, or to resolve a health problem, or to achieve inner silence. There is no higher goal but to be present, side by side with the infinitely nuanced flux of sensation in the body. The silence of the mind puts you in touch with the body. Or simply, silence of the mind is awareness of being. It is hard, at the beginning, to focus, first for minutes at a time, then for hours, on one’s breathing. It is hard, at first, to find any sensation at all in many parts of the body when they are still — the temples, the elbows, the calves. Yet once the mind does latch on to sensation, or when sensation responds to the mind’s patient probing, all at once it becomes easier. Suddenly the body becomes interesting and one’s obsessive interest in one’s own wordy thoughts begins to dissolve. Language melts away and in the silence all kind of changes occur in the body. The process is neither that of a single switch being turned, nor of a steady continuum, but of a series of small gains and losses; perhaps a larger step forward, then a small relapse. If one is persistent, undaunted, in one’s attempts to concentrate, if one is successful in showing neither aversion to pain nor indulgence in pleasure, then, very slowly, the stillness and silence deepen in an atmosphere of beatitude that is simultaneously and indivisibly both physical and mental. It is as if, as the body is slowly put together and all its component parts unite in an intense present, so the historical self is taken apart and falls away. At no point is it experienced as a loss, but rather as a fullness of existence; something brimful, very ordinary and very beautiful. The words we constantly use and the narratives we write reinforce a drama of selfhood that we in the West complacently celebrate. There is also much consolation taken in the way in which writing and narrative can transform emotional pain into a form of entertainment, wise and poignant in its vision of our passage through the world, intense and thrilled by its own intensity. Narrative is so often the narrative of misery and of the passage through misery. What silence and meditation leaves us wondering, after we stand up, unexpectedly refreshed and well-disposed after an hour of stillness and silence, is whether there isn’t something deeply perverse in this culture of ours, even in its greatest achievements in narrative and art. So much of what we read, even when it is great entertainment, is deeply unhelpful. | Tim Parks | https://aeon.co//essays/is-the-sound-of-silence-the-end-of-the-self | |
Ecology and environmental sciences | From Atlantis to Noah’s Ark, we have long been drawn to stories of submerged lands. What lies beneath the flood myths? | In 1931, a trawler called the Colinda sank its nets into the North Sea, 25 miles off the coast of Norfolk, and dredged up an unlikely artefact — a handworked antler, 21cm long, with a set of barbs running along one side. Archeologists identified it as a prehistoric harpoon and dated it to the Mesolithic age, when sea levels around Britain were more than 100 metres lower than they are today, and the island’s sunken rim, at least according to some, was a fertile plain. As long ago as the 10th century, astute observers noted that Britain’s coastlines were fringed with trees, visible only at low tide. Traditionally, the ‘drowned forests’ were regarded as evidence of Noah’s flood — relics of an antediluvian world whose destruction is recorded in the most enduring of all the stories of great floods that sweep the earth and drown its people. At the beginning of the 20th century, another explanation was proposed by the geologist Clement Reid. In his book Submerged Forests (1913), Reid argued that ‘nothing but a change in sea level will account’ for the position of trees stretching from the high water mark ‘to the level of the lowest spring tide’. Observations on the east coast of England led him to conclude that the Thames and Humber estuaries were once ‘flanked by a plain, lying some 40-60 feet below the modern marsh surface’. Turning his attention to a substance known as ‘moorlog’, dredged up from the bed of the North Sea at Dogger Bank, Reid identified nothing less than a time capsule. Moorlog consists of the compacted remains of animal bones, shells, wood, and lumps of peat, and Reid’s sample contained a variety of bones, including bear, wolf, hyena, bison, mammoth, beaver, walrus, elk and deer. He concluded that ‘Noah’s woods’ once stretched far beyond the shore, with Dogger Bank forming the ‘edge of a great alluvial plain, occupying what is now the southern half of the North Sea, and stretching across to Holland and Denmark’. The Colinda antler appeared to confirm Reid’s theory, for it came from a freshwater deposit, meaning that it had not been dropped by a sea voyager, but by someone living in the landscape. According to Vincent Gaffney, Simon Fitch and David Smith — the trio of archaeologists at the University of Birmingham who have made the most sustained attempts to build on Reid’s research — this was ‘the first real evidence that the North Sea had been part of a great plain inhabited by the last hunter-gatherers in Europe’. The area, now called Doggerland, was gradually submerged as the last Ice Age came to an end, and the melting glaciers raised sea levels. Only around 5,500 BCE did Britain finally become an island. The rediscovery of the great plain that formerly connected it to mainland Europe is one of the most remarkable scientific stories of the past decade, yet there is a sense in which it should not come as a surprise at all. Doggerland addresses one of our oldest preoccupations; for we have always told stories about lost civilisations, hidden beneath the waves. Tales of great deluges evolved for obvious reasons: the earliest human civilisations appeared in Mesopotamia (‘the land between the rivers’), and the deltas of the Yellow River, Indus and Nile.
The Egyptian year — the basis of our calendar — divided the seasons by the pattern of rainfall, the season of inundation being when the Nile rose, flooding the surrounding fields. Water was the source of life but also destruction and, as such, it inspired the earliest recorded version of the most famous ‘flood myth’ of all. The story of a Utnapishtim, a just man who is instructed by a god to build an ark so as to survive a flood, appeared in The Epic of Gilgamish. In the Bible Utnapishtim is Noah, and in Greek mythology he is Deucalion, a son of Prometheus who recreates the human race by throwing his mother’s bones over his shoulder. There is a Hindu Noah, an Incan Noah, and a Polynesian Noah. A First Nation version of the legend maintains that mankind’s wickedness so upset the sun-god Nákúset that he wept a global deluge. Some creation myths — Genesis, for example — depict God creating land from a primeval waste of water, but another frequently recurring story reverses the process. In Plato’s dialogue Critias we have the oldest surviving account of a land that sinks beneath the waves: the lost continent of Atlantis. In Plato’s original rendering, Athens is said to have ‘checked a great power that arrogantly advanced from its base in the Atlantic Ocean to attack the cities of Europe and Asia’. That power was Atlantis — an empire ruled by descendants of Poseidon, ‘a powerful and remarkable dynasty of kings’. Athens rose up and subdued Atlantis, but her triumph was overtaken by natural disaster: in ‘a single dreadful day and night, all [Athens’s] fighting men were swallowed up by the earth, and the island of Atlantis was similarly swallowed up by the sea and vanished’. Scholars often insist that we’re not meant to take accounts of Atlantis literally. ‘The idea is that we should use the story to examine our ideas of government and power,’ says the philosopher Julia Annas in Plato: A Very Short Introduction (2003). ‘We have missed the point if instead of thinking about these issues we go off exploring the sea bed.’ The exact location of Atlantis aroused little interest in antiquity, and early modern writers such as Thomas More and Francis Bacon explored Platonic ideals of the good society, in Utopia (1516) and New Atlantis (1624) respectively, without becoming bogged down in questions of geography. Yet recently we have renounced the challenging task of interpreting Plato’s layered inquiry into the nature of the good society in favour of millennial fantasies about drowned worlds. In the children’s novel The Water-Babies (1863) by Charles Kingsley, Atlantis is merged with the legend of Tír na nÓg or ‘St Brandan’s fairy isle’, a ‘great land’ that sank beneath the waves to the west of Ireland. Kingsley’s story suggests that the surviving traces of Atlantis are not the sunken trees of ‘Noah’s woods’ but the ‘strange flowers, which linger still about this land’, such as ‘the Cornish heath, and Cornish moneywort’. He described the flowers as ‘fairy tokens left for wise men and good children’. Other writers have imagined the survival of more substantial remains of the sunken world. In Twenty Thousand Leagues Under the Sea (1870), Jules Verne describes the occupants of the submarine Nautilus climbing through a sunken forest on the slopes of an erupting volcano on the bottom of the Atlantic until they find themselves looking down on a ruined town — ‘its roofs open to the sky, its temples fallen, its arches dislocated, its columns lying on the ground’. 
For the benefit of the submariners, Captain Nemo chalks a single word onto a rock of black basalt: ‘ATLANTIS’. Aronnax, the marine-biologist who narrates the book, is transfixed by the thought that he’s standing ‘on the very spot where the contemporaries of the first man had walked’. Sir Arthur Conan Doyle offered another version of the legend, though his Atlantis was not, like Verne’s, a ‘perfect Pompeii escaped beneath the waters’ but a miraculously preserved and inhabited world. The underwater explorers of his short novel The Maracot Deep (1929) are assailed by a giant crustacean that snips the cable of their ‘bathysphere’, plunging it to the bottom of the sea. They are rescued by the surviving Atlantans and taken — as curiosities — to their submarine city. Conan Doyle threaded his spiritualist beliefs throughout the tale in a way that illustrates how lost worlds might allow us to improve upon the real one, if only in imagination. But the Atlantis myth also allows us to rehearse our fears and fantasies of collapse and decay. In his novel After London (1885), the Victorian naturalist Richard Jefferies portrayed Britain as a kind of Atlantis, its capital reduced to a putrid swamp, its interior covered by an inland sea. Similarly, J G Ballard’s early novel, The Drowned World (1962), pictures futuristic London as a tropical lagoon. Ballard’s characters welcome the re-emergence of a primeval world: we fear the flood, his book implies — and yet embrace it too. In 1954, the American science fiction writer L Sprague de Camp said that the number of novels and stories about lost continents was ‘beyond count’. Yet the treatment of the subject was not confined to books that advertised themselves as fiction. The great champion of Atlantis as ‘veritable history’ was an American politician called Ignatius Donnelly, who served as a congressman and state senator for Minnesota at various times in the late 19th century. Donnelly was also a land speculator, farmer and fantasist, who proposed three equally striking and implausible theories — that Bacon had not only written Shakespeare’s plays but embedded a cipher within them; that great events in the Earth’s history, such as the Ice Age, were brought on by ‘extraterrestrial catastrophism’; and that Atlantis was a fragment of a once vast oceanic continent. In Atlantis: The Antediluvian World (1882), Donnelly identified Plato’s Atlantis as source of all humanity’s lost paradises: ‘the Garden of Eden; the Gardens of the Hesperides; the Elysian Fields’. It represented ‘a universal memory of a great land, where early mankind dwelt for ages in peace and happiness’. Donnelly believed that a few lucky survivors of Atlantis’s drowning had escaped ‘in ships and on rafts’ to populate surrounding continents, citing, as proof, resemblances between European and American plants and animals, culture and language. Sprague de Camp notes that ‘most of Donnelly’s statements of fact… either were wrong when he made them, or have been disproved by subsequent discoveries’. And even if they hadn’t been wrong, he would still have drawn the wrong conclusions from them: the fact that people on both sides of the Atlantic used spears and sails, customarily married and divorced, and believed in ghosts and flood legends ‘proves nothing about sunken continents’. Of course, being wrong is rarely a bar to success, especially in the realms of fantasy and conspiracy.
Donnelly’s book was reprinted 22 times in the eight years after publication, becoming ‘the New Testament of Atlantism’. And Donnelly attracted a curious following among 19th-century occultists such as Helena Petrovna Blavatsky, who founded the Theosophical Society. Blavatsky believed that Atlantis was one of several lost continents on which humanity had evolved. She claimed that her esoteric magnum opus The Secret Doctrine (1888) was originally dictated on Atlantis, and that part of humanity was descended from another ‘root-race’ who lived on the vanished continent of Lemuria. Blavatsky did not invent the name ‘Lemuria’: it was proposed in 1864 by the English zoologist Philip Sclater as a way of explaining the presence of Lemur fossils in Madagascar and India, but not in Africa or the Middle-East. Lost land bridges were often invoked to explain how species had travelled across continents separated by water, and Sclater believed that India and Madagascar had been conjoined on a continent that broke apart into islands or sank beneath the waves. For Blavatsky, Lemuria was ‘the cradle of mankind, of the physical sexual creature who materialised through long aeons out of the ethereal hermaphrodites’. In The Lost Lemuria (1904), her fellow theosophist William Scott-Elliott offered a vivid description of this ‘creature’: His stature was gigantic, somewhere between 12 and 15 feet… His skin was very dark, being of a yellowish brown colour. He had a long lower jaw, a strangely flattened face, eyes small but piercing and set curiously far apart, so that he could see sideways as well as in front, while the eye at the back of the head… enabled him to see in that direction also. The theory of continental drift, accepted in the early 20th century, obviated the need to conjure a Lemuria that explained the spread of fossils, but the idea has continued to fascinate occult writers. The American writer Frederick S Oliver pulled Lemuria into an orbit of spiritualist ideas, including spirit guides, astral travel and channelling. His 1905 novel A Dweller on Two Planets (1905), supposedly written under ‘spirit guidance’, tells the story of a community of sages who escaped from Lemuria and settled on Mount Shasta, near Oliver’s home in northern California. Oliver’s book has been surprisingly influential. The idea of a Lemurian community living on Mount Shasta was frequently reported during the 1920s and ‘30s, and acquired a surprising degree of detail. There were said to be 1,000 magi, living in a ‘mystic village’ built around a Mayan-style temple. Occasionally, they appeared in neighbouring towns, clad in long white robes, ‘polite but taciturn’. They paid for supplies with gold nuggets and, every midnight, they rejoiced in their escape from Lemuria in ceremonies that bathed the mountains in red and green light. Various Californian luminaries, such as the actress Shirley MacLaine — who recalled a past life as an androgynous Lemurian in her memoir I’m Over All That (2012) — and Guy Warren Ballard, founder of the flamboyant religious movement I AM, were both inspired by Oliver’s novel. It still informs the philosophy of a New Age organisation called the Lemurian Fellowship, which maintains that ‘the continents of Atlantis and Mu [often synonymous with Lemuria] did exist, and still do, sunken beneath the waters of the Atlantic and Pacific Oceans’. The British occult writer James Churchward claimed to have learnt about Mu from ancient tablets discovered in temples in India. 
His book The Lost Continent of Mu: Motherland of Man (1926) was one of the last great manifestations of our preoccupation with lost worlds. It was published just five years before Pilgrim E Lockwood, skipper of the Colina, hauled up his handworked antler. The wild imaginings of the 19th-century cult of Atlantis were starting to be displaced by a picture of a lost world that was scarcely less astonishing, and which had the advantage of being true. As early as 1936, only a handful of archaeologists went against the grain of thinking that Dogger Bank was a mere ‘land bridge’ to speculate that its ancient plains would have been ‘especially favourable for settlement’. Over the last decade, however, their ideas have gained credence. In 2001, researchers at the University of Birmingham decided to investigate further. Using seismic data collected by Norwegian oil companies, the archaeologists Vincent Gaffney, Simon Fitch and David Smith constructed a 3D rendering of ‘shallow deposits’ lying only metres beneath the seabed. They revealed the course of a river, as large as the Rhine, that once ran across Dogger Bank. The scale of the lost territory has astonished the three researchers. They do not exaggerate when they write in Europe’s Lost World: The Rediscovery of Doggerland (2009) that they’re exploring ‘an entire, preserved, European hunter-gatherer country’ — a lost land that, at its most extensive, was as large as the UK. The team has now mapped the contours of Doggerland and identified 1,600 km of river channels and 24 lakes or marshes. The emerging picture is that of ‘a massive plain dominated by water’. To the modern eye, this environment might appear featureless, even unattractive, yet to Mesolithic communities, it offered rich living. Gaffney, Fitch and Smith understand that, with or without the science, Doggerland remains one of the most ‘enigmatic’ archaeological landscapes in the world, and that our lack of historic knowledge about the people who inhabited it will prove attractive to fantasists and conspiracy theorists alike. Besides, Doggerland is not the only lost continent to be rediscovered. There’s Sundaland, the coastal shelf in the South China Sea, and Beringia, the land bridge that joined Asia to Alaska. Although yet to be explored in any detail, their existence confirms that the occultists were right in one respect, at least: we live on the edge of drowned worlds and are descended from their inhabitants. Yet one thing has changed: archaeologists now believe that Doggerland’s hunter-gatherers didn’t settle the British Isles until they were forced to abandon their traditional homes on the low-lying plain. Humanity, of course, has always lived with the threat of great floods; but as water levels begin rising once again, we should remind ourselves that our densely inhabited world contains no comparable havens to which we might retreat. | Edward Platt | https://aeon.co//essays/why-lost-civilisations-under-the-waves-still-fascinate-us | |
Love and friendship | Felines walk the line between familiar and strange. We stroke them and they purr, then in a trice they pounce | Saturday was a small snake. Each morning for six days, Berzerker — half-Siamese, half-streetcat, with charcoal fur and a pure white undercoat — had deposited a new creature on the doormat. On this last day, the snake was as stiff as a twig; rigor mortis had already set in. I wondered if there was a mortuary under the porch, a cold slab on which the week’s offerings had been laid out. What were these ritualistic offerings all about? Gift, placation, or proof of lethal skill? Who knows. On the seventh day he rested. When I look at any one of my three cats — when I stroke him, or talk to him, or push him off my yellow pad so I can write — I am dealing with a distinct individual: either Steely Dan Thoreau, or (Kat) Mandu, or Kali. Each cat is unique. All are ‘boys’, as it happens. All rescued from the streets, neutered and advertised as mousers, barn cats: ‘They will never let you touch them,’ I was told. Each cat is a singular being — a pulsing centre of the universe — with this colour eyes, this length and density of fur, this palate of preferences, habits and dispositions. Each with his own idiosyncrasies. At first, they were truly untouchable, hissing and spitting. A few weeks later, after mutual outreaching, they were coiling around my neck, with heavy purring and nuzzling. They do indeed hang out in my barn — I live on a farm — and are always pleased to see me at their daily feed. Steely Dan, unlike the other two, will walk with me for miles. Just for the company, I suspect. Occasionally he will turn up at the house and demand to be let in. He is a favourite among my friends for his free dispensing of affection. But the rift between our worlds opens wide again when he shreds the faux leather sofa with his claws. When scolded, he is insouciant. ‘When I play with my cat,’ Montaigne mused, ‘how do I know that she is not playing with me rather than I with her?’ Since the Egyptians first let the wild Mau into their homes, cats and humans have co-evolved. We have, without doubt, been brutal — eliminating kittens of the wrong stripe, as well as couch-potato cats that gave the rats a pass, cats that could not be trained, and cats that refused our advances. My Steely Dan, steely eyed professional killer of birds and mice (and snakes, lizards, young rabbits, voles, and chipmunks), lap-lover, walking companion extraordinaire, is the product of trial by compatibility. This sounds like a recipe for compliance: domestication should have rooted out the otherness of the feline. But it did not. The Egyptians domesticated Felis silvestris catus 10,000 years ago and valued its services in patrolling houses against snakes and rodents. But later they deified it, even mummifying cats for the journey into the afterlife. These days we don’t typically go that far — though cats and cat shelters are frequently the subjects of bequests. We remain fascinated both by our individual cats and cats as a species. They are a beloved topic for publishers, calendars and cartoons. Cats populate the internet: there are said to be 110,000 cat videos on YouTube. Lolcats tickle us at every turn. But isn’t there something profoundly unsettling about the whiskered cat lying on a laptop (or somesuch), speaking its bad English? Lolcats make us laugh, but the need to laugh intimates disquiet somewhere. 
Perhaps because we selected cats for their internal contradictions — friendly to us, deadly to the snakes and rodents that threatened our homes — we shaped a creature that escapes our gaze, that doesn’t merely reflect some simple design goal. One way or another, we have licensed a being that displays its ‘otherness’ and flaunts its resistance to human interests. This is part of the common view of cats: we value their independence. From time to time they might want us, but they don’t need us. Dogs, by contrast, are said to be fawning and needy, always eager to please. Dogs confirm us; cats confound us. And in ways that delight us. In welcoming one animal to police our domestic borders against other creatures that threatened our food or health, did we violate some boundary in our thinking? Such categories are ones we make and maintain without thinking about them as such. Even at this practical level, cats occupy a liminal space: we live with ‘pets’ that are really half-tamed predators. It is something of an accident that a cat’s lethal instincts align with our interests. From the human perspective, cats might literally patrol the home, but more profoundly they walk the line between the familiar and the strange. When we look at a cat, in some sense we do not know what we are looking at. The same can be said of many non-human creatures, but cats are exemplary. Unlike insects, fish, reptiles and birds, cats both keep their distance and actively engage with us. Books tell us that we domesticated the cat. But who is to say that cats did not colonise our rodent-infested dwellings on their own terms? One thinks of Rudyard Kipling’s story ‘The Cat That Walked by Himself’ (1902), which explains how Man domesticated all the wild animals except for one: ‘the wildest of all the wild animals was the Cat. He walked by himself, and all places were alike to him.’ Michel de Montaigne, in An Apology for Raymond Sebond (1580), captured this uncertainty eloquently. ‘When I play with my cat,’ he mused, ‘how do I know that she is not playing with me rather than I with her?’ So often cats disturb us even as they enchant us. We stroke them, and they purr. We feel intimately connected to these creatures that seem to have abandoned themselves totally to the pleasures of the moment. Cats seem to have learnt enough of our ways to blend in. And yet, they never assimilate entirely. In a trice, in response to some invisible (to the human mind, at least) cue, they will leap off our lap and re-enter their own space, chasing a shadow. Lewis Carroll’s image of the smile on the face of the Cheshire cat, which remains even after the cat has vanished, nicely evokes such floating strangeness. Cats are beacons of the uncanny, shadows of something ‘other’ on the domestic scene. Our relationship with cats is an eruption of the wild into the domestic: a reminder of the ‘far side’, by whose exclusion we define our own humanity. This is how Michel Foucault understood the construction of ‘madness’ in society — it’s no surprise then that he named his own cat Insanity. Cats, in this sense, are vehicles for our projections, misrecognition, and primitive recollection. They have always been the objects of superstition: through their associations with magic and witchcraft, feline encounters have been thought to forecast the future, including death. But cats are also talismans. They have been recognised as astral travellers, messengers from the gods. In Egypt, Burma and Thailand they have been worshipped.
Druids have held some cats to be humans in a second life. They are trickster figures, like the fox, coyote and raven. The common meanings and associations that they carry in our culture permeate, albeit unconsciously, our everyday experience of them. But if the glimpse of a cat can portend the uncanny, what should we make of the cat’s own glance at us? As Jacques Derrida wondered: ‘Say the animal responded?’ If his cat found him naked in the bathroom, staring at his private parts — as discussed in Derrida’s 1997 lecture The Animal That Therefore I Am — who would be more naked: the unclothed human or the never clothed animal? To experience the animal looking back at us challenges the confidence of our own gaze — we lose our unquestioned privilege in the universe. Whatever we might think of our ability to subordinate the animal to our categories, all bets are off when we try to include the animal’s own perspective. That is not just another item to be included in our own world view. It is a distinctive point of view — a way of seeing that we have no reason to suppose we can seamlessly incorporate by some imaginative extension of our own perspective. Jacques Derrida and his cat, Logos. Photo by Sophie Bassouls/Sygma/Corbis. This goes further than Montaigne’s musings on who is playing with whom. Imaginative reversal — that is, if the cat is playing with us — would be an exercise in humility. But the dispossession of a cat ‘looking back’ is more disconcerting. It verges on the unthinkable. Perhaps when Ludwig Wittgenstein wrote (of a larger cat) in Philosophical Investigations (1953) that ‘If a lion could talk we would not understand him,’ he meant something similar. If a lion really could possess language, he or she would have a relation to the world that would challenge our own, without there being any guarantee of translatability. Or if, as T S Eliot suggested in Old Possum’s Book of Practical Cats (1939), cats named themselves as well as being given names by their owners (gazed on by words, if you like), then the order of things — the human order — would be truly shaken. Yet the existence of the domestic cat rests on our trust in them to eliminate other creatures who threaten our food and safety. We have a great deal invested in them, if now only symbolically. Snakebites can kill, rats can carry plague: the threat of either brings terror. Cats were bred to be security guards, even as their larger cousins still set their eyes on us and salivate. We like to think we can trust cats. But if we scrutinise their behaviour, our grounds for doing so evaporate. Look into the eyes of a cat for a moment. Your gaze will flicker between recognising another being, and staring into a void. It is something of an accident that a cat’s lethal instincts align with our interests. They seem recklessly unwilling to manage their own boundaries. Driven as they are by an unbridled spirit of adventure (and killing), they do not themselves seem to have much appreciation of danger. Even if fortune smiles upon them — they are said to have nine lives, after all — in the end, ‘curiosity kills the cat’. Such protection as cats give us seems to be a precarious arrangement. No story of a cat’s strangeness would be complete without touching on the tactile dimension. We stroke cats, and they lick us, coil around our legs, nuzzle up to us and pump our flesh. When aroused, they bite and plunge their claws innocently and ecstatically through our clothes into our skin.
Charles Baudelaire expresses this contradictory impulse, somewhere between desire and fear, in his poem ‘Le Chat’ (1857): ‘Hold back the talons of your paws/Let me gaze into your beautiful eyes.’ A human lover would be hard put to improve on a normal cat’s response to being stroked. Unselfconscious self-abandonment, unmistakable sounds of appreciation, eyes closing in rapture, exposure of soft underbelly. Did the human hand ever find a higher calling? Baudelaire continues: ‘My hand tingles with the pleasure/Of feeling your electric body’. It feels like communion, a meeting of minds (or bodies), the ultimate in togetherness, perhaps on a par with human conjugal bliss (and simpler). But the claws through the jeans give the game away. The cat is not exploring the limits of intimacy with a dash of pain, a touch of S&M. He is involuntarily extending his claws into my skin. This is not about ‘us’, it’s about him, and perhaps it always was — the purring, the licking, the pumping. Cats undermine any dream of perfect togetherness. Look into the eyes of a cat for a moment. Your gaze will flicker between recognising another being (without quite being able to situate it), and staring into a void. At this point, we would like to think — well, that’s because she or he is a cat. But cannot the same thing happen with our friend, or child, or lover? When we look in the mirror, are we sure we know who we are? Witch’s cats were called familiars, an oddly suitable term for cats more generally — the strange at the heart of the familiar, disturbing our security even as they police it and bring us joy. They are part of our symbolic universe as well as being real physical creatures. And these aspects overlap. Most cats are unmistakably cut from the same cloth. But this only raises more intensely the question of this cat, its singular irreplaceability. I might well be able to replace Steely as a mouser, to find another sharp set of teeth. Steely II might equally like his tummy rubbed and press his claws into my flesh. And to my chagrin, Steely I and Steely II could each offer themselves in this way to my friends, as if I were replaceable. I was once offered a replacement kitten shortly after my ginger cat Tigger died. I was so sad that I toyed with the idea of giving the kitten the same name, and pretending that Tigger had simply been renewed. In the end, I could not. But the temptation was real. To quote Eliot again: ‘You may think at first I’m as mad as a hatter / When I tell you a cat must have THREE DIFFERENT NAMES. / First of all, there’s the name that the family use daily / But I tell you, a cat needs a name that’s particular, / A name that’s peculiar, and more dignified, / But above and beyond there’s still one name left over, / And that is the name that you never will guess; / The name that no human research can discover — / But THE CAT HIMSELF KNOWS, and will never confess.’ Cats, one at a time, as our intimates, our familiars, as strangers in our midst, as mirrors of our co-evolution, as objects of exemplary fascination, pose for us the question: what is it to be a cat? And what is it to be this cat? These questions are contagious. As I stroke Steely Dan, he purrs at my touch. And I begin to ask myself more questions: to whom does this appendage I call my hand belong? What is it to be human? And who, dear feline, do you think I am? | David Wood | https://aeon.co//essays/the-uncanny-familiar-can-we-ever-really-know-a-cat | |
Biology | Ancient yet playful, endangered but resurgent, the North Atlantic right whale is a living reminder of how little we know | Late spring, Cape Cod Bay; dark blue water, pale blue sky. Skipper the ship’s dog barked loudly over the side of the research vessel, Shearwater. ‘That’s a big rabbit, Skipper!’ joked Brigid McKenna, one of the two young women on the top deck of the small boat where we had spent the past seven hours staring out to sea. Finally, she and Skipper and I saw what we had traversed the entire bay to find: the distinctive, V-shaped blow of Eubalaena glacialis, the North Atlantic right whale — one of the world’s most endangered animals. It’s a bit like seeing a dinosaur. The right whale’s story is emblematic of all great whales. Hunted to near-extinction in the 17th, 18th and 19th centuries by a sequence of nations from the Basques to the British and the Dutch, it was so-called because it was the right whale to catch: its blubbery carcass did not sink once killed but remained floating, conveniently, at the surface. That rightness proved fatal. By the early 1900s, its numbers were so reduced that it became the first whale to receive protection, under the aegis of the League of Nations, in 1935. By the late 20th century, it seemed doomed by a dwindling gene pool and the ever-increasing depredations of human activity, as well as by its own predilection for feeding in urban seas, close to one of the world’s busiest shipping lanes. Ship-strike, entanglement in fishing gear, and the more insidious effects of chemical pollution, anthropogenic noise and warmer, more acidic seas all combine to make this perhaps the most hapless of cetacean victims of human progress. And yet, slowly, surprisingly, against all the odds, the right whale is making a comeback. Scientists decline to trumpet this advance. Conservationists remain wary, for fear it will undermine the still fragile nature of the recovery and allow a complacent public to assume the battle has been won. But the numbers are clear, and they show a definite increase — from just 350 individuals, when Dr Charles ‘Stormy’ Mayo first started his studies here in Cape Cod in the 1970s, to 500 animals now. Well, 516, to be exact — an average increase of 2.5 per cent per annum, a statistic that comes courtesy of aerial surveys and photo-identification. Mayo is director of right whale research at the prestigious (but financially threatened) Provincetown Centre for Coastal Studies and, ever the cautious scientist, his optimism is tempered. ‘Five hundred whales make you want to sing,’ he told me, ‘but you have to hold your breath when you do.’ The Pilgrim Fathers recorded that the whales were so numerous that they could virtually stride across the bay on their broad backs The metaphor is apt, given the great maw of this magnificent animal. It is remarkable that something so huge should depend on minute zooplankton, the copepods and krill that constitute its diet. A single right whale, which can weigh up to 90 tons and measure 60 feet in length, needs to eat a ton of these grain-sized crustaceans every day to sustain its vast bulk. It does this by straining its prey through a vast arched mouth, big enough to garage a small car. Yawning wide, it moves slowly through fields of zooplankton like a living lawnmower, its pliable plates of baleen (the whalebone that once cinched in fashionable Victorian waists) straining its food like rice in a sieve. 
Herman Melville celebrated this spectacle in his novel Moby-Dick (1851), in a chapter titled ‘Brit’ — surely the only passage in a work of classic fiction dedicated to zooplankton. Melville describes the whales ‘making a strange, grassy, cutting sound’, like cows — for all that ‘their vast black forms looked more like lifeless masses of rock’. It is here off Cape Cod that right whales find their food on their return in early spring. Many of them spend the winter off Florida and Georgia, where the females calve. But the erratic weather of 2013 has seen a new shift in their foraging. ‘I’ve never seen a year like it,’ Dr ‘Stormy’ Mayo remarked. He has been studying these animals since 1976. Coming from a long line of New Englanders — Mayo’s family has been living on the Cape since 1650 — he knows the wild fluctuations of the whales’ behaviour better than most. Hence today’s cruise, in high seas, across the bay. The route was remarkably similar to that taken by the Pilgrim Fathers who arrived here on the Mayflower four centuries earlier, and who recorded that the whales were so numerous that they could virtually stride across the bay on their broad backs. Up close, right whales look prehistoric, with an indefinable series of lumps crowned with a crusted ‘bonnet’ — callosities that grow in unique patterns on a whale’s head, approximately where hair sprouts on a human head. These are the tools of Mayo’s trade: the patterns are distinctive enough to let researchers identify individual animals. As Shearwater transected the bay, the researchers Christy Hudak and Beth Larson wielded a plankton net, trawling for the telltale count of Calanus finmarchicus. The little plastic sample jars swirled with pink clouds of the zooplankton, like soup. In an attempt to empathise with our subjects, I fished out a fingerful of copepods and tasted them. A faint sea-salt oiliness lingered on the tongue. Not exactly a bouillabaisse, but it sustains leviathans. On deck, Lauren Bamford and Brigid recorded the blows that were erupting all around us. I clambered up to join the two women. Every few minutes, a new whale popped up, its crusty head and sea-slick body glinting in the sun. ‘They’re acting cryptically,’ said Brigid. ‘Sub-surface feeding.’ For some reason, their prey had sunk to three or five metres below the surface. Why? That was up to Stormy and his team to discover. Bowheads hunted by the Inuit in Alaska were found to have ancient harpoons embedded in their blubber of a type that hadn’t been used for more than a century Though there were many whales out there — and that afternoon we cruised through a herd of up to 80 animals — they made for frustrating viewing. You had to put them together in your imagination, like a whalish jigsaw puzzle, creating a composite creature: a pair of sleek flukes here, an arched head there. Then, suddenly, in the distance, an animal would launch itself entirely out of the water — breaching improbably into the sun, as if in defiant celebration of its survival. Survival is the word. These whales might not be the smartest of all great whales — the sperm whale, with the world’s biggest brain, can probably lay claim to that. They lack the dimension-busting length of the blue whale. Yet it is becoming increasingly clear that the rights could outlive all other whales. That they might, in fact, be one of the oldest living animals on Earth. There was extraordinary excitement last year when the centre’s observers discovered a strange new interloper in the midst of the right whale herd. 
This individual, spotted by the aerial surveyors, lacked the callosities of the other whales. Photographs revealed that the anomalous visitor was in fact a bowhead whale — a habitué of the Arctic, thousands of miles to the north, and a close cousin to the right whale. What was it doing here? No one really knows. But its unusual presence was recently underlined by equally sensational news from the other end of the Atlantic: the appearance, off Walvis Bay in Namibia, of a Pacific grey whale — extinct in this ocean for three centuries and never before recorded in the southern hemisphere. This lone, and presumably lonely, whale accomplished an astonishing 5,000-mile journey to reach the coast of southern Africa. All these cetaceans turning up where they shouldn’t be: are they recovering old migratory routes, abandoned during the whaling years? Or are they suffering new anthropogenic threats, to add to the chemical and aural pollution we pump into their environment? As ever in science, potential answers propose only more questions. But an even more intriguing link between right whales and bowheads is beginning to emerge. The animals I watched in Cape Cod Bay — only tens of miles from the historic city of Boston — could be older than the American republic itself. In the 1990s, bowheads hunted by the Inuit in Alaska were found to have ancient harpoons embedded in their blubber of a type that hadn’t been used for more than a century. Their presence indicated that the native peoples’ ancestors had attempted to hunt the same individual animals hundreds of years ago. Other evidence took the story even further back. Scientific tests on the harpoons proved them to be 235 years old, while other analyses, using amino acids found in whales’ eyes, confirmed these extraordinary tallies. Reasoning that it is unlikely that these specimens were the most ancient of the species, scientists hypothesise, with reasonable certainty, that bowheads can live up to 300 years, perhaps even longer. And since the bowhead is so closely related to the right whale, which it resembles in all but its lack of callosities and slightly larger size, there is a newly emerging view among scientists such as Mayo and Dr John Wise, of the University of Southern Maine, that the right whale might enjoy similar longevity. The animals I watched in Cape Cod Bay, only tens of miles from the historic city of Boston, could predate the American republic itself. The discoveries do not end there. Earlier this year, the bowhead was found to have a 12-foot-long organ in its lower jaw, composed of much the same spongelike tissue as a penis. The appendage swells with blood to regulate the whale’s temperature (oddly enough, blubber is such an effective insulator that whales have trouble keeping cool even in polar seas). This organ, catchily named Corpus Cavernosum Maxillaris, also boasts highly sensitive nerve endings. The scientist who discovered it, Alexander Werth, professor of biology at Hampden-Sydney College in Virginia, suggests that these might be used to sense how much food there is in the water and how much the animal is swallowing — a kind of built-in calorie-counter, albeit in the form of a 12-foot penis. It is thought that the right whale possesses a modified Corpus, too — just more proof that, although apparently cumbersome and adipose, these animals are highly sensitive. One morning, grounded by inclement weather out at sea, I cycled to the shore on the outermost end of the cape.
I had to wade through sand to get there, breaking the salty crust that winter had laid over summer dunes. I passed the lighthouse that I’d seen from far away, and for the last few hundred yards I had to abandon my bike and trudge between stumps of dormant marram grass that snatched at my heels. I thought of Guglielmo Marconi testing his sound waves on this coastline in 1903, and his belief that the same ether on which he broadcast his messages across the ocean might also carry the drowning cries of long-dead men. As I crested the last dune, the view fell away before me and the sea unrolled like a painted strip. I might have been the first person to discover this beach — just as the Pilgrims thought they had discovered it four centuries ago. The tide was on its way out, leaving a wide stretch of damp sand on which a huge flock of gulls was feeding. As I descended into this natural arena, the sea birds took to the air, a thousand white wings rising like a curtain. And then I saw, just a few yards out into the water, an impossible arrangement of black triangles and humps. Moving restlessly through the water, this way and that, was a ‘SAG’ — a surface active group of right whales. Here was a sight beyond all spectacle or expectation — and all the more powerful for being experienced alone, and for the accumulated history that these animals bore on their broad backs. The whales wove through one another’s paths, sometimes diving down, moving, as we humans seldom do, in three dimensions, fully occupying their world. They took no notice of the watcher on the sand as they curled around each other in what amounted to an extended act of communal foreplay. (Right whale males possess the largest testes of any animal, and the female can even admit more than one right whale penis at the same time.) As I watched, they seemed to stroke each other with their flukes and fins, as if in reassurance or celebration of their physical survival. Fecund yet endangered, immensely old yet frolicsome, the right whale slyly withholds its secrets, only gradually revealing its remarkable qualities to us. It is a living reminder of how little we know, and of how much we have already lost. To hear Benedict Cumberbatch reading ‘Brit’ from Moby-Dick, go to mobydickbigread.com/chapter-58-brit/ | Philip Hoare | https://aeon.co//essays/why-is-the-right-whale-returning-to-the-waters-off-cape-cod | |
Cosmology | Ditching the leap second would mean decoupling clock time from the Earth’s rotation – from day and night itself | Every now and then — once in two years, say — an extra second of time is legislated into the end of a day. This so-called leap second leads time on a tiny detour, from 23:59:59 to 24:00:00, via 23:59:60. It’s just a second. Most people take no notice. But everyone is affected, because we all share the same civil time, modulated by constellations of orbiting satellites, diplomatic protocols and computer networks. The hour gets doled out to us much the way water comes from the tap — processed and sanctioned, sometimes with an additive or two. Since 1972, when the leap second was inaugurated, 25 extra seconds have been parcelled out to the public. Each one helped bridge the difference between the way we mark time’s march from day to day, and the way we divide the day. Despite its small size, occasional occurrence and quirky name, the leap second constitutes a cornerstone of our current civil timekeeping system. In 2015, leap seconds will come up for international review. Four decades of use have allowed time for certain factions to find fault with the leap second, and to question whether periodically resetting all the clocks in the world is actually worth the bother. I can certainly sympathise with that sentiment. The semi-annual switch between standard time and summer time all but undoes me. Only one of my clocks picks up the radio signal telling its hands when to ‘spring forward’ or ‘fall behind’, and I resent having to fiddle with all the others. Then, too, I find the single hour’s time difference more disturbing to my equilibrium than a multi-time-zone dose of travel-induced jet lag, probably because the seasonal change is imposed upon me. Nothing would please me more than to see my government abolish daylight-saving time, which is already widely ignored by other countries. It is ignored with impunity even by individual states and counties within the continental United States. Ending it at a stroke would not cause the slightest wrinkle in time. Embedded in a wealth of time-stamped data sets, their presence will be felt indefinitely, like 25 peas under the mattress of history The leap second, on the other hand, though only one-60th of one-60th of one hour, is not so easily dispatched. Neither the US Congress nor the British Parliament wields the clout to topple it. A leap second doesn’t just reset the clock; it changes the length of a minute. Leap seconds are persistent, too. Eliminating future ones will not expunge the 25 already in existence. Embedded in a wealth of time-stamped data sets, their presence will be felt indefinitely, like 25 peas under the mattress of history. Specialised panels and working groups are busy now, studying the leap second in all its ramifications in preparation for 2015, when a global consortium of nation states will vote on whether to continue adding leap seconds in future. The decision to let them go would be historic — severing the age-old connection between time-reckoning and the heavens. I wonder, do you still wear a wristwatch? Or do you tell time by your mobile phone? The latter has the advantage of automatically resetting itself to local time wherever you go, thanks to an alphabet soup of entities including UTC, TAI, GPS, BIPM, IERS, and ITU-R — about which more later. Another plus of mobile-phone time is that your phone tells the same time as anyone else’s. 
You need not synchronise mobiles with the friends you’re planning to meet, nor fear that the device might run fast or slow. It remains reliably on the receiving end of disseminated civil time. The original wide-area time service available for human use is referenced in the Bible: on the fourth ‘day’ of Genesis, God put ‘lights’ in the firmament to serve as timekeepers to nail down the otherwise fuzzy concepts of ‘evening’, ‘morning’, and ‘day’. Duly charged ‘to divide the day from the night’, the heavenly lights were meant to provide also ‘for seasons, and for days and years’. Thus the rising of the Sun (that is, the turning of the Earth on its axis) gives us each day. The movement of the Sun through the zodiac (as seen from an Earth in solar orbit) defines the year. The Moon, meanwhile, gives us our months, in its cycle from waxing to waning. In contrast, the finer slices of time — hours, minutes, seconds — are human inventions: time-management inventions, with no counterpart in nature. A second of time has its roots in the sexagesimal numerology of the ancient Sumerians, but it was too fleeting an interval to be registered on a mechanical clock until the 16th century. By 1960, a single second could be subdivided into slivereens by an atomic clock. The regularity of atomic-frequency timepieces promised a far more reliable standard than the spinning or revolving of the wobbly old Earth, and forced authorities to rethink their concept of a second’s duration. In 1967, the official definition of a second changed accordingly. What had once been a tiny fraction (1/86,400) of a 24-hour day became the time taken by an atom of caesium-133 to flip through certain characteristic quantum-mechanical changes 9,192,631,770 times. Given that all atoms of caesium-133 behave the same way, it’s tempting to regard the unwieldy number 9,192,631,770 as some sort of fundamental rhythm — a microcosmic time cue, or zeitgeber. But in fact, the International Bureau of Weights and Measures in Paris (abbreviated as BIPM for Bureau International des Poids et Mesures) chose that atomic frequency as the new standard unit because it matched the old standard. In other words, 9,192,631,770 caesium fluctuations almost exactly equal an 86,400th of a mean solar day. Nevertheless, the atom trumps the planet because its rate remains constant. Its pulse is not expected to vary by so much as a nanosecond over the course of a millennium. Meanwhile, the Earth hiccups and falters. Its rotation, as monitored by the International Earth Rotation and Reference Systems Service (IERS), speeds up and slows down in response to the motions of the tectonic plates that pave its surface, the flow of molten iron in its core, the recession of glaciers, the circulation of the atmosphere, and, most important, the pull of the Moon. Ocean tides raised by the Moon put brakes on the Earth’s rotation, slowing the spin at a gradual but unpredictable rate. The deceleration has been ongoing for aeons. At the present pace, each day runs some 2.5 milliseconds longer than a day of atomic time, and tomorrow will carry much the same excess as today. These small daily increments add up to almost one second over the course of a year, but the precise tally rises or falls at the whim of other factors influencing the Earth’s rotation. The more the Earth slows down in future, the more frequently leap seconds will intervene in our affairs. They might come as often as quarterly by the year 2250, and then monthly by 2600.
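The arithmetic behind those projections is simple enough to sketch. The snippet below is illustrative only: the 2.5-millisecond daily excess and the nine-tenths-of-a-second tolerance mentioned later in this essay come from the text, while the larger excess values are hypothetical, chosen to show what quarterly or monthly insertions would imply.

```python
# Illustrative arithmetic only: how a small daily excess in the length of the
# day piles up into whole leap seconds. The 2.5 ms/day figure and the
# nine-tenths-of-a-second tolerance are taken from the essay; 10 ms and 30 ms
# are hypothetical values corresponding to quarterly and monthly insertions.

THRESHOLD_SECONDS = 0.9  # roughly how far UT1 and UTC are allowed to drift apart

def days_until_leap_second(excess_ms_per_day: float) -> float:
    """Days for the accumulated excess to reach the insertion threshold."""
    return THRESHOLD_SECONDS / (excess_ms_per_day / 1000.0)

for excess in (2.5, 10.0, 30.0):
    print(f"{excess:4.1f} ms/day -> one leap second roughly every "
          f"{days_until_leap_second(excess):.0f} days")

# 2.5 ms/day reaches the threshold in about a year, which is why insertions
# have so far come at most annually; 10 ms/day would mean roughly quarterly
# insertions, and 30 ms/day roughly monthly ones.
```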
I wouldn’t mind additional leap second insertions. But then, I don’t programme computers, or control air traffic, or perform any of the myriad time-sensitive activities that would make me a stakeholder in the leap-second debate. I am merely a person who still wears a wristwatch, owns a sundial, and takes an abiding interest in all aspects of finding, keeping, and telling time. Which is how, in May this year, I wound up at a conference in Charlottesville, Virginia, surrounded by individuals with passionate feelings and opinions about leap seconds. The attendees — 19 in person, plus one by Skype — came from the US, the UK, Canada, France, Germany, Japan, and the Vatican Observatory, representing fields ranging from software and aerospace engineering to orbital mechanics, geodesy, and telecommunications, not to mention precision timekeeping. By chance, I happened to sit next to Daniel Gambis of the Observatoire de Paris. If any individual could be said to carry the weight of the leap second on his shoulders, that person is Doctor Gambis. As director of the Earth Orientation Centre of the IERS, he issues the semi-annual bulletin that tells the world whether and when to insert — or, if necessary, remove — a leap second. The six-month advance warning allows ample time to prepare. Leap-second insertions are scheduled on one of two calendar dates, 30 June or 31 December — or both, as happened in 1972, when there was more catching up to do. No leap seconds at all accrued, however, in the seven-year stretch between 1998 and 2005, demonstrating the erratic nature of the Earth’s ever-changing situation. The 25 leap seconds accumulated to date have extended 15 New Year’s Eves and 10 short summer nights. The most recent leap second occurred last year, on 30 June 2012. I had envisioned Gambis as a modern Atlas, upholding the globe, but found him a trim, pleasant, soft-spoken scientist with an expertise in Earth-rotation parameters and no apparent delusions of grandeur. Before boarding his flight to the US, he had already announced that there would be no leap second in 2013. The 11 definitions my Random House unabridged dictionary gives for the word ‘leap’ have mostly to do with saltation — as when Superman leaps tall buildings in a single bound. Only definition number 10, ‘an abrupt transition’, comes close to the time sense of the word, and even this fails to limn leap’s contributions to calendars and clocks. ‘Leap’ allows us to pretend, for example, that a year contains 365 days, when in fact the total is closer to 365.25. For the convenience of whole numbers, we accept the anomaly of a 366-day year every four years (more or less). Year in, year out, ‘leap’ makes peace between the pace of the Earth’s rotation and the rate of its revolution. On the split-second level, ‘leap’ mediates between the precision of atomic time and the position of our Sun in the sky. It is worth noting that while a leap year is a year with an extra day (Leap Day — February 29, when turnabout is fair play), a leap second lasts no longer than any other second. Applied to a minute, a positive leap second creates a 61-second interval that is not called a leap minute. (Nor would the 59-second outcome of a negative leap, should one ever be required, be called a leap minute.) A leap minute, rather, is a hypothetical way of putting off till tomorrow what leap seconds do today. 
If instituted, it would allow the powers responsible for time measurement and distribution to defer insertion till the leap-second debt reached 60, and trust some future authority to intercalate them all at once. But a leap minute would likely add up to a much bigger headache than the sum of its 60 leap seconds. A future Doctor Gambis would need to set the date for a leap-minute intercalation years in advance. And how could he do that, when the vagaries of Earth’s rotation defy long-term prediction? It is no exaggeration to say that the leap second defines our current reckoning of civil time, known as Coordinated Universal Time, the world’s legal timescale. The slightly scrambled acronym for Coordinated Universal Time — UTC instead of CUT — effects a compromise between the English phrase and its French equivalent, Temps universel coordonné. Several kinds of co-ordination are implicit in the term, beginning with the co‑ordination of time services the world over, to be sure they all broadcast the same time. Then there is the co-ordination of the approximately 200 atomic clocks at more than 50 national laboratories (including the US Naval Observatory in Washington and the UK’s National Physical Laboratory in Middlesex) that the BIPM in Paris relies upon to establish its time frequency standard. Last but not least, there is the co-ordination of atoms with astronomy. In the reckoning of purely atomic time — International Atomic Time, aka TAI (for Temps atomique international) — one atomic second follows another, no matter what happens to the Earth in space. UTC, too, ticks off atomic seconds, but does so in deference to the heavens. The inclusion of occasional leap seconds ensures that noon UTC falls at, or very close to, the moment the Sun crosses the Greenwich meridian. If the stroke of noon UTC and the Sun’s noonday rendezvous with the Greenwich meridian drift apart by more than a fraction (nine-10ths) of a second (averaged over the course of a year), a leap second comes to the rescue. The demise of UTC as we know it would alter the meaning of ‘day’ that underpins legal, cultural and religious practices the world over The Greenwich meridian embodies such historic importance that, if it passed through the US, a theme park would doubtless have sprouted around it. Even in London, nestled among the 17th-century buildings of the Royal Observatory in Greenwich, it excites a carnival atmosphere, and few visitors leave the site without Tweeting a picture of themselves straddling the meridian, one foot in either hemisphere. The meridian line on the ground, emblazoned by stainless steel borders, corresponds to the intangible celestial meridian overhead, which is illuminated at night by a green laser projecting nearly 40 miles northward across the London sky. Greenwich astronomers of old used to train telescopes on that north-south line in the heavens to observe and record the times of the nightly transits, or passages across the meridian, of the stars. The Sun’s transit of the line is what divides the day into its am (antemeridian) and pm (postmeridian) halves. Observations of the Moon and stars taken at the Royal Observatory and published in its Nautical Almanac held thousands of ships on their proper courses from the mid-18th century onward. They all calculated their position with regard to the Greenwich meridian. Later, the zero-degree meridian line at Greenwich helped the railroads, too. 
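The co-ordination just described, in which UTC ticks the same atomic seconds as TAI while occasional leap seconds keep it close to Earth-rotation time, can be made concrete with a small sketch. The table below is deliberately truncated to three entries: the 10-second offset at the start of 1972 and the 35-second offset after the 30 June 2012 insertion are consistent with the essay's tally of 25 leap seconds, and the 2009 entry is included only as an example. Treat it as an illustration, not the authoritative IERS record.

```python
# A minimal sketch of UTC/TAI bookkeeping: UTC lags TAI by a whole number of
# seconds that grows by one at each leap-second insertion. The table here is
# truncated and illustrative; the real record is maintained by the IERS.
from datetime import datetime, timezone

UTC = timezone.utc

# (UTC instant from which the offset applies, TAI minus UTC in whole seconds)
LEAP_TABLE = [
    (datetime(1972, 1, 1, tzinfo=UTC), 10),   # offset when leap seconds began
    (datetime(2009, 1, 1, tzinfo=UTC), 34),   # example mid-table entry
    (datetime(2012, 7, 1, tzinfo=UTC), 35),   # after the 30 June 2012 insertion
]

def tai_minus_utc(when: datetime) -> int:
    """TAI - UTC, in whole seconds, according to the partial table above."""
    offset = 0
    for start, value in LEAP_TABLE:
        if when >= start:
            offset = value
    return offset

print(tai_minus_utc(datetime(2013, 6, 1, tzinfo=UTC)))   # prints 35
```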
When the rise of rapid rail transit in the late 19th century divided the world into time zones, delegates at the 1884 International Meridian Conference declared Greenwich to be the Prime Meridian of the World — the designated starting line for all measures of time and place. The globe, like some giant orange, was sectioned into 24 equal ‘lunes’ or crescents, each one a time zone differing by one hour from its contiguous companions. Today’s divisions look a lot less equitable, with all of Europe lumped together for commerce’s sake, and jagged lines up and down the Pacific Ocean indicating how island nations align themselves with the mainland. Although trains begat time zones in the first place, the Trans-Siberian railroad cleaves to Moscow time all the way cross-country to Vladivostok on the far east coast of Russia, a distance spanning eight time zones. All the world’s longitudes continue to be calculated east or west of Greenwich, but the actual meridian line of zero degrees longitude no longer coincides with the much-photographed steel-trimmed strip in the observatory’s stone-paved courtyard. It lies some 100 yards to the east, where it runs into a Greenwich Park rubbish bin. This inadvertent relocation occurred in 1984, with the launch of the US Global Positioning System. The GPS developers had intended to match the system’s reference point to the Greenwich meridian, out of respect for tradition (to say nothing of a plethora of existing maps) but tiny technical errors — still not entirely understood, according to Stephen Malys, senior scientist for geodesy and geophysics at the National Geospatial-Intelligence Agency in Springfield, Virginia — crept into the calculations. Perhaps even more of an affront to British pride than the misplaced meridian is the fact that Greenwich Mean Time (GMT) is no longer the world standard. GMT fell out of official favour in the 1920s, for semantic reasons. For generations, Greenwich astronomers, not wanting to interrupt a night’s work by a change of date, had been advancing their calendars at noon. When they at last conformed to what the public perceived as the start of a new day at midnight, GMT meant two times that differed by 12 hours and a calendar page. That is, 16:00 on Tuesday could be mistaken for 04:00 on Wednesday. To avoid confusion, the International Astronomical Union urged its members to cease all reference to GMT after 1928. Enforcing this prescription has proved difficult, however, and the term lives on. People all over the world say GMT when they really mean UTC, or aren’t sure what they mean. But UTC is not simply a new coinage for the old GMT, because UTC is founded on the modern atomic-frequency definition of a second. Numerous other timescales coexist alongside the diehard GMT, the triumphant UTC, and the BIPM’s TAI. These additional scales are maintained for specialised purposes — for astronomy, for example, for navigation and for telecommunication. Some of them ignore the Earth’s rotation and rely instead on its revolution (assessing its orbital position via very-long-baseline interferometry with radio telescopes pointed at quasars in deep space). The various timescale names are sometimes misused even by expert horologists. 
A partial list includes GPS time, Ephemeris Time (ET), and relativistic timescales consistent with Albert Einstein’s general theory of relativity — Terrestrial Dynamical Time (TDT or TT), Barycentric Dynamical Time (TDB), Geocentric Co-ordinate Time (TCG) and Barycentric Co-ordinate Time (TCB) — as well as the uncoordinated brethren of UTC, known as UT0 (zero), UT1 and UT2. These last three versions of universal time, though related to the Earth’s rotation, do not increase by leap seconds. For several years before the birth of the leap second, time authorities assured the agreement of UTC noon and Greenwich noon with an elastic approach — repeatedly altering the frequency and offset of time broadcasts relative to atomic standards. In 1972 authorities adopted the more straightforward approach of using a constant frequency coupled with uniform leap-second additions as needed. But by 1992 the leap second had come under scrutiny by critics who rued any sort of intrusion in the flow of time. Another 20 years have hardened the arguments pro and con. Opponents of leap seconds cite the prospect of ever more frequent insertions and their attendant (unknown) costs. They also see the spectre of potential catastrophe lurking in a leap second’s elapse. Defenders, for their part, can point to the leap second’s four-decade safety record: despite fears voiced before the first insertion — ‘Planes will crash!’ — no such disaster has yet marred the intercalation of a leap second. If you could disconnect yourself from the world long enough — a month, perhaps — you could discover your body’s internal time. Over and over in the Charlottesville discussions, I heard the phrase ‘If it ain’t broke, don’t fix it’ emerge from the mouths of speakers who would otherwise be loath to say ‘ain’t’. Fearing that changing the nature of UTC might introduce new problems that no one has anticipated, the majority of delegates favoured maintaining the current system of civil time. But the decision is not theirs to make. The fate of the leap second rests with the Radiocommunication Sector of the International Telecommunication Union (ITU-R), a specialised agency of the United Nations with headquarters in Geneva. The last time the ITU-R’s subject expert members met to vote on the momentous question of the leap second, in 2012, they decided not to decide, and postponed the decision to 2015. Doing away with the leap second means decoupling civil time from the Earth’s rotation — something that has never been attempted by any known civilisation. The demise of UTC as we know it would alter the meaning of ‘day’ that underpins legal, cultural and religious practices the world over. UTC would no longer be ‘coordinated’ in the same fashion. Although it would take a few thousand years for Saturday to fall on Sunday, that event would loom on the horizon. Much as I would hate to let go of the leap second, I can see its demise as another step in the bending of time to human purpose. In the future, long after our methods of timekeeping have abandoned any connection with solar time, our body clocks might retain the only vestige of it. Inside the human brain, a master clock called the suprachiasmatic nucleus drives the circadian rhythm of everyday functions such as sleeping and waking. In real life, these rhythms yield to the influence of time cues, from the rising and setting of the Sun to the press of social obligations.
But underneath the schedules dictated by office hours, classroom periods, mealtimes, bedtimes and the like, an ancient rhythm thrums, loosely tied to the Earth’s rotation. If you could disconnect yourself from the world long enough — a month, perhaps — you could discover your body’s internal time for yourself. I had that opportunity some years ago, as a volunteer subject in a research project at the Montefiore Medical Centre in New York, funded by the US National Institutes of Health and designed to monitor behaviour in the absence of normal time cues. The researchers hoped to discover fundamental truths about human circadian rhythms that might explain why certain medical treatments proved alternately helpful or harmful depending on the time of day they were administered. I spent several weeks in the laboratory’s small, soundproofed apartment, where the boarded-up windows kept me from knowing whether it was day or night outside. Though I lived by my own internal schedule, separated from everyone I knew, I was not isolated. Attendant technicians came and went, using my waking hours to test my alertness, monitor my temperature and collect frequent samples of bodily fluids. When I declared the end to a day, they wired me with electrodes to chart the stages of sleep and dreaming through my ‘night,’ which fell when I turned out the overhead light. I never knew what time it was. The truth seemed tragic to me then: I belonged to a 25-hour species trapped on a 24-hour planet I reacted as every other subject had when tested in this bizarre environment: I adopted a 25-hour day, which I achieved by unwittingly going to bed an hour later every night. Aside from the fact that I felt as confined and observed as any laboratory guinea pig, I had what others often cry for — an extra hour in the day. But I didn’t know it, because I never got to compare the start and end of my day with the outside world. The technicians preserved the integrity of my private time‑reference frame by removing their wristwatches and refusing to allude to time. A sign posted for their benefit outside my door reminded them: ‘Do not say Good Morning. Say Hello.’ I felt as normal as one could feel under the circumstances. When the experiment ended, I was shocked to see how far I had drifted from clock time. The truth seemed tragic to me then: I belonged to a 25-hour species trapped on a 24-hour planet. And yet, as I’ve come to accept, the 25‑hour rhythm coupled with the ability to adjust to ambient time cues confers an adaptive advantage. All of the Earth’s creatures possess a circadian rhythm somewhere in the neighbourhood of 24 hours, give or take an hour. The flexibility allows us to cope with changing times. It lets the scientists who remotely control the Mars exploration rovers adopt the Martian sol — which lasts half an hour longer than an Earth day — as their time frame, even while they continue to live and work in Earthly surroundings. On the final day of the Charlottesville conference, the organisers escorted our group to nearby Monticello, the former house and gardens of Thomas Jefferson, now a museum. We all wanted to see the sundial that the third president had designed for his residence, and also his great seven-day clock with its weights and chimes running through holes in the entrance hall floor. As the name Monticello suggests, the site tops a small mountain. We drove up to it through wooded hills, turned many shades of green by the rainy spring. As we got out of the cars, a loud sound assailed us. 
A buzzing, whining hum filled the air and caught everyone’s attention. It came from the masses of cicadas that could be seen on every tree. Recently roused from their species-specific 17-year sleep by some evolved biological alarm, the cicadas had awakened to find mates and procreate in a burst of song. Their frenzy bespoke some insect zeitgeber, insensible to us, that tuned the tiny creatures to this sunny day. Jefferson’s sundial, of spherical design, recalled the era when local solar apparent time sufficed to coordinate all the activities on his estate. The hour shown on his dial would have differed a bit from the time on his neighbours’ farms to the east or west, but with no consequences to anyone. Inside the manor house, the great mechanical clock with its many moving parts told a different kind of time — the more consistent ‘mean solar time’, which ran as much as 15 minutes ahead of or behind the sundial, depending on the season. Daylight Saving Time, though proposed by Jefferson’s great friend Benjamin Franklin in 1784, still lay more than a century away from implementation. At the end of the house tour, I walked through Jefferson’s vegetable gardens, past the small cemetery where he lies buried, and on down a forest path toward the parking lot. The sound of the cicadas seemed to reach its crescendo here. For a moment, I wondered how their noisy trilling might resonate with the frequency of caesium. | Dava Sobel | https://aeon.co//essays/what-would-happen-if-clock-time-no-longer-tracked-the-sun | |
Dance and theatre | Life on the road with a rock band: memories blur, cities blend. Only in the frenzy of performance does the world pause | I’ve just returned home after a long tour. I’m in a band, one of those travelling circuses of night salesmen, and we ply our shows all over the world. When we’re offered a gig, we usually say yes, as much for the experience as for the money. One day, presumably, people will stop asking. In the meantime, the long months away make the cities and sensory impressions blurry. Musicians on the road make up a kind of parallel world, criss-crossing each other all over the planet. We tend to eye one another warily in backstage rooms and festival catering tents, like co-workers meeting somewhere unsavoury. Our lives are familiar, and yet we hurry past, too close to our peers for real comfort. Outside this parallel world, I rarely meet people who travel as much as I do, or in quite the same manner: which is to say always and quickly, seeing very little. Who would choose to? On this last tour, we passed through dozens of cities in Australia and Asia, cities with exotic names that many people can only dream of, and boring ones, too. Plenty of distant places are just as mundane as home, but still their diversity is flattened out by the brutal efficiency of our schedule. We have a single night to get to know a place, and only sometimes the morning, before we hit the road again. I skip through time zones, caught in an impossible pursuit — to be everywhere at once. It’s strange, but travel teaches me more about time than it does about place. For instance: you can cover a lot of ground in a month’s time. The sheer density of minutes in a day is staggering. You can wake up in Kuala Lumpur and then rest your head 220 miles away in Singapore. You can begin a day travelling by taxi in Indonesia, with a driver who would take you to the Moon for $3, and finish it waiting in the rain for yellow cab at JFK airport, where all the money you have left will hardly get you to Manhattan. Touring isn’t an extravagance — live shows are how we get by. We’ve been a band for ten years, and never, in the wobbly arc of our career, have record sales come even close to covering our food and rent. I can’t speak for everyone, but this seems about par for the course for those in the middling-celebrity strata of the indie music hierarchy. It’s much worse for the basement bands and the upstarts. When you’re moving quickly, the only things that appear immobile are the ones moving with you Occasionally, things come along that afford us some wiggle room: maybe a festival paycheck, or a spot on television or in a movie. But mostly it’s the workaday trudge of tour that sustains us. And in the vagaries of a creative life, it’s the tour that feels the most like a job. We clock in, load up, fulfil our contracts, sign paperwork backstage and, hopefully, some records at the merchandise table. We keep receipts, we book flights, we pay taxes. Our nights don’t end with groupies or lines of cocaine. My greatest indulgence is a beer sucked down in front of the TV in a hotel room as I struggle to catch up with the programmes that mark the passage of time in other people’s lives. When I drift off, I have a recurring dream in which I gaze out the window of a passenger van at a landscape shifting too quickly to discern. All I see is mist, a space eaten by time. When you’re moving quickly, the only things that appear immobile are the ones moving with you: that’s basic relativity. 
It’s as though you’re standing still, or at best slowly tugging a suitcase, while the world spins around you. I focus with manic intensity on my personal effects: my phone, my jacket, the nylon sleeve that holds my passport. Bands on tour will often refer to their van as their home. They’re not exaggerating. It’s one way to keep sane as time zones shift around your body, and languages, too. Any constant can become a kind of home, even the music itself. My technique is to buy things everywhere I go, not as souvenirs but because I’m trying to weigh myself down on to the world. My suitcase is an albatross of Scandinavian toothpastes and Japanese notebooks. Perhaps these purchases will help me to remember something other than a blurry view from a car window, even if it is just the memory of the moment I bought them. It’s hard to look beyond my artificial bubbles of stillness, and even harder to imagine that, as I write this, chugging caffeine at home in California, the ferry still shuttles across Kowloon Bay, crossing wakes with the last remaining junk boats in Hong Kong harbour. Or that those temple macaques in Bali still sell each other out for a chance at some peanuts. That the curries still simmer, money still continues to change hands, and all the worshippers still pray to their gods. How can it all go on even when I’m not there looking at it? This thought has struck me, like a spasm, many times, especially in large cities. I find it paralysing and magnificent in equal measure. I felt nothing, except for self-consciousness and the impulse to snap a dozen pictures I haven’t looked at since In touring the world I’ve only ever sliced person-sized cross-sections through a massive simultaneity of experience. Like cuts in flesh, they heal up behind me, save for a few scars here and there where I might have managed to make contact, or where I ended up in the background of someone else’s holiday snapshot. On some scale, this is true for everyone. ‘What is life,’ wrote George Satayana in his essay ‘The Philosophy of Travel’ (1964), ‘but a form of motion and a journey through a foreign world?’ Friends often ask which were my favourite places to visit, but the truth is I can’t hold them all in my mind. What makes a place nice to visit, anyway? The pleasure it provides for its visitors? Who am I that Thailand must delight me? I’m horrified when a country is described as having a ‘warm people’, as though each citizen must please the sweaty strangers who choke the streets. Thailand — or any other place — can only exist. And by existing can only remind the traveller that other modalities are possible, that no way of living is a natural consequence of being alive on this planet. That should be enough. Travel is inherently narcissistic. Even if we’re looking to be knocked off our axis, we’re still in the business of self-improvement. People want to go to faraway places and return changed. A lot rides on this expectation. We hunt for perspective, for miraculous connections, but when these moments happen, we don’t always recognise them — or we look in the wrong places. There is a collection of jungle villages around Ubud on the Indonesian island of Bali, which is as remote and humid and disorienting as any foreign place. The landscape is clogged with temples spewing incense, and yet long lines of Western tourists snake out the doorway of the single mountain temple that featured in Elizabeth Gilbert’s book Eat, Pray, Love (2006). It’s easy to laugh at these people. 
It’s easy to say that they are missing the point, but are they? Maybe they’re just mainlining into the essence of what travel is always already about: pat revelations about the self. When we were in Bali, we went to a different temple, and our dirty tennis shoes looked ridiculous beneath the stiff embroidered sarongs we were commanded to wear. I felt nothing, except for self-consciousness and the impulse to snap a dozen pictures I haven’t looked at since. The strangest dissonance of this life is the uneasy balance we strike between chaos and routine. During our shows, we throw the force of our energy into manifesting an unbridled, spirited experience for the audience. They expect this, every night. They’ve paid good money, and, like any travellers, hope to return home with a story. It’s not quite the mountain temple, but to the ecosystem of acolytes that emerge at our shows, the band is a vehicle for enlightenment, or, at the very least, experience. To admit that we perform in other cities violates the terms of an unspoken covenant, in which we exist solely for them, and they for us The concert is a finite moment in time. It has intent. The band is summoned to the proscenium; any implication of our grubby tour van, any residue from last night’s show, any evidence of our existence outside the stage is erased by the sheer collective will of the crowd. We are rendered placeless, timeless, without context. It’s taken me 10 years on the road to realise this. Just as it boggles me to imagine that the places I’ve visited continue to exist in my absence, the audiences we play for firmly believe that we are ephemeral. To admit that we perform in other cities, night after night, violates the terms of an unspoken covenant in which we exist solely for them and they for us. But we must keep moving, and so must they, countless trajectories of human life careening outwards from the moment the lights go down. A concert, in this sense, isn’t kinetic: it’s stillness itself, a world briefly paused. One of my fondest tour memories is going to the movies in North Platte, Nebraska. After weeks of beer-stale rock clubs, only such an everyday pleasure can reset your barometer to normal. Suddenly, everything can become magic again. At the end of the night, the cashier — a small-town misfit, not unlike those I’ve met in countless venues from Riga to Xi’an — gave us a garbage bag filled with the theatre’s leftover popcorn. He didn’t know us; he just knew we weren’t from North Platte, and that was enough for him. The popcorn was stale and cold and buttery, and the town was closed for the night. We lugged the bag, nearly the size of a person, to a moonlit field out back. Like teenagers, we tossed it at one another in great handfuls under the wide, feathery sky, before emptying the whole bag on the grass for the birds to eat. The next morning it was gone, and so were we. | Claire L Evans | https://aeon.co//essays/the-world-blurs-when-i-m-on-tour-with-the-band | |
Cities | Reclaiming the streets through civic participation does more than change the city: it creates citizens | We’d gathered by the ticket booth outside the South Bank Centre in London at 6.30pm. It was the last Friday in May but there was a slight chill in the air, so we clustered in groups — couples sticking together, friends facing in towards each other. Those who’d arrived alone looked around, checking cell phones and waiting. One man had taken the title of the event — a ‘Mini Midnight Run’ — literally, and wore only jogging gear. He hadn’t read his notes, which explained that proceedings would end at 2.30am. Another man, a young American backpacker, had stumbled upon the listing by chance and wanted to do something ‘epic’ on his last night in London, ‘before heading to Europe’. Like most of the others, I looked as if I had just come from the office, but I’d changed into trainers and pulled on a thick coat, expecting the worst of the weather. Unsure what lay ahead, we enjoyed not knowing. Over the past few years, the young Nigerian-born artist and poet Inua Ellams has been leading impromptu ‘runs’ around London by night, searching the streets for alternative stories and new configurations. On his blog, he writes that the project originated in serendipity: One autumn evening, in 2005, a friend and I lost patience waiting for a bus, and on a whim decided to walk the bus’s route. Six hours later, we’d drifted across London from Battersea to Chelsea, Victoria, Vauxhall, the West End into the small hours of the morning… surprised at how fresh and energised we felt, marvelling at the deserted streets of the city, without its hustle and bustle.Since then, Ellams has organised ‘Runs’ in London and, recently, Barcelona, with troupes of artists, performers and ordinary participants such as myself, the urban drift punctuated by games, tasks and challenges. On our evening, however, his first task was to turn us from an atomised cluster into a group. Commandeered into closely arranged lines, we introduced ourselves to each other. We played a team game to give us a sense of common purpose, and sang together in public, in front of the groups drinking outside the Royal Festival Hall. Then we set off, to ‘reclaim the city’. As we walked, we began to talk to each other; newly bonded through song, we went in search of what we had in common, swinging across the river and plunging into darkening streets. On one quiet corner, behind the Savoy Hotel, we played another game. In a paved precinct off St Martin’s Lane, we devised short plays that we performed for one another. There was something exhilarating and liberating in shedding our inhibitions, allowing ourselves to explore spaces that we commonly crossed without consideration, and to enjoy the company of strangers. By 1.30am, we had found our way to Chinatown, radiant with neon glare, and still thronged with strings of bodies weaving through the streets. It was time to sit down and take stock with bowls of noodle soup. Here, Ellams explained why he conducted these tours, most of which run all night, from 6.30pm to 6.30am. There was something special about watching the city transformed by night and then seeing dawn bring life and light back to the empty streets, he told me. Others who had been on previous runs spoke of sublime moments — an exquisite recital outside a Tube station; or a sense of community-building, arising out of a game of ‘bag-ball’ played on a grassy plot that was normally out of bounds. 
By using an urban place against the grain of common practice, one owned it — albeit for the briefest moment The purpose of the Midnight Run, it seemed to me, was about reconnecting with other people — as well as our civic selves and the often fleeting ties that make up everyday metropolitan life. It is so easy to forget that cities are brilliant at bringing people together, forcing them to interact and to benefit from one another. Walking back across the bridge with the American backpacker after the run, we talked about the evening’s unexpected joys. We’d spent six hours traversing streets I knew far too well — but only as background scenery to a hectic metropolitan life. Though my attitude to these places was unchanged, I’d been reminded of something important about the city in which I live: for a moment, I’d been opened to chance encounters and relationships. As my new companion and I walked, we agreed that while we might never see our fellow runners again, it had been good to meet them, to know them for an evening. And this, we felt, said something about the power of the city. Later, Ellams told me how he’d taken a group of 80-year-old Londoners on a run that revealed that, for the elderly, the city can be a territory of no-go areas, hidden threats and restrictions. The octogenarian ‘runners’ were members of the Entelechy Arts group in south-east London, run by the artistic director David Slater. As they moved through the city, they had talked about their fears — how places no longer seemed open, how freedoms had been lost. At the Royal Society of Arts, they ‘collected data’ — the phrase is Slater’s — through exercises, games and conversation, and ‘played with themes of place, ownership, citizenship, and social change’. But mostly, Slater told me, ‘we just had fun’. Stories were told and written on a wall with marker pens, and on the way home, Slater said, ‘imaginations [were] firing’. When I awoke on the morning after my run, Istanbul dominated the news. Police had used tear gas and water cannons to disperse a crowd who’d gathered in Taksim Square to protest its redevelopment as a shopping precinct. The project had been given the green light by the city and the government, but locals were concerned that one of the few central places where people could congregate outdoors was being given over to yet more shops, ridding the city of the trees under whose branches they had enjoyed the shade. Later that weekend, a larger gathering formed, this time protesting the heavy-handed treatment meted out by the police. Amid the crowd, a couple danced a tango, entwined and poised, their faces covered in gas masks. While my pleasurable nocturnal adventure in London had none of the significance of that balletic dance in Taksim Square, I discerned a slender connecting thread: both actions understood that by using an urban place against the grain of common practice, one owned it — albeit for the briefest moment. For me, it was a reminder of what I had lost in my everyday use of the city: a sense of wonder, an openness to serendipity. For the dancers, it was a symbol of a steadfast refusal to let something go, to tighten the grip and find the connection between the place and the dance. Henri Lefebvre, the French Marxist philosopher (and most famous taxi driver in Paris), observed in the late 1960s that there are no neutral places in the city; that the different threads of power find their way into every crack of the metropolis, constructing a cartography of exclusions and barriers. 
This map can sometimes be so ubiquitous that it appears invisible. Yet it is manifest in the ways that a city is divided into zones, neighbourhoods and no-go areas. For Lefebvre, the city was both the problem and the solution to the quandaries of our everyday lives. Within this political perspective, the people have a common right to utilise city space without restriction. Lefebvre argues that viewing those spaces as the theatre for everyday life changes our sense of belonging: being part of the city is no longer determined by ownership or wealth, but by participation. In consequence, our actions change and refine the city. This idea, which Lefebvre called ‘the right to the city’ and developed in his influential book Le Droit à la Ville (1968), remains a potent hope. It was reinforced in the autumn of 2011 at Occupy camps around the world, when ordinary people transformed urban spaces through direct action. That hope is also present in numerous campaigns (in Bogotá, or Rio de Janeiro, for example) that stress the necessity of efficient bus routes to a city’s poorest parts, and in studies of the unequal distribution of health provision, education, green spaces, and housing. It was this kind of inequality, initiated by a rise in bus fares, that inspired the recent protests in Brazil. There are few places left in the city where you can sit down without first having to buy a coffee Reclaiming the city can take many forms. For example, three weeks after the gas-masked tango, after running battles, police violence and threatening rhetoric, another portrait emerged from Taksim Square. On 17 June, following a weekend in which the police had succeeded in clearing the square, yet while the stink of tear gas still hung in the air, the performance artist Erdem Gunduz showed up at 6pm and stayed, stock-still, facing the Atatürk Cultural Centre, with his hands in his pockets. He remained there for eight hours; by 2am — when police moved in — he had been joined by 300 people. His silent protest seized the space back from the authorities without saying a word. As the geographer David Harvey notes in his book Rebel Cities (2012), the creation of common spaces within the city — public areas where we can congregate without fear, or without the constant demands of the market — is stalling. Enclosed places free of CCTV, private security, Starbucks, gates, or regulations are becoming increasingly rare. The privatisation of public space is proliferating and is often too subtle to notice. In 2007, the San Francisco art-activist group Rebar tested the openness of various privately owned public spaces in the city, such as courtyards and gardens in front of office blocks. In a series of organised ‘paraperformances’ — group yoga sessions, flying kites, chanting — Rebar discovered how public these spaces really were. Often, within minutes of starting up, the activists were confronted by building managers and private security staff and told to stop. As Harvey notes, it is not enough to define a place as a common ground, because it will always be contested. Rather, the preservation of the commons is a continuous act, a process of stating and restating the public use of a place. We have lost many of the public spaces of the city without knowing it. As the journalist Anna Minton notes in Ground Control: Fear and Happiness in the 21st-century City (2009), the ‘urban renaissance’ of many city centres has resulted in regenerated zones ‘designed purely with shopping and leisure in mind’. 
These new ‘malls without walls’ are in the hands of private owners who want maximum returns on their investment and, as a result, extinguish much that makes the city human. There are few places left in the city where you can sit down without first having to buy a coffee. This idea of owning public spaces is complex. The economist Garrett Hardin, whose essay ‘The Tragedy of the Commons’ (1968) was published the same year as Lefebvre’s influential work, said that, left to their own devices, individuals are mostly likely to behave selfishly and abuse common property. In contrast, in Governing the Commons (1990), the American political economist Elinor Ostrom showed that the commons was not a zero-sum game. It led to the development of far better systems of management, compared with private ownership or government control. Yet, as David Harvey reminds us, this ownership arises not just from contracts or negotiations, but from action. The reclamation of civic space does more than change the city: it creates citizens. Through violent and defiant protests, provocative performances, citizen action, even unsolicited horticulture, the battle for civic space continues to reinvent itself. Sometimes, the action starts in reaction to the state. Other times, it kicks off because the powers-that-be are too slow to react to events, and local residents or campaigners take matters into their own hands, taking the urban domain to be a common realm rather than ‘someone else’s problem’. The battle cry can be strident. The SlutWalk movement began life in Toronto in April 2011, after a local police officer named Michael Sanguinetti, advising students on personal safety, said: ‘women should avoid dressing like sluts in order not to be victimised’. Organised in anger, the original event saw women take to the streets in fishnets and bras, waving banners that read: ‘I’m a slut, don’t assault me’, or ‘Slut Pride’. The protest sparked copycat walks around the world. Reclamation of the city begins with the realisation that ‘that’ place, whatever its problems, is in fact ‘our’ place The Better Block campaign in the US couldn’t be more different. Set up in April 2012, when local activists and planners decided to do something for themselves, Better Block focuses on improving a single block in an underused neighbourhood and making it more people-friendly by providing cycle lanes, trees, lighting and pop-up businesses such as cafés with outdoor seating. Better Block are not alone: Depave, a group from Portland, Oregon, tears up unwanted asphalt and returns the land to gardens; the Edible Bus Stop in south London is a guerrilla gardening group that turns neglected spaces into vital green oases. But what these groups all have in common is an understanding that place and action are connected. More; they demonstrate how actions can potentially become moments of transformation. The demand for common ownership of a space requires a certain type of citizen who is willing to roll her sleeves up and muck in. This is what the sociologist Richard Sennett so eloquently identifies in his book Together: The Rituals, Pleasures and Politics of Co-operation (2012); co-operation is a skill that we have to learn and constantly practise. It’s not easy, but it is worth the trouble. I found a surprising example of this valuable lesson when I was travelling in Bangalore in March 2012. 
The Ugly Indian is an anonymous collective of successful business people from the IT and financial services industries who were fed up waiting for local government to make their city clean and liveable. Stealthily, they’d taken over control of a quiet residential street in a tiny neighbourhood, and readied themselves for a makeover. When I arrived, volunteer workers from a nearby office block were already busy, using gloves, pickaxes and shovels to move piles of rubble and rubbish, overflowing from the pavement onto the road. One of the office managers, pausing for a cigarette, admitted that for many of these educated desk-jobbers this was a first taste of manual labour. As they scraped and shovelled in their weekend gear and business shoes, surgical face-masks protecting them from the dust, they were already starting to make a difference. They’d cleared and cemented a new, even pavement and a well-crafted kerb. Now they started to talk about planting bushes and flowers. For the Ugly Indian, the solution begins with ‘the 50 feet in front of your house or your office’. The local authorities were clearly not about to change, but this was no reason ordinary citizens should not attend to the city themselves — with ‘no lectures, no moralising, no activism, no self-righteous anger, no confrontation, no arguments, no debates, no pamphlets, no advocacy’. The project promised its volunteers nothing more than a morning or two of hard graft. Nonetheless, as one Ugly Indian told me as we sipped tea poured from a canister on the back of the chai wallah’s bike, the simple act of moving earth, clearing rubble and turning cement ‘sensitises’ the group to its neighbourhood. It shows ordinary citizens how, just by doing something, ‘that’ problem turns into ‘our’ problem, and eventually into ‘our neighbourhood’. ‘Many people ask us how they can set up their own Ugly Indian group,’ the group’s organiser told me. But ‘I tell them that they don’t need to, they just need to get on with it.’ And this is what connects the very different events found in Taksim Square and Occupy, at a bus stop in south London or on a street in Bangalore. Reclamation of the city begins with the realisation that ‘that’ place, whatever its problems, is in fact ‘our’ place. By reclaiming it, we might actually find that we possess the solution, and in the process, we might just change ourselves. | Leo Hollis | https://aeon.co//essays/cities-thrive-when-public-space-is-open-to-all | |
Cognition and intelligence | Shyness is a part of being human. The world would be a more insipid, less creative place without it | If I had to describe being shy, I’d say it was like coming late to a party when everyone else is about three glasses in. All human interaction, if it is to develop from small talk into meaningful conversation, draws on shared knowledge and tacit understandings. But if you’re shy, it feels like you just nipped out of the room when they handed out this information. W Compton Leith, a reclusive curator at the British Museum whose book Apologia Diffidentis (1908) is a pioneering anthropology of shy people, wrote that ‘they go through life like persons afflicted with a partial deafness; between them and the happier world there is as it were a crystalline wall which the pleasant low voices of confidence can never traverse’. Shyness has no logic: it impinges randomly on certain areas of my life and not others. What for most people is the biggest social fear of all, public speaking, I find fairly easy. Lecturing is a performance that allows me simply to impersonate a ‘normal’, working human being. Q&As, however, are another matter: there the performance ends and I will be found out. That left-field question from the audience, followed by brain-freeze and a calamitous attempt at an answer that ties itself up in tortured syntax and dissolves into terrifying silence. Though this rarely happens to me in real life, it has occurred often enough to fuel my catastrophising imagination. The historian Theodore Zeldin once wondered how different the history of the world might seem if you told it, not through the story of war, politics or economics, but through the development of emotions. ‘One way of tackling it might be to write the history of shyness,’ he mused. ‘Nations may be unable to avoid fighting each other because of the myths and paranoias that separate them: shyness is one of the counterparts to these barriers on an individual level.’ The history of shyness might well make a fascinating research project, but it would be hellishly difficult to write. Shyness is by its nature a subjective, nebulous state that leaves little concrete evidence behind, if only because people are often too uncomfortable with their shyness to speak or write about it. For Charles Darwin, this ‘odd state of mind’ was one of the great puzzles in his theory of evolution, for it appeared to offer no benefit to our species. However, in research begun in the 1970s, the Harvard psychologist Jerome Kagan suggested that about 10-15 per cent of infants are ‘born shy’. Being easily fearful and less socially responsive, they reacted to mildly stressful situations with a quicker heartbeat and higher blood cortisol levels. At around the same time, the American animal behaviourist Stephen Suomi, working at an animal centre in Poolesville, Maryland, observed a similar percentage of shyness in monkeys, with the same increased heart rate and rise in blood cortisol. Blood testing, and reassigning shy infant monkeys to outgoing mothers, suggested that this shy trait was hereditary. Suomi’s work might also have inadvertently pointed to the evolutionary usefulness of shyness. When a hole in the chain-link fencing around the centre’s primate range gave the monkeys a chance to get out, the shy ones stayed put while the bolder ones escaped, only to be hit by a truck when they tried to cross the road. 
Until a few hundred years ago, life was lived far more in public: whole families would eat, sleep and socialise together in the same room Higher primates are social creatures, hard-wired to want to meet and mate; but there might also be some value in their being cautious and risk-avoiding, traits that might over-evolve into excessive timidity. Neither Kagan nor Suomi suggest that shyness is fixed at birth. They see it as a case study in the rich interplay between nature and nurture. Similarly, for Antonio Damasio, professor of neuroscience at the University of Southern California, shyness is a ‘secondary emotion’. Unlike primary emotions such as anger, fear and disgust — where there is a large biological and universally felt component — shyness is ‘tuned by experience’, leaving it open to a huge amount of cultural conditioning, historical variation and definitional ambiguity. If shyness is something that adjusts to different cultural and historical contexts, then it must surely have taken on oppressive new forms with the emergence of modern notions of privacy and private life. Until a few hundred years ago, life was lived far more in public. For example, it was quite normal for people to urinate or defecate in public places. Even in private houses, whole families would eat, sleep and socialise together in the same room. Then, gradually, bodily functions and aggressive language and behaviour were rendered increasingly invisible in polite society, thanks to what the late sociologist Norbert Elias called the ‘civilising process’ that took place in the Western world from the 16th century onwards. As greater physical and psychological boundaries grew up around individuals, particularly among relative strangers in public, there were more opportunities for awkwardness and embarrassment about when these boundaries should be crossed. More recently, shyness, like other awkward personality traits, has been seen as an affliction to be treated medically rather than as a temperamental quirk. In 1971, the psychologist Philip Zimbardo conducted the Stanford Prison Experiment, with student volunteers acting as prisoners and guards in a pretend prison in the basement of the Stanford University psychology building. The study had to be stopped a week early because the guards were treating the prisoners so brutally, and many of the inmates had adapted by internalising their subordinate positions and sheepishly obeying their tormentors. Zimbardo began thinking of shy people as incarcerating themselves in a silent prison, in which they also acted as their own guards, setting severe constraints on their speech and behaviour that were self-imposed although they felt involuntary. In 1972, Zimbardo began conducting the Stanford Shyness Survey, starting with his own students and eventually including more than 10,000 interviewees. The odd thing about Zimbardo’s work was that it revealed that feeling shy was very common — more than 80 per cent of those interviewed said they had been shy at some point in their lives, and more than 40 per cent said they were currently shy — but that it also pioneered the modern tendency to see shyness as a remediable pathology. Methods of calibrating shyness were developed, such as the Cheek and Buss Shyness Scale (after its Wellesley College researchers Jonathan Cheek and Arnold Buss) in 1981, and the Social Reticence Scale, formulated by the psychologists Warren Jones and Dan Russell in 1982. 
Extreme shyness was redefined as ‘social anxiety disorder’, and drugs such as Seroxat (also known as Paxil), which works like Prozac by increasing the brain’s levels of serotonin, were developed to treat it. As Christopher Lane argues forcefully in his book Shyness: How Normal Behaviour Became a Sickness (2007), this was part of a more general biomedical turn in psychiatry, with its ‘growing consensus that traits once attributed to mavericks, sceptics, or mere introverts are psychiatric disorders that drugs should eliminate’. A small, self-regarding part of me thinks there is something glib about easy articulacy and social skill In 1999, noting that the number of people identifying as shy in his survey had risen to 60 per cent, Zimbardo told the British Psychological Society that we were on the cusp of ‘a new ice age’ of non-communication. Computers, email and the replacement of cashiers and shop assistants by cashpoint machines and automated checkouts were all contributing to what he called an ‘epidemic’ of shyness as the possibilities for human contact diminished. Shyness, he suggested, was no longer an individual problem; it was now a ‘social disease’. Today Zimbardo’s prediction of a new ice age created by technology seems wide of the mark. On the contrary, the rise of social networking has made it normal for people to lay bare their private lives without inhibition online, from posting photos of themselves in states of inebriation to updating the world on their changing relationship status, in ways that would have seemed inconceivable a generation ago. The internet, far from cutting us off from each other, has simply provided more fodder for our own era’s fascination with emotional authenticity and therapeutic self-expression — a shift in public attitudes towards personal privacy that Eva Illouz, professor of sociology at the Hebrew University in Jerusalem, has called ‘the transformation of the public sphere into an arena for the exposition of private life’. In her recent book Quiet: The Power of Introverts in a World That Can’t Stop Talking (2012), Susan Cain worries about a world ruled by what she calls the ‘extrovert ideal’. This, she suggests, found its most malign expression in the excessive risk-taking of those who brought about the banking crisis of 2008. Much of Quiet consists of telling introverts how wonderful they are: how we think more deeply and concentrate better than extroverts, are less bothered about money and status, are more sensitive, moral, altruistic, clear-sighted and persistent. If you’re an extrovert, the book probably isn’t for you. Yet introversion is not the same as shyness, as Cain is careful to point out, although the two do often overlap. Introverts are people whose brains are overstimulated when in contact with too many other human beings for too long — in which case I am most definitely a shy introvert. If I’m in a noisy group of people for more than about an hour, my brain simply starts to scramble like a computer with a system error, and I end up feeling mentally and physically drained. Introverts such as me need to make frequent strategic withdrawals from social life in order to process and make sense of our experiences. Shyness is something different: a longing for connection with other people which is foiled by fear and awkwardness. The danger in simply accepting it, as Cain urges us to do with introversion, is that shyness can easily turn into a self-fulfilling persona — the pose becomes part of you, like a mask that melds with your face. 
There is always something we cling to in an unhappy situation that stops us escaping from it. In my case, it is the belief that lots of voluble people do not really listen to each other, that they simply exchange words as though they were pinging them over a tennis net — conducting their social life entirely on its surface. A small, self-regarding part of me thinks there is something glib about easy articulacy and social skill. My more sensible self realises this is nonsense, and that shyness (or, for that matter, non-shyness) has no inherent meaning. There is nothing specific to shyness that makes you more likely to be a nice person, or a good listener, or a deep thinker. Shyness might have certain accidental compensations — being less susceptible to groupthink and more able to examine the habits and rituals of social life with a certain wry detachment, perhaps. Mostly it is just a pain and a burden. Yet shyness remains a part of being human, and the world would be a more insipid, less creative place without it. As Cain argues, we live in a culture that values dialogue as an ultimate ideal, an end in itself, unburdening ourselves to each other in ever louder voices without necessarily communicating any better. Shyness reminds us that all human interaction is fraught with ambiguity, and that insecurity and self-doubt are natural, because we are all ultimately inaccessible to one another. The human brain is the most complex object we know, and the journey from one brain to another is surely the most difficult. Every attempt at communication is a leap into the dark, with no guarantee that we will be understood or even heard by anyone else. Given this obdurate fact, a little shyness around each other is understandable. I have often found myself in a circle of people at a social gathering that has suddenly closed up like a scrum and left me standing outside it, as its constituent parts became animated in conversation, forgot I was there, and absent-mindedly nudged me out of the loop. All my life I have fought the sensation that shyness is a personal affliction that has left me viewing our herd-loving, compulsively communicative species from the edges. Now I am coming to see it more as a collective problem, an inevitable by-product of the thing that separates us from other animals: our unique human cargo of self-consciousness. For all our need for intimacy, we ultimately face the world alone and cannot enter another person’s life or mind without effort and difficulty. Shyness isn’t something that alienates me from everyone else; it’s the common thread that links us all. | Joe Moran | https://aeon.co//essays/shyness-cannot-be-cured-it-is-part-of-being-human | |
Cosmology | Planets in potato fields and asteroids in gravel: how a Maine highway illuminates the strange history of the solar system | A few Septembers back, on a Saturday afternoon, I took a long drive, from a leafy neighbourhood in Boston, Massachusetts, to the remotest parts of the outer solar system. I set out from Cambridge in a dusty, rented Volkswagen, with my co-pilot Andrew Youdin, a planet-formation theorist from the University of Colorado at Boulder. We drove north to Maine, aiming for Aroostook County, where, stretched along close to 100 miles of small towns, big farms and empty highway, you’ll find the world’s largest three-dimensional scale model of the solar system. The outer edge of this imaginary solar system begins in an easily overlooked wooden box, wedged between two restrooms inside a visitor information centre in Houlton, Aroostook’s county seat. The box contains two small, off-white ceramic orbs. One of them is about the size of a large jawbreaker candy, with ‘Pluto’ engraved on a brass plate beneath it. The other orb, pea-sized and eight inches away on the box’s opposite side, is Pluto’s largest moon, Charon. A brochure from a nearby stand explains that the display is part of the Maine Solar System Model, conceived in 2000 and completed in 2003. According to a map printed on the inside fold of the brochure, the Sun lies 40 miles north, in a small town called Presque Isle. A jawbreaker model of Eris, the far-flung dwarf planet that spurred Pluto’s planetary demotion in 2006, lies 55 miles south, in Topsfield. At the model’s scale, Alpha Centauri, the closest star to the Sun, would be located a bit more than 250,000 miles away, on the dimpled surface of the Moon. To encounter a scale model of the Sun and its planets is to realise that the solar system most of us learn about as children — and continue to envision as adults — does not really exist. A typical classroom poster depicts the planets extending out from the Sun in a close-packed sequence, like stepping stones an astronaut could skip across on a journey to the stars. In comparison with the Sun, each planet is usually shown scaled up tens, hundreds, even thousands of times its actual size. The asteroid belt, if it’s shown at all, is a thick clump of jagged brown rocks — when, in fact, the real asteroid belt is so vacuous that if you ploughed through it in a rocket, you’d be lucky to come within sight of a rock, let alone collide with one. Educators aren’t really to blame: the limitations of the printed page and of the human eye sabotage any textbook’s attempt to accurately depict the vast, haunting emptiness of interplanetary space and the comparatively diminutive sizes of even the largest planets. Shrink the Sun down to the smallest typographic component — like the dot at the end of this sentence — and even Jupiter becomes microscopic, invisible. To bring a model of the solar system into the macroscale requires thinking over the horizon. Call me when you’re at Uranus and I’ll meet you at Saturn, McCartney said This planet-picturing problem was preoccupying Kevin McCartney one summer’s day in 1999 as he left a business meeting at the Houlton Visitor Information Centre and walked through the parking lot to his car to return to his office at the University of Maine at Presque Isle, where he is professor of geology and de facto head of community outreach. McCartney curates the university’s small science museum, located near his office in Folsom Hall. 
He had recently tried to revitalise the museum with a host of new exhibits, including a scale-model solar system exhibit stretching down a corridor in 1:150 billion scale, one metre per astronomical unit. Visitors loved it, and McCartney mused from time to time about making a proper three-dimensional model, where the Sun and its planets wouldn’t be paltry signs but fully fledged objects, enduring artefacts to be walked around and appreciated for generations. McCartney had once heard of another scale model along a highway in Washington State, where a simple sign marked each planet’s location. On a whim, he set his odometer in the Houlton information centre parking lot, where US Route 1 splits off from I-95, and watched as the miles ticked up during the half-hour drive to his office. In the parking lot of Folsom Hall, the odometer read ‘40’. McCartney went into his office, pulled an old general science textbook from his musty, overstuffed bookshelves, and without sitting down, looked up Pluto’s average distance from the Sun: 40 astronomical units, one for every mile between Houlton and Presque Isle. Handy coincidence, that. He took a deep breath, and consulted the book for a moment more, running the numbers. There are about 93 million miles in an astronomical unit, so a scale model on Route 1 would be 93 million times smaller than the actual solar system. At that scale, the Sun’s sphere would be 50ft in diameter — too big to build in the round, too big to be shoehorned anywhere other than Folsom Hall where McCartney had the greatest oversight and authority. It could probably be represented by a painted archway in the building’s lobby and stairwell. Pluto would be placed somewhere around Houlton. At just under 5ft wide, Jupiter would be the largest construct. He gazed up at the particleboard ceiling and let out a long, low whistle. It could be done. Soon after, he took his idea to James Brown, the director of planning and development for Presque Isle. The pair drove up Route 1 in Brown’s truck, McCartney cradling a chart of planet distances and an area map on his knees, marking Xs at proper locations. In the town of Littleton, there was a barren field where Neptune needed to be. The location of Uranus was in the vicinity of Bridgewater’s town hall. Monticello, the town between Littleton and Bridgewater, was passed over through unlucky orbital spacing. Saturn and Jupiter were both situated near potato fields outside of a town confusingly named Mars Hill. The orbit of Mars coincided with the town border of Presque Isle, and Earth, Venus, and Mercury all fell within its downtown. Each planet’s location coincided with businesses that Brown thought he could bring on board. The rest was just a matter of logistics and hard work. The inner planets and Pluto were put in place by the end of 2000. Jupiter was added in 2001, followed by Saturn in 2002. Uranus and Neptune completed the model in 2003, with a formal unveiling ceremony at Saturn on a sunny June day in 2003. Standing next to Pluto, I called McCartney, reaching him at an office number I found listed on his academic website. After I introduced myself and explained that Youdin and I would like to meet him, McCartney half-jokingly asked if Pluto was still there. Its box is mounted at eye level, and the building is open to all comers, all day. McCartney said he had a half dozen ceramic dwarf planet replacements squirrelled away in a desk drawer for when the planet goes missing. I assured him that Pluto was in place. 
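McCartney’s arithmetic is easy to reproduce. The Python sketch below runs the same numbers at one mile per astronomical unit; the planetary diameters are rounded approximations chosen for illustration, not figures supplied by the model’s builders.

```python
# Checking the Route 1 scale: one model mile per astronomical unit, i.e. about
# 1:93,000,000, since there are roughly 93 million miles in an AU.
MILES_PER_AU = 93_000_000      # real miles represented by one model mile
FEET_PER_MILE = 5_280

true_diameters_miles = {       # approximate true diameters
    "Sun":     864_000,
    "Jupiter":  86_900,
    "Earth":     7_900,
    "Pluto":     1_500,
}

for body, diameter in true_diameters_miles.items():
    model_feet = diameter / MILES_PER_AU * FEET_PER_MILE
    print(f"{body:8s} about {model_feet:5.2f} ft across in the model")

# Distances shrink by the same factor: Pluto's ~40 AU from the Sun becomes the
# ~40 miles of highway between Presque Isle and Houlton.
```

The 50-foot Sun and the just-under-five-foot Jupiter drop straight out of the division, as does a Pluto about an inch across, the jawbreaker in the Houlton box.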
Finding Pluto soaring eccentrically above the ecliptic was rather like finding a frozen mastodon gliding through the stratosphere ‘Great. Call me when you’re at Uranus and I’ll meet you at Saturn,’ McCartney said, before hanging up abruptly. As we left the information centre, we passed by a small table stacked with brochures, one of which was unmistakably celestial. It read: ‘NEW HORIZONS along the Maine Solar System Model,’ and depicted an object wending its way along the yellow centre line of an S-curving highway that stretched past the solar system’s outer planets and toward a distant star. The object was a NASA spacecraft, a planetary probe that had launched from Cape Canaveral in Florida in January 2006 on a one-way trip to Pluto. The probe is scheduled to fly by the dwarf planet in July 2015 to study its frigid surface, tenuous atmosphere, and fragile system of moons. Shortly after it launched, NASA selected the Maine Solar System Model for inclusion in an associated outreach programme. Around the same time, a vote by the International Astronomical Union (IAU) demoted Pluto to ‘dwarf planet’ status, largely due to the recent discovery of the Pluto-sized planet Eris and other large objects in a region called the Kuiper Belt, at the outskirts of the solar system. Faced with a solar system of only eight planets, or one with potentially hundreds, thanks to all those large Kuiper Belt objects, the IAU chose parsimony over profligacy. The move ignited a firestorm of public criticism and a flurry of news stories mocking pesky astronomers for ruining primary-school curricula. McCartney and company took the demotion in their stride by updating the model with Eris and another reclassified dwarf planet, the large asteroid Ceres. They also built a second Pluto some seven miles north of the first, at the orbital location where the dwarf planet would be when the probe zings by it in 2015. After snapping a few shots of the demoted former planet, the probe will keep travelling outward, and eventually escape our solar system entirely, to drift in the darkness between the stars. If you extend its trajectory upon the Maine Solar System Model, you find that, fittingly, just as the spacecraft reaches the threshold of interstellar space, it enters the depths of the Atlantic Ocean, where the continental shelf gives way to rolling abyssal plain. As we headed for Neptune, Youdin explained that, like all Kuiper Belt objects (KBOs), Pluto is easily distinguishable from the inner eight planets. A schematic illustration of the solar system would reveal that the eight major planets all reside in nearly the same orbital plane, called the ecliptic, which aligns with the Sun’s equator; they also all orbit the Sun in nearly circular, low-eccentricity ellipses. Against this celestial harmony, Pluto stands out like a sour note. For one, its orbit traces a high-eccentricity squashed oval rather than a near-circle, which causes it to spend a good fraction of each trip around the Sun closer in than Neptune. Pluto’s orbit is also inclined to the ecliptic by about 17 degrees. To capture the dwarf planet at the apex of its ascent above the ecliptic, the Maine model would need to place Pluto approximately eight miles above Route 1. It’s strange to see such a wild outlier among the outer planets, for the far reaches of the solar system tend to be rather tranquil. There is so much wide-open space out there that even next-door neighbours are quite far away. 
Orbital interactions, to the extent that they do occur, transpire on timescales of hundreds, thousands, even millions of years. Because there are no giant planets in the deep solar system, we would have expected objects there to form in placid, nearly circular orbits closely aligned with the ecliptic, and remain that way. Like mammoths that fell and perished in icy crevasses during the Pleistocene, KBOs are ancient history physically preserved in cold storage. They are pristine, deep-frozen records of events that occurred long ago. Finding Pluto soaring eccentrically above the ecliptic was rather like finding a frozen mastodon gliding through the stratosphere: something very big must have thrown it up there. Stare long enough at the jumbled bird’s nest of known KBO orbits, and curious patterns begin to emerge. Pluto is just the largest, most famous member of a large KBO swarm of what are called Plutinos that are locked in an orbital resonance with Neptune, completing two orbits of the Sun for every three that the ice giant makes. In celestial mechanics, an orbital resonance takes place when two orbiting bodies cyclically exert gravitational influences upon each other. Objects in this particular resonance siphon angular momentum from Neptune on one pass around the Sun, and give it back on their second orbit, forming a gravitational interaction that is stable on timescales of billions of years. There was a short encore performance of planet-building about a half-billion years after the main event’s conclusion A full quarter of known KBOs fall in this category. Unlike Plutinos, objects slightly further out mostly follow flattened, nearly circular orbits within the plane of the ecliptic — orbits that, just as we’d expect, seem scarcely changed since their origin in the primordial disk. For obscure reasons, objects in this population are unfortunately called Cubewanos. A final group completes the bulk census of the Kuiper Belt: scattered disk objects, or SDOs, are objects with very high eccentricities that take them more than twice as far as Pluto from the Sun. SDOs also have very high inclinations, so high that some of them orbit at 45-degree angles, relative to the ecliptic. Eris, Pluto’s same-sized cousin, is an SDO. Strange as it sounds, the prevailing modern explanation for this diverse architecture is that the giant planets did not form where they now reside, but actually formed much closer to the Sun, where they all huddled together in a region of space that stretches between the current locations of Jupiter and Saturn. The distribution of KBO orbits suggests that a few hundred million years after their formation, the outer giant planets, Uranus and Neptune, pulled up their stakes from this close-packed configuration and moved, migrating outward to new orbits, running roughshod through a thick disk of planetesimals — kilometre-scale balls of ice and rock leftover from the solar system’s formation — stirring and scattering them into the populations we see today. The planetesimals at the thin outermost edge of the disk barely felt the gravitational effects of Uranus and Neptune, and thus became the placid Cubewanos. Planetesimals in the inner portion of the disk were captured in stable orbital resonances by Neptune to become Plutinos and a handful of other resonant objects. One of them, a Pluto-sized ball of ice we call Triton, even came close enough to Neptune to be captured as its largest moon. 
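The arithmetic behind that two-for-three figure is compact enough to show. The Python sketch below combines the resonance ratio with Kepler’s third law, which for bodies orbiting the Sun says the square of the period in years equals the cube of the semi-major axis in astronomical units; Neptune’s period is rounded, and the variable names are mine.

```python
# A Plutino completes two orbits for every three of Neptune's, so its period is
# 1.5 times Neptune's; Kepler's third law (P**2 = a**3 in years and AU) then
# fixes how big such an orbit must be.
NEPTUNE_PERIOD_YEARS = 164.8                       # approximate

plutino_period = 1.5 * NEPTUNE_PERIOD_YEARS        # about 247 years
plutino_a = plutino_period ** (2.0 / 3.0)          # semi-major axis in AU

print(f"Plutino period ~{plutino_period:.0f} yr, semi-major axis ~{plutino_a:.1f} AU")
# Roughly 247 years and 39.4 AU, close to Pluto's actual 248 years and 39.5 AU.
```

The result lands close to Pluto’s real orbit, which is the point: the resonance, not chance, sets where the Plutinos live.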
Most of the inner-disk planetesimals, however, were flung into high eccentricities and inclinations to become SDOs, albeit only temporarily, for their eccentric orbits are unstable. Eventually, they will swing within range of Neptune, where they’ll begin a long process of redistribution into the other parts of the solar system. Many of these disrupted SDOs get zipped into a gravitational daisy chain that operates between the giant planets. An SDO might orbit inward of Neptune for a million years, but eventually a close encounter with Uranus will toss it closer to the Sun, into the intermediate zone between Uranus and Saturn. A few million years more, and a close call with Saturn will pass the SDO into the intermediate zone between the ringed planet and Jupiter. At certain points in this interplanetary game of hot potato, there is a slight chance that an SDO will collide with a giant planet or be captured into stability, as a moon or member of the asteroid belt, but most reach Jupiter unstable and unscathed. We can confidently say all this because this redistribution is still happening today. Indeed, astronomers have found SDOs on unstable orbits in each successive intermediate zone between the giant planets. They call these infallen SDOs Centaurs, after the mythical hybrid of man and horse, because they resemble comets in composition but asteroids in their orbits. A small plaque at the planet’s base read: ‘June 15, 2002: This site dedicated to Alvin F Reeves the Second, potato breeder, community leader’ At Jupiter, the final stage of the scattered disk’s redistribution occurs. Being far more massive than the other giant planets, Jupiter can put a serious swing into a Centaur’s orbit, easily boosting it into trajectories that can take it to the edge of the solar system and beyond. NASA’s Pluto probe took advantage of this effect when it flew by Jupiter in 2007, increasing its outbound speed by 2.5 miles per second, more than enough to hit heliocentric escape velocity. Most of the Centaur outcasts settle into orbits with periods of many thousands, even millions, of years. They form a vast, loose shell of icy debris that extends outward further than a light year from the Sun. This structure is called the Oort Cloud, after the Dutch astronomer Jan Oort, who predicted its existence in 1950. When tickled by nearby stars or galactic tides, the Oort Cloud will sprinkle a few objects back down into the solar system to become long-period comets. Some of these, by passing chance, could conceivably be perturbed back into the scattered disk on their way down, becoming recycled SDOs that can traverse the gravitational daisy chain anew. The Centaurs that Jupiter doesn’t cast out of the solar system are sent inward instead, where they manifest as short-period comets, eventually evaporating near the Sun or colliding with one of the inner planets. It was by this circuitous route that water and fertile organic compounds fell upon the Earth, helping life to originate and flourish. As Neptune made its way outward, disrupting the planetesimal disk, it stirred up a flurry of comets, turning the inner solar system into a shooting gallery, a confusion of icy crossfire. Versions of this story have been floating around for a long time, and data to support it continues to accrete. 
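To put that 2.5-mile-per-second kick in context, the Python sketch below computes the speed needed to escape the Sun’s gravity at a few distances. The constants are standard rounded values and the comparison is illustrative; it is not a reconstruction of the probe’s actual trajectory.

```python
# Heliocentric escape speed falls off with distance from the Sun as 1/sqrt(r).
import math

GM_SUN = 1.327e20      # gravitational parameter of the Sun, m^3 s^-2 (rounded)
AU = 1.496e11          # metres in one astronomical unit

def solar_escape_kms(r_au: float) -> float:
    """Speed needed to escape the Sun's gravity at r_au, in km/s."""
    return math.sqrt(2 * GM_SUN / (r_au * AU)) / 1000.0

for r in (1.0, 5.2, 30.1, 39.5):   # Earth, Jupiter, Neptune, Pluto distances in AU
    print(f"escape speed at {r:4.1f} AU: {solar_escape_kms(r):5.1f} km/s")

# About 18.5 km/s at Jupiter's distance; the flyby's ~4 km/s (2.5 mile/s) boost
# is measured against numbers of this size.
```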
In 1694, Edmond Halley made a presentation to the Royal Society suggesting that a comet struck the Earth and caused Noah’s Flood, not through any water it delivered, but through the slosh of huge land-drowning waves thrown up by the shock waves that followed the impact. The more modern version can be traced to a 1975 hypothesis from the theorist George Wetherill, who postulated that Uranus and Neptune formed hundreds of millions of years later than the other planets in a burst of activity that showered the inner system with comets. The current version of the story is supported by indisputable evidence, taken from the remnants of Earth’s oldest rocks — resilient silicate crystals 4.38 billion years old that show signs of impact shock-heating approximately 3.9 billion years ago. Radiometric dates for rock samples returned by Apollo astronauts from the Sea of Tranquility, the Ocean of Storms, and other lunar impact basins all cluster around the same time, as do the ages of impact-generated meteorites thought to originate from Mars, the Moon, and the asteroid Vesta. All of this evidence suggests there was a short encore performance of planet-building about a half-billion years after the main event’s conclusion. Astronomers have given this turbulent, ice-circulating epoch a name: the Late Heavy Bombardment. Despite a long, eye-straining search, Youdin and I never found McCartney’s second Pluto, the one positioned to represent the 2015 rendezvous with NASA’s Pluto probe. But after passing the aptly named Stardust Motel in Littleton, and doing a double-take at an oversized cobalt-coloured ‘gazing globe’ lawn ornament, we did find the real Neptune, sitting atop a tall steel pole like a blue basketball. The planet looked like it had seen better days — what first appeared to be cotton-puff clouds on closer inspection proved to be white primer peeking out in patches where paint had flaked away. Uranus and Neptune are commonly grouped with the gas giants, Jupiter and Saturn, but they are in fact a different class of planet entirely. Unlike Jupiter and Saturn, which have enormous envelopes of hydrogen and helium, Uranus and Neptune have far less gas and higher proportions of ice. They are diminutive ‘ice giants’, possessing roughly a sixth of Saturn’s mass and a 20th of Jupiter’s. Around 10 miles up the road, past Monticello and just outside of downtown Bridgewater, we passed by Uranus, which was painted robin’s-egg blue and tilted over on its side like a fallen spintop. Lacking a system of large moons and devoid of obvious cloud bands or storms, Uranus is a relatively bland planet that even astronomers sometimes struggle to love. But its odd tumbled-over axis of rotation is compelling to advocates of the Late Heavy Bombardment. ‘Here we have more evidence before us, evidence for major late-stage impacts,’ Youdin said as we rolled by. ‘Back before the solar system settled down, there really were big things that went bump in the night. Something as big as Earth could have walloped Uranus at some point, and set it rolling on its side.’ As Youdin circled the leaning ice giant, I pulled out my mobile phone, dialled McCartney, and told him we would soon be at Saturn. That Uranus and Neptune were born closer to the Sun is considered gospel by most theorists today, and not only because migrating ice giants provide a mechanism for populating the Kuiper Belt and delivering water to the inner solar system.
The more important motivation is that Uranus and Neptune are simply too large to have formed in their current locations according to how theorists believe world-building unfolds, via the stepwise collision and accretion of planetesimals that eventually reach ever-larger solid assemblages. Youdin explained that, on the prevailing theory, protoplanets hundreds of miles wide come first, then Moon-to-Mars-sized planetary embryos, and finally full-grown planetary cores up to 10 times the mass of Earth. In the gas-impoverished inner system, the cores were the finish line, and became the terrestrial planets. In the gas-rich outer solar system, the cores swept up thick atmospheres to become the giant planets. Gas and planetesimals were both plentiful where Neptune and Uranus are now, but the material moved so slowly, and was spread out over such vast amounts of space, that assembling ice giants would have taken longer than the current age of the solar system. Night after night, when he returned home from a long day of teaching classes, the sweet smell of apple pies — baked bribes for volunteers — would drift out to the sidewalks and streets surrounding McCartney’s home Glimmers of recognition that planets could radically and rapidly change their orbits within a disk first emerged in the late 1970s and early ‘80s, but the idea of planet migration still remained somewhat heretical. Centuries of consensus held that worlds did not drastically move during and after their formation. Planets were thought to grow from a disk almost like trees from soil. Perhaps they swayed a bit under the gravitational influence of gusts of gas, or clouds of planetesimals, but they were otherwise thought to be more or less rooted in their orbits. All that changed in the 1990s with the discovery of hot jupiters, supermassive exoplanets that swung around their stars in orbits closer than Mercury’s. There was simply no way such large planets could form so close to a star; migration was the only answer. ‘You’ve got to wonder how alive and well the Copernican Principle really is,’ Youdin mused as we departed Uranus for our Saturnian rendezvous with McCartney. ‘It seemed to be growing stronger over the past few centuries as we found many stars, many galaxies, and now many planets. Knowing that’s all out there makes us seem quite typical, but the details of these extrasolar planetary systems have made things much less clear. So many of the exoplanets we’ve found have high eccentricities. A high eccentricity is a scar from a violent past. If planets in our solar system interacted and migrated as much as theory suggests they had to, you wonder how exactly they managed to get back to these near-circular orbits.’ Some 10 miles past Uranus and north of the town of Mars Hill, Youdin and I crested a hill and saw Saturn below, ringed and resplendent on a thick metal pole above a pyramidal concrete base, surrounded by a small gravel parking lot. We parked and got out to stretch our legs while we waited for McCartney. A small plaque at the planet’s base read: ‘June 15, 2002: This site dedicated to Alvin F Reeves the Second, potato breeder, community leader.’ Youdin gazed up in admiration at the canted disk of steel grillwork that represented Saturn’s rings. ‘This is really quite a piece of work — the scale of the rings is a bit off, but they got the axial tilt right,’ he said. ‘The rings would coalesce into a moon if they didn’t reside in Saturn’s tidal disruption radius. 
They’re made from rock and ice, not gas, so you can see some really clean, exquisite structures — gaps opened by moonlets, resonant patterns of density waves spiralling through the different zones. Maybe a moon’s orbit decayed, and Saturn tore it apart into rings. Or the rings might have been here since Saturn’s formation. But in that case you can’t just lay them down four billion years ago and expect them to persist until today. Maybe they replenish their material from infalling moonlets and little comets.’ I picked up a pebble and arced it overhead, where it clattered and lodged in the steel disk, adding another moonlet to the planet’s ring system. Behind us, the crunch of tires on gravel announced McCartney’s arrival. We turned to see a mid-size blue hatchback pulling into a parking space. The license plate read ‘OLD IRON’, and a Jesus fish sprouting Darwinian legs adorned the bumper. McCartney wore a wiry Lincoln beard mostly faded from pepper to salt, along with a khaki newsboy cap, spectacles, and a light jacket. He leapt from the car with an energy belying his 59 years and strode forward to pump our hands gregariously in a delicate grip. McCartney has a local reputation as a brash yet gentle man. ‘I’m still amazed all this actually exists. Again, there was no money, no money at all. There were 12 schools involved in all. I have a list of the schools, the classes, the students, and other folks who helped. There are more than 700 names on the list. One of my original goals for the project was to have more than one per cent of the total population of northern Maine involved. You look at the census, and northern Maine’s got about 70,000 people. More than 700 of them worked on this, so we got past that one per cent.’ In some ways, building McCartney’s solar system might have been more difficult than gravity’s inexorable assembly of the real thing. In his quest for volunteers, he seems to have cajoled his way across the geographic and socioeconomic width and breadth of northern Maine. He had enlisted a small army of civil servants and upstanding business owners to vouch for him to the Maine Department of Transportation, which eventually approved construction. A civil engineer tweaked the blueprints for the proposed structures until they could, at least on paper, withstand a ‘storm of the century’ and persist maintenance-free for 50 years. High-school students in the steelworking programme at the Caribou Technology Centre forged the metallic cores of Saturn and Jupiter, welding steel plates together in pie sections to form a skeletal sphere. The body-shop programme fitted slices of foam into the pie sections, sanded them down until they were spherical, and encased them in fibreglass. Lowboy tractor trailers from the high school’s heavy-equipment programme ferried the assembled planets to multiple locations around Aroostook County for storage, painting, and eventual on-site set-up. McCartney slapped the concrete base of Saturn with satisfaction and told us the entire assemblage weighed approximately two tons. The bases were poured at the former Loring Air Force Base in nearby Limestone by members of the local Job Corps. Installing Jupiter and Saturn required a crane, heavy trucks, and a police escort along Route 1 to safeguard the wide load. Members of a local Kiwanis volunteering club built Saturn’s parking lot and small garden.
Teams of students from southern Aroostook vocational schools made most of the posts for the planets and moons, except for Saturn’s, which was constructed by a professional welder. Regional Rotary Clubs made small donations for the project’s meagre incidental expenses. Scout troops, 4-H youth development clubs, and a local Future Farmers of America chapter planted flowers in beds of cedar mulch around many of the models. Even when the project had taken on a life of its own and moved forward through momentum alone, McCartney’s work wasn’t done. Night after night, when he returned home from a long day of teaching classes, the sweet smell of apple pies — baked bribes for volunteers — would drift out to the sidewalks and streets surrounding McCartney’s home. We returned to our cars for the five-mile drive north to Jupiter, past fields of potatoes and a ‘Moose Crossing’ sign The planets have become a daily part of life for residents of Aroostook. They serve as convenient landmarks — a house in downtown Presque Isle is said to be ‘two miles past Jupiter’, and Saturn is a common rallying point for road-tripping caravans. Locals call McCartney when they see something amiss. Someone once snipped Eris off its pole in Topsfield, and Saturn’s rings sometimes accumulate structure-straining drifts of snow that are best knocked off. They have grown accustomed to slow-moving cars filled with rubbernecking out-of-towners, though no one is certain exactly how much tourism the model has brought in. Daylight was fading, so we returned to our cars for the five-mile drive north to Jupiter, past fields of potatoes and a ‘Moose Crossing’ sign. McCartney talked fast, but he drove faster, leaving us scrambling to fasten our seat belts and start the engine just in time to see his taillights vanishing over a hill in the distance. When we arrived, we saw Jupiter, 5ft in girth, peeking out from behind a pulled-over long-haul big rig that was using the parking lot as an informal rest stop. Following McCartney into the centre of town, we traversed the entire inner solar system in less than five minutes, with whole worlds passing by in a twilight blur, too closely packed for any meaningful reflection, ephemeral against the outer solar system’s spacious majesty. We found Ceres, the largest asteroid, serendipitously amid an asteroid belt of sorts — a gravel driveway in a construction zone for a new Jehovah’s Witnesses church. It had the dimensions of a small marble. Just a half-mile north, Mars was a hard rubber baseball sheathed in fibreglass, sitting next to a ‘Welcome to Presque Isle’ sign. A few blocks further, and we found Earth, the size of a large orange, in the parking lot of Percy’s Auto Sales, its golf-ball Moon placed just 16ft away, a concrete reminder of how short a distance we humans have flown from our home. Three more blocks and we passed Venus, a bland white orb outside the Budget Traveller Motor Inn. It occurred to me that, in a scale model that spanned nearly 100 miles from inmost Sun to outmost Eris, these six short blocks represented the entirety of the solar system’s standard habitable zone, where liquid water could potentially exist within a rocky planet’s atmospheric shroud. Tiny Mercury lurked just up the road in the front garden of Burrelle’s Information Services, and gave the greatest surprise of the whole trip. Before the discovery of hot jupiters, Youdin explained, no one really understood just how much empty space lies between our inmost planet and our star. 
Everyone imagines Mercury hellishly close in, scraping its rock-and-iron face against the solar photosphere, but the scale-model Mercury sits more than three football fields from the Sun. With so many exoplanetary systems sporting packed-in planets, astronomers are baffled as to why the first 30 million miles out from our star are so barren. Inside Folsom Hall, McCartney showed us the Sun, a 50ft yellow-painted archway, and the modest collections of the Northern Maine Museum of Science. A number of empty glass cases suggested that the museum was in decline. McCartney said he was working on grant proposals to revamp several displays. He had been the driving force behind the museum well before its opening in 1996, and had begun planning its creation not long after his arrival at the university in 1988. Nowadays, a third-grader comes through here, and maybe they don’t learn a whole lot, but they find out that science is neat ‘People tell me that if I die, the museum is toast,’ he said, ferrying us past his hallway-scale solar system and cases containing the jawbone of a whale and an embalmed sea turtle. In his office, McCartney said he was trying to find time to prepare for a trip to a scientific conference in Poland. He’d been invited, he told us, because his research had recently ‘gone viral’. He pulled what looked like a black-and-white photograph of a pinwheel from somewhere in the stacks of papers and books that threatened to overflow his desk. The pinwheel was a silicoflagellate, a member of a relatively obscure group of marine single-celled plankton. Their microscopic skeletons constitute approximately one per cent of the silica found in sediments on the deep sea floor, and are used to date core samples and constrain oceanic environmental conditions over the past 100 million years. McCartney is one of the world’s foremost experts in their morphological classification and distribution. On a research sabbatical in 2009, he found and described two new genera and 18 new species. ‘I’ve been trying to find the environmental reasons for certain shapes,’ he told us. ‘There’s a lot to figure out, and I doubt we’ll get it all done in my lifetime. This is just my little brick for the cathedral of science.’ I wondered aloud why he spent so much of his time on side projects like the museum and the model instead of devoting more of it to his research. The corners of McCartney’s mouth turned down for a moment, then he said: ‘I never had anyone teach me about science when I was a kid, and my parents weren’t very encouraging. I remember my first day of school in fourth grade. We had to talk about what we’d done for the summer. One guy came up and said he’d gotten a microscope and had been using it. I raised my hand and asked what a microscope was. I had no idea. Of course the whole class erupted in laughter at this silly, stupid kid who came out of nowhere not knowing anything. So I got into the science section of the school library and read a book. By the end of the year I’d read them all, shelf by shelf. ‘My interest caught fire, but I’m convinced that was only because there was already gunpowder on the ground. What laid out the gunpowder was a visit I made to a museum in second grade, the one at the Academy of Natural Sciences in Philadelphia. Nowadays, a third-grader comes through here, and maybe they don’t learn a whole lot, but they find out that science is neat.
That’s the role of a science museum: to be there for that little kid, to give them this precious experience, and years later, who knows how long, that experience flowers. You won’t be there when the flowering happens, but you can be there when it begins.’ | Lee Billings | https://aeon.co//essays/driving-the-solar-system-is-a-lesson-in-space-and-time | |
Philosophy of science | I am a geneticist, my sister works a dairy farm. Our lives are so different we might as well be living in parallel worlds | ‘I had to take a sick baby goat to bed with me last night,’ my sister said. ‘I found her lying in a corner of the greenhouse barn getting ready to die.’ ’Did she make it?’ I asked. ‘Yep,’ said Jennifer. ‘I tubed her and gave her some electrolytes when I brought her in, fed her and wrapped her up in a towel, and took her up to bed. She peed all over me around 5am, so I brought her downstairs and put her in the barrel with the two boys. It’s a bit crowded, but they’re all going out to the barn today anyway.’ Jennifer and her husband Melvin work Polymeadows Farm, a small goat dairy farm and dairy plant in Vermont. They are currently milking about 120 goats. During kidding season, twice a year, the newborns spend their first night in a barrel of hay in the kitchen. This is important during Vermont winters, but also in summer, so that Jennifer knows the kids are healthy before they go out and join the rest. My sister and I live very different lives. She’s a dairy goat farmer and I do genetics research at Penn State University in the middle of Pennsylvania. I spend much of my day at a computer in my small office, or sometimes in the genetics lab that I manage, and she spends her days outdoors, haying, watering and graining her goats, bottle-feeding the babies, milking the dams, or in the dairy plant making cheese or yogurt and bottling milk. I go up to the farm as often as I can, sometimes three or four times a year. The days there are endless in the way that the long summer days of childhood are, capped by the kind of exhaustion that overtakes you when you haven’t remembered to pace yourself. Which you don’t as a child, because why would you? And at the farm you can’t because there is always more to do. When I’m there, the day ends for me only when I’m too tired to do any more, but often for Jennifer (and always for her husband Melvin) the day doesn’t end until well after midnight, when the last of the goats has been milked, the milking machine sterilised and the milking parlour hosed down, the barn checked for newborns, the yogurt put to bed. My sister calls me early every Saturday morning while she’s doing chores: bottle-feeding the youngest kids, haying and graining the rest, or getting ready to go to the farmers’ market. She tells me who’s about to give birth or who just did, what she’s got to sell at the market, what the weather’s like and whether Melvin thinks he’ll be able to hay that day. Fifty years on from Snow, the idea that it’s possible to be well-educated in both science and the humanities is pretty much a pipe-dream One Saturday in early spring, she told me they had plans to go to a play that evening. I was surprised because weeks can go by with Melvin never leaving the farm, and even then it’s usually work-related. Jennifer sometimes jokes that he’s allowed time off only once every two weeks — and then only briefly. I emailed the next day to ask her about their night out: Me: How was the play? Jennifer: We didn’t go. Had a goat in labour — having trouble — at the last minute, then had to feed the little ones after that, and have had to be in and out checking on that doe — she had a nice little girl, but keeps looking like she wants to have another. But I’ve felt inside her twice and can’t find another. Then we went down to shovel out the calf barn. 
Can’t get anyone else to help with that, because it’s the weekend, and Easter at that!! The life of a small dairy farmer is demanding; even an evening off can require weeks of advance planning, only to be waylaid at the last minute by an animal in need. Being so tied to the cycle of life is a rarity, now that so few of us make our living off the land. And most of us live such non-physical lives that we have to make a point of getting exercise. Jennifer doesn’t have much patience with exercise per se, but she usually tells me this while she’s lugging two five-gallon buckets of water up the hill to the goats in the barn, or 25lb pails of grain to the milking parlour — four at once. Jennifer worked as an occupational therapist until she gave that up to farm full-time. She and Melvin have had goats now for more than 10 years. Unlike Jennifer, Melvin was born into dairy farming. Now in his 50s, he grew up haying the fields he still hays, and milking cows in the same parlour where he now milks goats. He has set up for milking twice a day for most of his life, rounded up cows from the fields and into the holding area in the barn and then into the parlour. He has dipped every single teat in antiseptic, clapped on the cups of the milking machine, and dipped each one again when he was done. He has sent hundreds of thousands of pounds of milk to market in his time, and mucked out the barns countless times. Farmers have an old saying: ‘I’ll keep farming until the money runs out.’ Working a small farm makes for a hardscrabble life — no time off and always something more to do, usually for very little pay. But, after the animals are hayed, grained, watered, milked and midwifed, sick ones taken care of, hooves trimmed, bales of hay stacked in the haymow or the driveway ploughed — and more often than not, the neighbour’s too — broken pipes fixed, the neighbour’s errant cows shooed back into their fields, milk processed and yogurt made, the broken hay tedder repaired midfield, lunch served and dinner prepared; if there’s any time left in the day, and amazingly there often is, Jennifer and Melvin are free to do whatever they want to with it. They answer to no one. They can make jam, build retaining walls, throw together a loaf of bread, make soap, sew scrub suits for the daughter who’s a nurse, install an outdoor wood-burning furnace. Melvin has even been known to come in from milking at 2am and pick up whatever history book he’s currently reading and not get to bed until dawn. It’s a demanding way to live, but Jennifer and Melvin love it. Small farmers have to love what they do, because they’re not in it for the money. In 1959, the English novelist and physicist C P Snow delivered a lecture called ‘The Two Cultures’ at the University of Cambridge. In it he described a divide between scientists and literary intellectuals in the world of academe. He lamented the passing of a time when educated people from all fields shared a common body of knowledge, read the same books, understood — at minimum — the basics of each other’s disciplines, and spoke at least the rudiments of the same language. Snow was saddened that, by 1959, his literary friends no longer understood the concept of acceleration or mass, and his scientist friends no longer read Dickens. 
Academics measure their year by semester and holiday breaks, farmers measure theirs by season Snow’s concern was not so much that people ought to share a culture for its own sake, but that without a shared intellectual culture, we would not be equipped to address the problems of the modern world. In particular, he worried about overpopulation, nuclear war, and the increasing gap between rich and poor nations. Ultimately, in spite of the two cultures divide, he believed that science would solve these problems, and that, for example, there was absolutely no reason that poverty would not be eliminated by the year 2000. Snow’s lecture was published as a book, Two Cultures and the Scientific Revolution (1959), which is still available today, and still widely read, at least among academics. Some of his predictions, such as the end of poverty, were so wrong that it’s easy to characterise him as terribly naive. And, 50 years on, the idea that it’s possible to be well-educated in both science and the humanities is pretty much a pipe-dream. With the exponential growth of knowledge, particularly in science and technology, it’s hard to see how it could be any other way. The super-specialisation required by an ever-expanding knowledge base does have consequences, beyond mere spats over academic funding. Public health administrators know little about medicine; postgraduates in science studies or bioethics often have very little training in science; geneticists might never have seen a living example of the animal whose genes they study; liberal arts graduates have little idea how to interpret the weekly newspaper stories touting the discovery of genes for this or that disease. Yet the dominance of science and technology over other aspects of our society in the US has become clear in recent times. And this dominance introduces an important layer of artificiality that comes between most people in the industrialised world and the more nuanced natural world in which they actually live. To most people in the industrialised world, food might as well be made by machines, not by plants and animals, so little is the communication and understanding between city and country. Snow’s idea of two cultures seems quaint in today’s university. What he saw as one divide in 1959 has grown, 50 years later, into a never-ending network of hairline fractures and unbridgeable canyons. On any campus today you can almost hear these cracks open and widen. This means that a lot of time and money gets spent on reinventing wheels. It means that people are unaware of important findings in other fields that would be useful for them to know, or that people who should be talking to each other simply don’t. And, if we trust in science and technology to solve most problems, it allows us to absolve ourselves of any responsibility for preventing crises in the first place. This is not a naturalist’s view of science, a view grounded in observation and love of nature. This is a science that can overcome the laws of nature. This is modern science. But when I leave academia behind and am stacking hay or feeding goats, the idea of Snow’s two cultures shrinks into insignificance, or even irrelevance. Viewed from so far away, those two cultures on isolated, protected university campuses (there are good reasons that academia has been called ‘the ivory tower’) merge into one, dwarfed by a much greater chasm — between the academy and the outside world. On one side, at Polymeadows Farm on the hill, there’s hard physical work with survival on the line. 
And on the other, in the valley I call home, there’s the life of the mind, where the main thing at risk is whether our research papers will be published, and where we have to force ourselves to make a ritual of going for a run, in order to remember that we’ve got a body and that it needs some care. ‘What’s Melvin up to today?’ I asked my sister. ‘He’s out tedding the fields,’ she replied. When we had this conversation I had never heard of tedding. Every word I don’t know is a window into the cultural divide, attached as it is to a set of practices about which I know little or nothing. Some words describe equipment I have seen when I drive by farm fields but never thought about — tedders, diskers, seeders — and some evoke a history as old as agriculture itself, since soil has been prepared for tillage as long as people have been planting crops, whether with hand-held sticks, crude implements pulled by animals, or mega-machines specialised to a single task. We’re all far more dependent on the toil of poorly rewarded farmers than on the vast majority of research by very well-paid scientists The vocabulary of farming is a constant indicator of the divide, but there are many other landmarks. Separate calendars, for example: academics measure their year by semester and holiday breaks, farmers measure theirs by season — planting, haying, breeding, birthing, harvesting. Or even by weather report. If it’s going to rain tomorrow, there will be no mowing of standing hay today because it won’t dry, but class will still be held. And the seasons are likely to be delimited by events that most indoor-bound workers fail to notice. My sister text-messaged me one late April to say that the barn swallows had returned that very afternoon. The sets of risks that farmers and academics are exposed to scarcely even overlap. Farming has one of the highest accident rates in America, and, to compound the problem, many farmers in the US have no health insurance. Most professors get good coverage through their employer. Society has decided we have very different economic worth, too. Small farmers, on average, earn less than half of what professors do. Farmers are at the mercy of unpredictable events beyond their control — drought, rain, animals contracting disease, the price of grain, the ever-declining price the farmer earns for produce sold at market, the cost of health insurance — while unpredictability has been fairly well eliminated from a professor’s working life. (A professor with tenure, at least.) Yet we’re all far more dependent on the toil of poorly rewarded farmers than on the vast majority of research by very well-paid scientists. I recognise that I could make similar comparisons between academics and miners, or soldiers, or athletes or musicians or visual artists. The singular difference here is that farmers provide the rest of us with sustenance. They are tied to the land and the seasons in ways that most of us can, and do, ignore. I was at the farm once when we got word that a grant we’d applied for had been funded. This is always good news to an academic. But out feeding kids, when I explained the project to my sister, she asked: ‘Why is that important?’ Not sceptical, she really wanted to know, but I had trouble explaining. Seen from the farm, C P Snow was right about two worlds that hardly intersect. But he was wrong about which ones. There are two cultures: farming — and everything else. | Anne Buchanan | https://aeon.co//essays/the-two-cultures-of-academia-are-insignificant-here | |
Consciousness and altered states | Digital technology allows us to lose ourselves in ever more immersive fantasy worlds. But what are we fleeing from? | The only people who hate escapism are jailers, said the essayist and Narnia author C S Lewis. A generation later, the fantasy writer Michael Moorcock revised the quip: jailers love escapism — it’s escape they can’t stand. Today, in the early years of the 21st century, escapism — the act of withdrawing from the pressures of the real world into fantasy worlds — has taken on a scale and scope quite beyond anything Lewis might have envisioned. I am a writer and critic of fantasy, and for most of my life I have been an escapist. Born in 1977, the year in which Star Wars brought cinematic escapism to new heights, I have seen TV screens grow from blurry analogue boxes to high-definition wide-screens the size of walls. I played my first video game on a rubber-keyed Sinclair ZX Spectrum and have followed the upgrade path through Mega Drive, PlayStation, Xbox and high-powered gaming PCs that lodged supercomputers inside households across the developed world. I have watched the symbolic language of fantasy — of dragons, androids, magic rings, warp drives, haunted houses, robot uprisings, zombie armageddons and the rest — shift from the guilty pleasure of geeks and outcasts to become the diet of mainstream culture. And I am not alone. I’m emblematic of an entire generation who might, when our history is written, be remembered first and foremost for our exodus into digital fantasy. Is this great escape anything more than idle entertainment — designed to keep us happy in Moorcock’s jail? Or is there, as Lewis believed, a higher purpose to our fantastical flights? Fans of J R R Tolkien line up squarely behind Lewis. Tolkien’s Lord of the Rings (1954) took the fantasy novel — previously occupied with moralising children’s stories — and created an entire world in its place. Middle Earth was no metaphor or allegory: it was its own reality, complete with maps, languages, history and politics — a secondary world of fantasy in which readers became fully immersed, escaping primary reality for as long as they continued reading. Immersion has since become the mantra of modern escapist fantasy, and the creation of seamless secondary worlds its mission. We hunger for an escape so complete it borders on oblivion: the total eradication of self and reality beneath a superimposed fantasy. Language is a powerful technology for escape, but it is only as powerful as the literacy of the reader. Not so with cinema. Star Wars marked the arrival of a new kind of blockbuster film, one that leveraged the cutting edge of computer technology to make on-screen fantasy ever more immersive. Then, in 1991, with James Cameron’s Terminator 2: Judgment Day, computer-generated imagery (CGI) came into its own, and ‘morphing’ established a new standard in fantasy on screen. CGI allowed filmmakers to create fantasy worlds limited only by their imaginations. The hyperreal dinosaurs of Jurassic Park (1993) together with Toy Story (1995), the first full-length CGI feature, unleashed a tidal wave of CGI blockbusters from The Matrix (1999) to Avatar (2009). The seamless melding of reality and fantasy that CGI delivers has transformed our expectations of cinema, and fuelled a ravenous appetite for escape. The world is not made of atoms. It is made of the stories we tell about atoms Video games might have seemed an unlikely escapist technology in the early days of Pong and Pac-Man. 
It takes a mighty effort of will to see the collection of pixels hovering at the bottom of the screen in Space Invaders as the last star-fighter of mankind. But the working of Moore’s Law — which holds that computing power doubles every two years — meant that, by the early 1990s, video games were jockeying with film to lead the escapism industry. That decade also saw the first waves of cyber-utopianism, although the early promise of virtual reality headsets and internet multi-user domains failed to materialise. Instead, it was our thumbs that did the talking through the control pads of home games consoles with high-definition screens. Super Mario Bros, Sonic the Hedgehog and Lara Croft as Tomb Raider helped to transform video games from childish obsession to mainstream cultural phenomenon. Today, video-game franchises such as Halo, Grand Theft Auto and Call of Duty power an industry worth an estimated $65 billion globally in 2011. But money is only the tip of the iceberg when it comes to measuring the impact of gaming on contemporary culture and society at large. The American video-game designer and researcher Jane McGonigal estimates that there are 500 million ‘virtuoso gamers’ (people who have spent more than 10,000 hours in game worlds) active today. She argues that this number will increase threefold over the next decade: around a fifth of the world’s population will spend as much time in digitally generated worlds as they do in full-time education. We’re embarking on a daring social experiment: the immersion of an entire generation into digitally generated escapist fantasies of unprecedented depth and complexity. And the most remarkable aspect of this potential revolution is how little consideration we are giving it. As the technology of escape continues to accelerate, we’ve begun to see an eruption of fantasy into reality. The augmented reality of Google Glass, and the virtual reality of the games headset Oculus Rift (resurrected by the power of crowd-funding) present the very real possibility that our digital fantasy worlds might soon be blended with our physical world, enhancing but also distorting our sense of reality. When we can replace our own reflection in the mirror with an image of digitally perfected beauty, how will we tolerate any return to the real? Perhaps, in the end, we will find ourselves, not desperate to escape into fantasy, but desperate to escape from fantasy. Or simply unable to tell which is which. Some might argue we are already there. In the sci-fi visions of the American futurist Ray Kurzweil and other prophets of post-humanism, we will upload our minds to silicon substrates, there to be accelerated into super-intelligence and the looming technological singularity. It’s a vision of religious communion now widely parodied as ‘The Rapture of the Nerds’. And yet the digital technologies of today are just the latest in a long progression of tools for the expression of the imagination. We are escaping, not into other worlds, but into imagination. The question is, what are we escaping from? Is it reality? (Whatever that is.) Philosophers have argued for millennia about the nature of reality, an argument that can be broadly classified within two schools of thought: materialism and idealism. Materialist philosophies contend that reality is composed of matter and energy, and that all observable phenomena, including mind and consciousness, arise from material interactions. 
For many of us, materialism is the only theory of reality that can or should be given any credence: it underlies all of the scientific and rationalist perspectives prevalent in the world today. What is reality made of, if not atoms, or their constituent particles? Idealists, meanwhile, take their cues elsewhere. ‘The universe is made of stories, not of atoms,’ said the American poet Muriel Rukeyser in 1968. Her words are a powerful expression of a world-view that materialism cannot accommodate. Idealism argues that reality is constructed by the mind. Consciousness does not arise from material interactions: it is universal, and from consciousness arise all of the material phenomena in the universe, including atoms. The world is not made of atoms. It is made of the stories we tell about atoms. In material philosophy, the act of escaping into our imagination is at best a temporary retreat from reality into fantasy. But in the idealist view, the same act of imagination can reshape our reality. In the modern world, the argument between materialism and idealism has manifested most powerfully in the opposition between science and religion. Science currently has the upper hand, less because of abstract philosophical arguments than because the huge material benefits of science and technology — tangible products of the Age of Reason — have established a materialist world-view as the de facto belief of almost all citizens of the technologically developed world. Moreover, the Western culture of materialist consumer capitalism is subsuming, through globalisation, all the diverse cultures of our world. The dominant global monoculture is a culture of materialism. We see in the New Atheist arguments of Richard Dawkins and Daniel Dennett a triumph of materialist philosophy so complete that even its opponents argue on its terms. God has no place in materialism. There is no higher being of matter and energy responsible for creating matter and energy. The religious fundamentalists who attempt to answer the sceptical claims of New Atheism are themselves rooted in a materialist world-view, leaving a rearguard of idealists to argue, weakly, that God is just a very old word for consciousness and imagination. Idealism has fared better in the arts, where existentialism and postmodernism have attempted to reinstate imagination at the centre of reality. The French philosopher Jacques Derrida’s system of deconstruction, for example, returns us to the idea of reality as a construct that must be broken down to its basic building blocks in language before it can be understood. But while such ideas have dominated academic studies of the arts and humanities, they hold little sway beyond those strongholds. The materialist society of 1980s Britain had a hierarchy, and we were the bottom-rung It is in the popular culture of escapist fantasy that idealism has been reborn, and imagination has re-established its footing. When the American comic book writer Stan Lee was looking for ideas for Marvel stories that would ignite the imagination of kids in 1960s America, the gods of ancient myth and legend became a natural source of inspiration. Alongside resurrected Norse gods such as Thor and Loki, Lee created a new pantheon of super-powered heroes to entertain his audience. Decades later, the modern gods Spider-Man, Iron Man, Wolverine and Captain America dominate the silver screen, attracting audiences in their millions to worship in the darkened temples of cinemas.
And when the director George Lucas set out to make Star Wars, he built it around the American mythographer Joseph Campbell’s ‘monomyth’ — a distillation of the essential values of thousands of religious stories from around the world. There is a long tradition of British writers for whom the resurrection of spiritual myths forms the heart of their own ‘mythopoeic’ work. C S Lewis’s mythology was rooted in Christian allegory, while Tolkien delved back further into the mythic history of the British Isles to a landscape of magic and witchery that has since inspired J K Rowling. Thanks to Harry Potter, there is a generation of children and young adults for whom the ancient rituals of magic are as much a part of life as video games. One of this summer’s most anticipated novels was Neil Gaiman’s The Ocean at the End of the Lane, billed as a fairy tale for grown-ups. From his The Sandman comic-book series to his novel American Gods (2001), Gaiman has made something of a specialism of refitting the world’s mythologies to contemporary culture. There’s a deep irony in the fact that our rational, secular society, driven by science and technology, is emptying out its churches only to reconstruct them as cinemas. Replacing the ‘good book’ with films about Harry Potter and hunger games; reconstructing the inner worlds of our imagination — once the realm of prayer and ascetic meditation — inside the digital domain of computers: it seems that no matter how hard we try to convince ourselves that reality is only material, we continue to reach for the ideal forms that lie beyond. Are we simply recasting age-old delusions for the modern era? The reality I was escaping from as a child was a housing estate in the London commuter belt. A cluster of low-rise tower blocks and prefab houses thrown up, like hundreds of other sink estates around the country, for the swelling population of post-war Britain. By the 1980s, estates had become home to the nation’s burgeoning underclass. Work was disappearing to other countries, in a process of globalisation that is still accelerating today. The social policies of the welfare state, public housing, education and health, which had helped to make Britain more equal, were being unwound by a Thatcherite government whose only interest in the poor was as a flexible workforce. None of which was easily knowable to the youth of Britain’s housing estates. But we could see a simpler truth. On our TV screens and in the shopping centres was the abundant material wealth of consumer capitalism: but it wasn’t for us. Just a few streets away were the suburban homes of middle-class workers. Those weren’t for us either. And the gated communities of the truly wealthy, their country clubs and luxury hotels, were as good as invisible. The hints of them that we did manage to see made it abundantly clear that these weren’t for us either. The materialist society of 1980s Britain had a hierarchy, and we were the bottom-rung. From the perspective of the underclass, material reality is bleak. You’re a survivor of blind evolution, stranded on a muddy rock under the harsh glare of a nuclear sun. Beyond that is an infinite universe of inert matter, dust and devastating radiation that is neither for nor against you, but simply unaware of your existence. There is no God. There is no heaven, or eternal reward. There is only another shift in the factory, or the call centre, or McDonald’s — if you’re lucky. 
At its determinist extreme, materialist philosophy enforces a strikingly rigid and oppressive social hierarchy. Faced with your own inferiority in this hierarchy, why wouldn’t you plunge into fantasy? Invest your hopes in the teleporter caprices of reality TV, where faux victory in The X Factor or The Apprentice can raise you to the neon-lit stratosphere of celebrity. Light up a spliff and switch on your Xbox. Lose yourself in the colourful pages of comic books. Fulfil your dreams of being beautiful, wealthy, heroic — the centre of a universe built just for you! — and ignore the world beyond your bedsit, in which you are underpaid, unloved and anonymous. But all the while these escapist fantasies are fed by an industry that seeks merely to commodify our dreams and then sell them back to us, stripped of meaning, emptied of the true potential of human imagination. We remain in jail, only dreaming of freedom. The real lesson that poverty teaches is that our society is shaped for those with power. Yet in this, paradoxically, there might still be some hope for our great escape, because all escapism takes us to worlds created through an act of imagination. Hour after hour, we practise what it means to be creators of our own worlds: through the empowered actions of heroes such as Luke Skywalker, or The Terminator’s Sarah Connor; through the creation of our own heroes in games such as World of Warcraft; or even by exploring our own God-like creativity in SimCity or Minecraft. Do our fantasy worlds, then, help us to escape, not from reality, but from our own limitations? Is it possible that we might bring back from our escapist adventures a renewed sense of our own power and creative potential as human beings? In a world that demands ever more of both, this could be the highest function of escapism, and the calling that we should demand of it. | Damien Walter | https://aeon.co//essays/does-fantasy-offer-mere-escapism-or-real-escape | |
Economics | The cosy coastal world of pretend farmers’ markets bears no resemblance to the actual back end of America | Every time you set foot in a Whole Foods store, you are stepping into one of the most carefully designed consumer experiences on the planet. Produce is stacked into black bins in order to accentuate its colour and freshness. Sale items peek out from custom-made crates, distressed to look as though they’ve just fallen off a farmer’s truck. Every detail in the store, from the font on a sign to a countertop’s wood finish, is designed to make you feel like you’re in a country market. Most of us take these faux-bucolic flourishes for granted, but shopping wasn’t always this way. George Gilman’s early A&P stores are the spiritual ancestors of the Whole Foods experience. If you were a native of small-town America in the 1860s, walking into one of Gilman’s A&P stores was a serious culture shock. You would have stared agog at gaslit signage, advertising, tea in branded packages, and a cashier’s station shaped like a Chinese pagoda. You would have been forced to wrap your head around the idea of mail-order purchases. Before Gilman, pre-industrial consumption was largely the unscripted consequence of localised, small-scale patterns of production. With the advent of A&P stores, consumerism began its 150-year journey from real farmers’ markets in small towns to fake farmers’ markets inside metropolitan grocery stores. Through the course of that journey, retailing would discover its natural psychological purpose: transforming the output of industrial-scale production into the human-scale experience we call shopping. Gilman anticipated, by some 30 years, the fundamental contours of industrial-age selling. Both the high-end faux-naturalism of Whole Foods and the budget industrial starkness of Costco have their origins in the original A&P retail experience. The modern system of retail pioneered by Gilman — distant large-scale production facilities coupled with local human-scale consumption environments — was the first piece of what I’ve come to think of as the ‘American cloud’: the vast industrial back end of our lives that we access via a theatre of manufactured experiences. If distant tea and coffee plantations were the first modern clouds, A&P stores and mail-order catalogues were the first browsers and apps. In the summer of 2011, I found myself in Omaha, Nebraska, a major gateway to the American cloud, having a surreal conversation over lunch with Gary, a software engineer, and Harpreet, a topologist. Theirs is a calling I could not make up if I tried: maintaining Smalltalk applications for Northern Natural Gas, which operates a sprawling 14,000-mile network of pipelines across the Midwest. Smalltalk, something of an indie darling among programming languages, is to mainstream languages such as Java what J R R Tolkien’s Elvish is to English. You don’t actually expect to encounter it in real life, let alone in the context of critical production infrastructure, any more than you expect to hear Elvish spoken in Congressional debates. The Hamiltonian makeover turned the isolationist, small-farmer America of Jefferson’s dreams into the epicentre of the technology-driven, planet-hacking project that we call globalisation But that is the sort of unexpected juxtaposition you routinely encounter when you explore the interior of the American cloud. It is a world as counterintuitive as shopping at Whole Foods is intuitive.
It’s a space where naked technological realities accumulate behind the scenes in order to enable American life. The massive datacentres that have recently retreated into the heartland of the US are merely the latest additions to this orchestra of scaled technologies. Together, these systems constitute a single, intricately interconnected entity, woven from a thousand particular technologies that have made the long journey from garage to grid. I had come to Omaha to explore the American cloud, having succumbed to the Tocquevillean conceit, peculiar to the foreign-born, that one can read America by travelling through it. And so, here I was, discussing continent-spanning infrastructure with hyper-specialised geeks, in a region we romantically associate with the homesteading generalists of historic small-town America. At least in Omaha, such incongruities are readily apparent. In coastal America, where schoolchildren sometimes botch math problems about milk production because they assume a five-day week for cows, the incongruities are masked by the theatre of the shopping experience. But in reality, the cosy coastal world of simulated farmers’ markets and happy cows bears no resemblance to the actual back end of America. The American cloud is the product of a national makeover that started in 1791 with Alexander Hamilton’s American School of economics — a developmental vision of strong national institutions and protectionist policies designed to shelter a young, industrialising nation from British dominance. Hamilton’s vision was diametrically opposed to Thomas Jefferson’s competing vision based on small-town, small-scale agrarian economics. Indeed, the story of America is, in many ways, the story of how Hamilton’s vision came to prevail over Jefferson’s. By the early 19th century, Hamilton’s ideas had crystallised into two complementary doctrines, both known as the ‘American system’. The first was senator Henry Clay’s economic doctrine, based on protectionist tariffs, a national bank, and ongoing internal infrastructure improvements. The second was the technological doctrine of precision manufacturing based on interchangeable parts, which emerged around Springfield and Harpers Ferry national armouries. Together, the two systems would catalyse the emergence of an industrial back end in the country’s heartland, and the establishment of a consumer middle class on the urbanising coasts. But it would take another century, and the development of the internet, for the American cloud to retreat almost entirely from view. By the 1880s, the two American systems had given rise to a virtuous cycle of accelerating development, with emerging corporations and developing national infrastructure feeding off each other. The result was the first large-scale industrial base: a world of ambitious infrastructure projects, giant corporations and arcane political structures. Small farms gave way to transcontinental railroads, giant dams, Standard Oil and US Steel. The most consequential political activity retreated into complex new governance institutions that few ordinary citizens understood, such as the Interstate Commerce Commission, the Federal Reserve, and the War Industries Board. Politics began to acquire its surreal modern focus on broadly comprehensible sideshows. 
Your Kindle is a product, a store, a shopping cart, and a payment system all rolled together Over the course of two centuries, the Hamiltonian makeover turned the isolationist, small-farmer America of Jefferson’s dreams into the epicentre of the technology-driven, planet-hacking project that we call globalisation. The visible signs of the makeover — I call them Hamiltonian cathedrals — are unprepossessing. Viewed from planes or interstate highways, grain silos, power plants, mines, landfills and railroad yards cannot compete visually with big sky and vast prairie. Nevertheless, the Hamiltonian makeover emptied out and transformed the interior of America into a technology-dominated space that still deserves the name heartland. Except that now the heart is an artificial one. The makeover has been so psychologically disruptive that during the past century, the bulk of America’s cultural resources have been devoted to obscuring the realities of the cloud with simpler, more emotionally satisfying illusions. These constitute a theatre of pre-industrial community life primarily inspired, ironically enough, by Jefferson’s small-town visions. This theatre, which forms the backdrop of consumer lifestyles, can be found today inside every Whole Foods, Starbucks and mall in America. I call it the Jeffersonian bazaar. Structurally then, the American cloud is an assemblage of interconnected Hamiltonian cathedrals, artfully concealed behind a Jeffersonian bazaar. The spatial structure of this American edifice is surprisingly simple: a bicoastal surface that is mostly human-habitable bazaar, and a heartland that is mostly highly automated infrastructure cathedrals. In this world, the bazaars are the interiors of cities, forming a user-interface layer over the complex tangle of pipes, cables, dumpsters and loading docks that engineers call the last mile — the part that actually reaches the customer. The cities themselves are cathedrals crafted for human habitation out of steel and concrete. The bazaar is merely a thin fiction lining it. Between the two worlds there is a veil of manufactured normalcy — a studiously maintained aura of the small-town Jeffersonian ideal. To walk into Whole Foods is to recognise that the Jeffersonian bazaar exists in the interstices of the cloud rather than outside of it. Particular clouds might have insides and outsides — smartphone apps live outside, datacentres live inside; gas stations live outside, oil supertankers live inside — but the cloud as a whole has no meaningful human-inhabited outside. It subsumes bicoastal America rather than being book-ended by it. Between the first supermarket chains that replaced small-town grocers, and Whole Foods, the special effects have improved but what we inhabit is still recognisably a simulacrum of a Jeffersonian past, not the real thing. To pierce the veil, all you need to do is wander around to the loading docks. The Jeffersonian bazaar is no seamless matrix. The modern megacity is arguably the most impressive Hamiltonian cathedral of all, not just because of its scale and complexity, but because it manages to fool us into thinking it is merely a scaled-up small town. We are only now beginning to appreciate just how qualitatively different the modern metropolis is from the village, town or small city. For bicoastal Americans, these megacities are the only Hamiltonian cathedrals they ever see up close, during landings and take-offs. Their flight paths over flyover country are far too high for them to see much else. 
The veil of manufactured normalcy exists in the workplace as well, but it is necessarily thinner While few Americans see much of Hamiltonian America, they do interact with it extensively, through two distinct interfaces. The first interface connects us to the Hamiltonian world’s core abstractions: the firm, the market and the law, identified as the central abstractions of modern life by the economist du jour Ronald Coase. The mediating objects of this interface are transactional instruments: dollars, votes, contracts, patents, judicial precedents. The second interface connects us to the Hamiltonian cathedrals themselves. The mediating objects here are the metaphor-laden user-experience widgets of everyday life: buttons, shopping carts, light switches, steering wheels, faucets, flush-handles, and trash cans. Some of our behaviours, such as signing up for direct deposit of paychecks, paying with credit cards, scanning bar codes, and accepting Terms of Service agreements on websites, involve primarily the first interface. Others, such as shopping, using gas stoves, boarding planes or watching television, involve primarily the second. Modern technologies — think check-ins, coupons, in-app purchases, gamified interactions, and star ratings in your favourite app — weave both interfaces into one. Your Kindle is a product, a store, a shopping cart, and a payment system all rolled together. Like A&P’s George Gilman before them, modern marketers must synthesise a narrative out of thousands of instrumental interactions with distant artificial realities, humanising them for incorporation into the Jeffersonian bazaar. The process is not cheap, which explains why Whole Foods offers a far more compelling bazaar illusion than Costco. Those who want a more seamless illusion must pay more. The heartland’s Hamiltonian cathedrals are bare-metal techno-cultural spaces. By contrast, the Jeffersonian bazaar is, to a first approximation, a purely cultural space where we can remain human in some unreconstructed, romantic sense of the word. It’s a space within which we strive to render technology invisible to our appreciative senses, while retaining its instrumental capacities. These two interfaces, the conceptual and the metaphoric, both connect us to, and separate us from, the large-scale systems that provision our lives, allowing us to have our technological cake and eat it, too. A touch of humour or irony is sufficient to satisfy even the most refined hipster sensibilities And so this is how we inhabit the American cloud in the modern age. We finance the Jeffersonian consumption of our evenings and weekends through participation in Hamiltonian production models during our weekdays. Of course, the veil of manufactured normalcy exists in the workplace as well, but it is necessarily thinner. And increasingly, for those information and service workers who live in the coffee shops at the heart of the bazaar, there is no need to go there anyway. With each passing year, the Jeffersonian pastoral simulation acquires more precision and refinement, if not historical accuracy. In The Theory of the Leisure Class (1899), the economist Thorstein Veblen described the birth of this artifice in the form of the pastoralised estates of the rich. Today, we are observing the completion of the process and its gradual extension to the non-rich. Even among the poor, visceral encounters with back-end realities like pink slime are vanishingly rare. 
The arms race between technological forces drawing us out of Eden, and the normalisation forces striving to return us to a simulation of it, is entering its end-game. Of course, we are not entirely unaware of the factory-farming world of food processing firms such as Tyson Fresh Meats and Cargill Foods. We have a dim awareness that our civilisation runs on undocumented Mexican labour, not to mention seven-day weeks for cows and packed feedlots. This is perhaps why the 2013 Super Bowl commercial for a Ram pickup truck — a cynical and anachronistic homage to small farmers set to a 1978 speech entitled ‘So God Made a Farmer’ — offended so many viewers, including many who know little about the reality of factory farming. The narrative was just a little too tasteless to be accepted into the Jeffersonian bazaar. Yet when presented with just a bit more taste, we swallow such narratives without much questioning. A touch of humour or irony is sufficient to satisfy even the most refined hipster sensibilities. We do not require our marketing narratives to be true. We merely require them to convince us of our own sophistication. Pop culture plays an important role in the concealment of Hamiltonian realities. Patterns of life in the Jeffersonian bazaar are still derived, via layers of metaphor and symbolism, from pre-industrial realities. But since the illusion is not perfect, we require actors on television to complete the interpretation for us, mediating our relationships to the theatres we inhabit. As a result, American pop culture retains long-term memories of Jeffersonian historical epochs, endlessly repurposing the archetypes of those human-scale eras into contemporary stories. By contrast, Hamiltonian epochs quickly fade from collective memory. Periods of technological equilibrium are punctuated by periods of rapid change, creating technological epochs The brief story of the HBO series Deadwood (2004-06), a western set in the 1870s showcasing a Jeffersonian local polity of gunslingers and prospectors, is now classic television. Tourists still swarm to the real town of Deadwood, South Dakota, making their way from Wild Bill Hickok’s grave site to the patch of kitschy Americana that is its modern downtown. But few venture the three miles to Lead, home to the Homestake Mining Company, founded by the villain of the Deadwood TV series, George Hearst. The mine continued to produce gold for shareholders until 2002, but did not merit its own TV show. I was no exception. Having made my way west from Omaha to Deadwood via North Platte, home to Union Pacific’s Bailey Yard — the largest railroad classification yard in the world — I too decided to skip Lead, and headed instead towards Cody, Wyoming. Cody was home to Buffalo Bill Cody, who transformed the tragedy of Wild Bill Hickok’s West into the farce of Buffalo Bill’s Wild West shows of the 1880s and beyond — one of the first major pieces of theatre to be incorporated into the fledgling Jeffersonian bazaar. Life in the bicoastal Jeffersonian bazaar simply does not prepare you for the sights, sounds and proportions of the Hamiltonian heartland. On the way west to Cody, drawn by the sight of a power plant that loomed menacingly through rain and mist above Interstate 90, I stopped in Gillette, Wyoming, a town of fewer than 29,000 people that bills itself the ‘energy capital of the nation’. 
Having got lost trying to find my way to the power plant, I ended up at a public viewing area overlooking the Eagle Butte coal mine seven miles out of town. The viewing area features a massive excavator bucket and a tire taller than most humans. A single tire used in mining, I later discovered, can cost $40,000-$70,000. A full set can cost more than an inexpensive home. Contemplating giant tires is one way to appreciate that you have entered a different world. The Hamiltonian heartland is a land of Brobdingnagian and Lilliputian proportions. It is a land of cryptic and inaudible conversations between radio-frequency ID scanners and passing railroad cars, and records too numerous for the Guinness Book to track. It is also a land of millions upon millions of serial numbers: on doors, pieces of equipment, cowlings, pipes, pylons, and an ocean of smaller technological artefacts so vast that, in economics classrooms, they have to be collectively obscured under the label widgets. If the proportions of the Hamiltonian heartland defy our spatial intuitions, its pace of evolution defies our temporal intuitions. In the time it takes for the heartland to change significantly — 50 to 70 years on average, in the case of major technologies such as the railroad — human-scale heroes typically grow old and die. But change does occur. Periods of technological equilibrium are punctuated by periods of rapid change, creating technological epochs. Each such epoch creates a new layer of visible changes in the Hamiltonian heartland, and corresponding changes in institutions. These epochs are defined by their abundances and scarcities. Over the past half century, we’ve been learning how to stop wasting oil and how to start ‘wasting bits’, as Alan Kay, one of the inventors of Smalltalk, put it in the 1970s. That knowledge is now transforming the heartland. From the Smalltalk-speaking computers of Northern Natural Gas to the Union Pacific control room operating the gigantic railroad switchboard that is Bailey Yard, to the missile silos dotting the badlands of South Dakota, our Hamiltonian heartland is being slowly transformed by software and energy-efficiency technologies. We’ve gone from neighbourhood farms to five-day cows to a world where horsemeat and beef can get accidentally mixed up in the meat cloud But while large-scale shifts can create drastic changes, they are not clean breaks from the past. Every shift also leaves a good deal unchanged. We can detect major shifts when Hamiltonian cathedrals move or transform at the bare-metal level. The rise of steam, for instance, led to a gradual drift of factories away from water power sources, and the decline of mill towns. It also triggered a simplification in the internal structure of factories, as the belts and pulleys that linked the machines to the mill vanished, to be replaced first by steam engines and pipes, and eventually by wiring and electric motors. It was a shift similar to the one from mainframe to personal computing. Once a Hamiltonian technological layer settles, its Jeffersonian interfaces also settle. Firms, markets and the law stabilise, the physical last mile hardens, and the little patch of interaction metaphor in your living room becomes familiar. For a while, there is peace. We buy our iPhones, trade somnolent stocks for more exciting ones, acquire new instrumental skills for production and consumption, and swarm into new patterns of life and work. 
With each new technological layer, and each evolution of Jeffersonian interfaces, more Hamiltonian realities recede into the cloud. Serial numbers, perhaps the most resistant traces of Hamiltonian realities within the Jeffersonian bazaar, are finally succumbing. We no longer remember any telephone numbers besides our own. IP addresses have given way to domain names. And even those are vanishing into the memories of apps. This process of retreat is not new. We’ve gone from neighbourhood farms to five-day cows to a world where horsemeat and beef can get accidentally mixed up in the meat cloud. But the retreat of numbers completes the veil in a deep way. ‘Software is eating the world,’ as Netscape’s co-founder Marc Andreessen put it in The Wall Street Journal in 2011, allowing us to seal the last reality leaks. Our metaphors struggle to keep up not just with changing realities, but with the growing intricacy of the interfaces to those realities. The metaphor of a passive and translucent veil separating the human and technological worlds has been transformed into the actuality of skeuomorphic linings obscuring bare metal. Today’s city dwellers rarely realise just how much steel surrounds them wherever they go. The interfaces have thickened and acquired intelligence in proportion to our desire to manipulate Hamiltonian reality. To the Jeffersonian sensibility, Hamiltonian cathedrals are often little more than infrastructure porn. But to establish a direct, appreciative relationship with these technologies, unmediated by instrumental metaphors and currencies of interaction, you have to walk among them yourself. You have to experience train yards, landfills, radio-frequency ID-tagged seven-day cows and other such backstage oddities in the flesh. Thomas Edison thought the gramophone would be primarily used as a dictaphone; instead, it revolutionised music But it is not sufficient to simply get into a car and drive into flyover country. You need a literate — and literary — perspective, to appreciate what you see, since there are no marketers around to do the work for you. My own perspective is something like an ironic religion of technological mindfulness, one whose central perceptual act is the projection of a cryptic agency onto the whole that is neither malevolent, nor benevolent, but merely something to be decrypted. In this act of ritual decryption, it is useful to begin with the assumption that decrypted realities are unlikely to revolve around human concerns. The technological might have originated in the human, but its essence is neither human nor transhuman. It is simply non-human. There is no necessary relationship between what technology does and what humans want. Thomas Edison thought the gramophone would be primarily used as a dictaphone; instead, it revolutionised music. Lee De Forest thought the ‘wireless’ would take high culture to the masses; instead, it created popular music forms that ended up marginalising classical music. The internet was conceived as a communication system that could survive nuclear war; today we use it to trade kitten pictures. For me, the ritual pilgrimage into the heartland is an opportunity to reconnect not with the bare-metal cloud in the abstract, but with a thousand particular clouds, each with its own visible motif. Sometimes we can name the motifs that attract our unsupervised attention: pylon, container, landfill ventilation pipe, big tire. Other times, we can only pause and remark to ourselves: ‘That’s an interesting-looking widget. 
I wonder what it’s for?’ My pilgrimage into the Hamiltonian heartland ended in Gillette. After contemplating the giant $40,000 tire, I managed to find my way to the power plant that had tempted me off the highway. It turned out to be the Wyodak steam-electric plant, ‘America’s largest air-cooled power plant’. From Gillette, I made my way to Cody, and on through Yellowstone National Park to the home of a wealthy friend in Jackson Hole, perhaps the most complete Jeffersonian bazaar in America — a Whole Foodsian Eden so flawless that only the seriously rich can afford to live there. Someday, technological utopians hope, we might all be able to live somewhere like Jackson Hole. That is, if our Hamiltonian cathedrals don’t come crashing down on us first. | Venkatesh Rao | https://aeon.co//essays/america-still-has-a-heartland-it-s-just-an-artificial-one | |
Childhood and adolescence | Imaginary friends can seem to have a life of their own. Where do they go when their human creators grow up? | No one in my family knows where those names came from, least of all me. Temmy was a boy, Clugga a girl, and I remember them now in the way you might remember beloved cousins not seen since childhood. They were my friends, constant companions. I can’t tell you what they looked like, but they were vivid to me then. I knew that I’d made them up, and yet they were emotionally vital and cognitively alive, physically absent but psychologically present. As a child, you don’t question your imagination; often, it seems to be both the safest and most interesting place of all. My mother thinks Temmy and Clugga were born of self-defence (I was the youngest of four) and a need for company (my nearest sibling was six years older). They also contributed to an ingenious ruse to avoid practising the violin. I started learning the instrument young, aged three, and insisted that my imaginary friends learnt too, so my mother and I would sit in respectful silence as one after another practised their scales. My mother, also the youngest of four, was almost irrationally patient and forgiving in the face of their demands. She made space for them in the car, and sympathised if one was sat on or accidentally left behind. As she tells me this now, I am bewildered. She was colluding with insanity, surely, sustaining an unreality that must have aggravated my family beyond all reason. There were moments, she says — as she sat listening to one of them play a violin piece only for me to say that they’d made a mistake and must repeat it — that she wondered if she might be going mad. But then she’d had an imaginary friend too, which probably helped. When she was little, a small devil called Tarquin lived beneath her bed. He was half-demon and half-protector — my mother was terrified of him, jumping into bed to avoid his clutches, but she also knew that he guarded her as she slept. My Temmy and Clugga weren’t nearly as interesting or ambiguous — more like bland ego extensions, willing and obedient minions who could be corralled into whatever laborious or mundane fantasy I had concocted that day. As far as I can remember, there were a lot of meals, which included protracted displays of laying the table and clearing up afterwards. The potential for an imaginary paradise, free of chores, evidently passed me by. Not long ago, I took part in some research into imaginary friends undertaken by the psychologists Karen Majors and Ed Baines at the Institute of Education in London. There was something a little unnerving about filling in a questionnaire, at the age of 31, on the subject of Temmy and Clugga. Why did I think I’d created them? How old was I when they arrived? How long did they last? Had there been any disadvantages to having them? I worry about the quality of the guidance offered by Twingle Twanx (but perhaps I malign the wise young Martian) Most of these I had no definite answer to: they were just there, an unquestioned, unexplained fact. But I was clear on the disadvantages — there were none. At the time, perhaps I had the odd embarrassing episode of being caught mid-invisible-conversation by someone beyond my family who was yet to be inducted into its imaginary branch, but I don’t remember any humiliation, only the happiness of always having friends around. 
Now, I feel almost proud of them, in the way that it’s smugly pleasing to look back at your childhood and detect something vaguely exceptional about it. Yet there’s nothing remotely exceptional about imaginary friends — many children have them, and they generally conform to common patterns. According to Majors and Baines’s survey of 594 adults who said they’d had imaginary friends, 81 per cent had lost them by the age of 10 (this made me worry a bit for those who doggedly carried theirs into secondary school). The vast majority of imaginary companions came in human form (68 per cent), though there were some animals too (15 per cent) and a small number, of whom I immediately became jealous, who had friends with magical powers (seven per cent). Inevitably, it’s the snippets of anecdote in the research that bring it to life. Like the lucky child who had been friends with Twingle Twanx, a little Martian. Or the one who had invented a varied cast including Cowmitt, a grandfather; Kerry, a young girl with a strange accent; and Budda, ‘a young Indian boy in a smart suit’. Only a tiny minority had friends with negative characteristics, who teased them or caused trouble. Most were simply companions, there to help populate their pretend worlds, play games or offer comfort. A smaller and more poignant proportion helped their creators overcome loneliness (41 per cent), and escape reality (38 per cent), or provided guidance (23 per cent), though I worry about the quality of the guidance offered by Twingle Twanx (perhaps I malign the wise young Martian). But memory distorts. As adults, our understanding of our imaginary friends and the reasons for their existence is inevitably shaped by the broader feeling we have about our childhoods. A distant recollection is very different to the account of a child still in the thick of the imaginative experience. I remembered that a former colleague had once told me about his son, Joe, who’d introduced a cast of invisible characters to their family. One Friday afternoon, Joe and I spoke on the phone:
Joe: There’s a baby called Rocky and he has glasses. There is someone called Tom and there’s someone called Mary, and they’re married, and they’ve got Rocky. They live in, well I made up, a planet called Sweek.
Me: What’s it like there?
Joe: Very hot. All the time. There are countries and lots and lots of factories all round it.
Me: What do the factories make?
Joe: All sorts of things. Scooters, bikes, even food.
Me: And what do Mary, Tom and Rocky do?
Joe: Tom is a shopkeeper and Mary is a photographer. Rocky just goes to school.
Me: Do you go and see them or do they come and see you?
Joe: I go there.
Me: So how do you get there?
Joe: I sort of think that Sweek is actually my room, because I can’t go in a spaceship and go to Sweek, 1) because I don’t have one and 2) because Sweek isn’t real, it’s just my imagination.
There was something about that numbered list, and Joe’s matter-of-fact tone — I could almost hear him sigh as he patiently explained the mechanics of his imagination to a novice — that revealed a wonderfully blunt pragmatism about a fantastic situation. Joe, aged six, is fully aware that Tom, Mary and Rocky aren’t real, that he is their author, and that his room — his private space — is where they come to life. But they’re clear in his mind: Tom and Rocky both wear glasses. Tom has orange hair; Rocky’s is black and spiky; and Mary, the photographer, has hair down to her shoulders. 
Sweek has its own language, Screek (Joe taught me to say Yes and No — Lute and Do) and some magical properties: ‘Like, sometimes, people who are really special, like the vicar and me, they can make a force field that stops bad people going away.’ I never found out who the vicar was. Dr Majors has been investigating the subject of imaginary friends for some years now, interviewing both children and adults. Her mother told her that, when they were children, she and her sister would fight over who’d sit next to the imaginary tiger at the dinner table, though she has no memory of him. There is, she says, no shared characteristic of the children who invent companions — apart from the capacity for rich imagining. Many are like Joe, whose friends are principally for fun and entertainment, but she cites examples of children with speech and learning difficulties, Down syndrome and autism (who are often assumed not to have significant capacity for imagination or empathy), as all having imaginary friends. The reasons that children create imaginary friends are as varied as the children themselves. Majors refers to the work of Donald Winnicott, the mid-20th century psychoanalyst and paediatrician, for a potentially unifying explanation. Winnicott developed the theory of the ‘transitional object’ — the comfort blanket or toy that reassures a child when she is alone or trying to sleep. Imaginary friends, it is thought, are part of the same family — they help children to find a sense of themselves, and accompany them through crucial years of development and adjustment as they become their own individual beings, separate from their mother. They are by definition temporary: there to serve a purpose, and then discarded. A work of fiction, then, can only be successful if it is animated by some living energy distinct from the controlling hand of the author In her interviews with children, Majors noticed a marked difference between those, like Joe, who were still in the midst of their imaginary friendships and those who had recently left them behind. The newly emerged didn’t want to discuss their imaginary friends at all — not necessarily out of embarrassment, but because they were simply no longer around, and therefore irrelevant. The experience, as she put it, had gone ‘underground’ — the sad moment that signals the onset of self-consciousness, when the imagination starts to limit itself, sated by reality. The child simply didn’t need the imaginary friend any more — she’d probably made more real friends at school, and felt more confident out in the world. So what happens to the imaginary friends? They are abandoned, frozen in time, consigned to memory and anecdote. But our imagination doesn’t die with them. A study by the University of Oregon psychologists Marjorie Taylor and Candice M Mottweiler in the American Journal of Play from 2008 asked children with imaginary friends where the friend went when he or she was not with the child. ‘He goes into my head,’ said a five-year-old boy. And another: ‘He goes in my mind and the world in my mind is called Neoland … I have two lands in my mind.’ We all have two lands in our minds, I suspect, and they both live on long after our imaginary friends have faded. One part of us marches forth into the world and plays along, working and striving and performing as a sane and dutiful citizen, sibling, parent and friend. 
And then there’s the more lawless part, the second land that unfurls behind the scenes, gives space to wilder dreams or to thoughts less explicable by language, unhooked from reality. We do not, just because we grow up, lose our capacity for fantasy, or imagination; it simply comes out in other ways. There are some examples of imaginary friends persisting into adulthood: Kurt Cobain of the band Nirvana had an imaginary friend called Boddah when he was a boy, and it was Boddah to whom he addressed his suicide note at the age of 27. A few adults in Majors’s research revealed that their imaginary friends had remained as some sort of presence throughout their lives. But perhaps that second land, the one of uninhibited fantasy, just changes shape and finds itself played out in adult unrealities — in the diversions we seek through novels, films, art. The percentage of writers in the study who reported that they had imaginary friends as children was more than twice the average But consumption is not the same as creation: it’s one thing to indulge in someone else’s story, and another to invent that story oneself. A study led by Oregon’s Marjorie Taylor, published in the journal Imagination, Cognition and Personality in 2003, suggests that the closest parallel in adult life to the invention of imaginary friends is the fiction writer’s creation of character. The study’s authors call the phenomena ‘the illusion of independent agency’, which ‘occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions’. An author invents a character on the page, and before long feels like that character has a life of its own, and that the author is simply there to record its independent decisions and movements. Not long ago, I interviewed the author Hilary Mantel and she explained her writing process as being similar to that of a medium, like the character of Alison in her novel, Beyond Black (2005). This is how Mantel describes Alison at work: ‘She starts a peculiar form of listening. It is a silent sensory ascent; it is like listening from a stepladder, poised on the top rung; she listens at the ends of her nerves, at the limit of her capacities … The skill is in isolating the voices, picking out one and letting the others recede.’ Mantel described herself as ‘skinless’, so highly sensitive that she can become aware of other beings who seem to already exist beyond the limits of her own mind. She does not consciously invent her characters so much as tune in to their presence. In her memoir Giving Up the Ghost (2003), she reveals that she spent much of her time as a child pretending she was a medieval knight, obsessed with King Arthur and his court. Her adult occupation is simply an advanced form of her childhood game: she still spends her time in the vivid company of imagined historical characters, but is now able to commit that fantasy to the page. E M Forster had it slightly differently. In Aspects of the Novel (1927), he wrote: ‘The characters arrive when evoked, but full of the spirit of mutiny… They “run away”, they “get out of hand”: they are creations inside of a creation, and often inharmonious towards it.’ Not only that: if the author tries to keep his characters too ‘sternly in check, they revenge themselves by dying, and destroy [the book] by intestinal decay’. A work of fiction, then, can only be successful if it is animated by some living energy distinct from the controlling hand of the author. 
The characters must have their own agency, and rebel. The adult writer, like the child with her chorus of imaginary friends, knows that his characters are fictitious. Both are aware that these people are born of their own minds, but both also have enough faith in the imagination to respect its offspring as in some ways autonomous. Taylor suggests that both adult and child have developed ‘expertise in the domain of fantasy’. That is, they have become so good at imagining their friend or character that they are no longer conscious of the process of creation — the friend or character seems to arrive automatically, fully formed. There’s another link, too: the percentage of writers in the study who reported that they had imaginary friends as children was more than twice the average. These people have been pretenders all their lives. Towards the end of my phone conversation with Joe, I asked what he thought would happen to his imaginary friends, how he thought they might change with time. ‘I never think that anything’s changing,’ he said. ‘They’ll always be there, they’re never going to die. Because people in Sweek never die, that’s how special Sweek is.’ I might have thought, once, that this was yet another quirk of a child’s fantasy life: how sweet that he thinks his invented world and friends are immortal. But in a way he’s right. Fictions could outlast every one of us. | Sophie Elmhirst | https://aeon.co//essays/let-me-introduce-you-to-my-imaginary-friends | |
Biology | The strange biology of island populations highlights the role of chance, not just selection, in evolutionary change | Over a pint of beer, the great biologist, polymath and pub-lover J B S Haldane was asked if he would give his life to save his drowning brother. He is supposed to have said: ‘No, but I would to save two brothers, or eight cousins.’ He was referring to one of evolution’s puzzles: why animals (including humans) help one another. Under Darwinian natural selection, shouldn’t individuals always behave selfishly in order to maximise their chances for reproduction? Starting in the 1930s, Haldane was one of the first biologists to explain altruism by what we now call ‘kin selection’. An individual who is inclined to help family members is acting selfishly, from the point of view of their genes, as they are helping to ensure the reproductive success of their shared genetic material. You share, on average, half of your genes with your brother and an eighth of your genes with your cousin, hence Haldane’s nerdy joke. Although Haldane apparently understood the principle of kin selection, it was a further 30 years before another English evolutionary biologist, W D Hamilton, nailed the mathematics of the theory in The Genetical Evolution of Social Behaviour (1964), one of the most important works in the field of evolution since its inception. The Selfish Gene (1976) by Richard Dawkins, and many other popular science books, were based on kin selection theory, which exposed the selfish machinations and calculations inherent in apparently altruistic behaviour. So why hadn’t Haldane — a brilliant and inventive biologist — taken the idea of kin selection to its natural conclusion? In a startlingly honest interview for the Web of Stories website in 1997, the eminent evolutionary biologist John Maynard Smith, a former student of Haldane’s, said that this failure was partly political: I have to put it down, to some extent, to political and ideological commitment… We were, I think, very reluctant, as Marxists would be, to admit that anything genetic might influence human behaviour. And I think that we didn’t say consciously to ourselves that this would be un-Marxist so we won’t do it, that’s not the way that the mind works; but it was a path that our minds were not, so to speak, prepared to go down, in quite an unconscious sense, whereas Bill [Hamilton] was very prepared to go down it… to make big breaks in science, which Hamilton did, it’s not enough to have the technical understanding of some technical point, it’s got to fit in with your world view that you should pursue this road. Evolutionary biology, perhaps more than any other branch of science, is a political beast. In trying to explain how life arose, evolutionary biologists are tackling the prickly question of how — and why — we exist. Therefore it is not surprising that, as evolutionary biology advances, it is scrutinised and criticised every step of the way, down to its most fundamental level. We all know that some religious groups reject evolution in favour of a literal, biblical creation; and that others object to the social and moral implications of describing humans as just another animal species. But there are also the internal political struggles, which occur both within and between evolutionary biologists. As Maynard Smith’s recollections of Haldane indicate, there was an internal conflict, conscious or subconscious, between political and scientific viewpoints. 
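Haldane’s ‘two brothers, or eight cousins’ line is easier to parse once the relatedness figures quoted above are put into the textbook form of Hamilton’s rule (a standard summary of the 1964 theory, offered here as a gloss rather than as anything quoted from Hamilton’s paper). Altruism can spread when the relatedness r between actor and beneficiary, multiplied by the benefit B to the beneficiary, exceeds the cost C to the actor:

\[
rB > C, \qquad r_{\text{full sibling}} = \tfrac{1}{2}, \qquad r_{\text{first cousin}} = \tfrac{1}{8}.
\]

Taking the cost of Haldane’s hypothetical sacrifice as one life, C = 1, the rule breaks even at two siblings (2 × 1/2 = 1) or eight cousins (8 × 1/8 = 1), which is exactly the quip.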
After watching Maynard Smith’s interview (conducted, incidentally, by Richard Dawkins), I wondered whether these interfering world views were just a symptom of the charged political climate of Europe in the 1940s and ‘50s. Or perhaps they were intrinsic to evolutionary biology itself. And so it was with Maynard Smith’s words in my mind that I, an impressionable junior scientist, recently began to doubt some findings of my own. In 2009, I spent several months catching Berthelot’s pipits across islands in the North Atlantic. Every bird I caught was a precious data point, and I carefully measured each one, from beak to tail. Thanks to long-term research on Darwin’s Galapagos finches, island birds are poster boys for natural selection. Every biology textbook shows the way in which these species arise on different islands, each subtly different, with a different-shaped beak that is perfectly adapted to the food that it eats. Every undergraduate knows that these variations between finches on the Galápagos Islands gave Darwin some of his most important insights into how populations diversify and change over time. My colleagues and I knew that our pipits differed across islands, not just in beak shape but in overall size, the length of their legs, and the size of their wings, too. And these differences were substantial — pipits on one island could be consistently 10 per cent larger than another. Imagine, as a comparison, if one country’s people had heads that were a pound heavier than a neighbouring country’s. How was natural selection driving these pipits apart — was it all about food, or did sex, predators or parasites have a role to play? We were looking for a close fit between the environmental challenges that the birds faced in particular places, and the adaptations that had responded to those challenges. This would be a neat story of natural selection. But when I looked at the data, it seemed that none of these factors were responsible. Instead, the differences in these birds seemed to be down to history and chance. You will probably have heard that ‘scientists have used DNA evidence to uncover our African origins’. Indeed, it now seems certain that modern humans originated in Africa, and it is thanks to DNA that we can say this with such confidence. We know how DNA is inherited and how it changes over time, and can therefore make specific predictions about how much genetic diversity we expect to see in a group of individuals. Using increasingly complicated computer models, we can use these predictions to determine the exact signatures that different colonisation routes have written into our genes. And with modern DNA sequencing technology, we are able to get hold of huge amounts of genetic information from individuals across the world quickly and (fairly) cheaply. The language we speak, the shape of our skulls, even our ability to fight certain diseases, can be predicted with a remarkable degree of accuracy, just from our distance from Africa By comparing what the models predict about the patterns of DNA variation to what is actually observed, biologists have mapped our ancestors’ movements out of Africa in astonishing detail. Around 60,000 years ago, humans ventured out of Africa into the Middle East, and over the following 30,000 years their descendants spread across Europe and Asia, heading south-east through Thailand and Vietnam, across Indonesia and further south to Australia. 
It was not until around 15,000 years ago that a group from north-east Russia ventured across the Bering Strait, and colonised America from north to south. There are tens of thousands of years of history hidden in our genes. But figuring out where we came from is just the beginning. What we really want to know is if and how our ancestors’ historical movements shaped who we are today. And the answer is that yes they have, dramatically. The language we speak, the shape of our skulls, even our ability to fight certain diseases, can be predicted with a remarkable degree of accuracy, just from our distance from Africa. To understand how this occurs, we need to think about how humans moved from one place to another. Travelling across unknown lands is a dangerous business, so we would expect ancient humans to avoid travelling a long way except when they needed to, and they probably didn’t travel en masse. Instead, when humans colonised new lands, it was probably a few intrepid explorers looking for new pastures. And the DNA evidence bears this out. Across the world, we see ‘bottlenecks’ at the genetic level — signatures where a small group of individuals, carrying a relatively small number of genetic variants, have set up new colonies. This is the key to understanding why the most genetically diverse human population can be found in Africa, while the populations of further migrations are descended from a much smaller stock of brave (or desperate) migrants. When a new population is founded from just a few individuals, blind luck suddenly becomes very important to its biology. On the island of Tristan da Cunha in the South Atlantic, there are fewer than 300 permanent residents, and around half of them suffer from asthma. Researchers from the University of Toronto who studied the disease among islanders in the 1990s could find no apparent environmental or hygiene reasons for its high incidence, and concluded that genetics must play a role. The 282 residents they studied were descended from just 15 original settlers, who, it turned out, had an unusually high prevalence of asthma among them. Because the founders were forced to interbreed, asthma increased in prevalence throughout the population. Scientists call this phenomenon a ‘founder effect’. It is, in more technical terms, the change in frequency of a trait when a new population is formed by a small number of individuals. And it is founder effects such as this that have left their signature, in our bodies and in our genes, to spread like ancient footprints out of Africa to the rest of the world. Dawkins was fairly on-the-button when he described Not in Our Genes as a sort of scientific Dave Spart trying to get into ‘Pseud’s Corner’ If founder effects can explain so much variation within a species, is it possible that they can also account for bigger differences, even new species? In an article entitled ‘Change of genetic environment and evolution’ (1954), the evolutionary biologist Ernst Mayr suggested that founder effects could lead to entirely new species being formed. An ornithologist by training, Mayr had spent several years collecting birds in New Guinea for taxonomic study. He noticed that the bird species found on the most isolated, outer islands tended to be very different from their mainland relatives. Mayr wondered whether evolution might ‘speed up’ when a few individuals colonise a new area. 
After further research, he argued that, in small populations, new combinations of interacting genes could arise, which would in turn interact with natural selection and cause the population to undergo what he called a ‘genetic revolution’. Evolution would take a whole new path, and new branches of the evolutionary tree would eventually form, much more quickly than if natural selection alone had been the guiding force. Today, almost all evolutionary biologists agree that founder effects occur, and can explain variation among individuals within a species. But whether they persist over evolutionary time, and especially whether they are involved in the formation of new species, is a point of contention. Experiments with small fruitfly populations in the lab have almost all failed to produce the expected evolutionary change. There have also been theoretical critiques of Mayr and other proponents of his theory. To boot, there were problems with finding evidence in the wild. Human populations are very closely related, and any divergence has occurred, in evolutionary terms, relatively recently. But biological species are often separated by millions of years, and if any founder effects had occurred by this point, their footprints might well have been erased. The consensus is that, aside from a few examples, founder effects have not been a major force in shaping the tree of life. And it is here that I return to my island pipits, to John Maynard Smith, and to politics. Looking at my data, it seemed to me that founder effects were responsible for the differences that had arisen in these birds. I felt as though I had one of the clearest examples of how founder effects could persist in the wild, for many thousands of years. But at the back of my mind were doubts; island birds, remember, are emblems of the explanatory power of natural selection. Founder effects themselves are not adaptive, but the artefacts of colonisation by a small group of individuals. And there was Maynard Smith’s confession, which made me wonder if I’d been primed to find these results. Probably the biggest controversy in modern evolutionary biology began with E O Wilson’s landmark book Sociobiology: The New Synthesis (1975), in which he argued that many aspects of human behaviour could be explained by natural selection. A year later, in The Selfish Gene, Dawkins popularised the gene-centric view of evolution. The critics, led by Stephen Jay Gould and Richard Lewontin, were swift to respond. They accused Wilson and Dawkins of having a right-wing political agenda, of biological determinism, of justifying the status quo, and of excusing selfish behaviour and societal inequality. One of the most divisive questions was the relative importance of selection versus chance in evolutionary change. Gould and Lewontin summed up their arguments in the essay ‘The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme’ (1979). They argued that adaptationists such as Wilson and Dawkins wanted to interpret everything in the natural world as being shaped by natural selection to be perfectly suited to its environment. By contrast, Gould and Lewontin emphasised the improvisational tinkering that marked evolutionary development. Many of an organism’s features, they said, were like the spandrels or spaces between archways in a building such as St Mark’s Basilica in Venice: beautiful in their own right, and looking as if they were designed to be so. 
But in fact spandrels are just a by-product of using an arch to support a domed roof. Similarly, many biological traits or characteristics look as though they have a purpose, but are actually just by-products. For another example, the human brain is capable of completing the Times crossword, but nobody would seriously argue that the brain evolved for that purpose. Here I was, a left-wing scientist, with a scientific narrative that mirrored my political views. Had I, somehow, skewed my interpretation of pipit variation to fit my prejudices? When I first read ‘The Spandrels of San Marco’ as a postgraduate student, I was bowled over. Here were two great evolutionary biologists challenging many of the tenets of evolution that I had been taught at university, in particular the centrality of natural selection and adaptation in explaining variations among organisms. It was science and it made perfect sense, but it was also subversive. As a proud leftie, I loved it. I read more Gould, Lewontin and other like-minded biologists, and became increasingly convinced by their arguments. The role of chance and contingency in evolution seemed to free us from a tightly deterministic reading of biology. History, to Gould and Lewontin, was an open-ended, unpredictable process. However, I was conscious that the scientific basis of this view was highly controversial. I realised just how controversial when I first read the New Scientist review of the book Not in Our Genes (1984) by Lewontin, Steven Rose and Leon Kamin. Dawkins wrote the review, and it was devastating — Gould and Lewontin’s ‘adaptationist paradigm’ was a straw man, and I had to concede that Dawkins was fairly on-the-button when he described Not in Our Genes as a ‘sort of scientific Dave Spart trying to get into “Pseud’s Corner”’. Even admirers of Gould and Lewontin lamented their views on natural selection. The biologist Jerry Coyne, a former student of Lewontin’s, wrote that: When I was at Harvard with Dick [Lewontin] and Steve [Gould], it was almost as though selection was a forbidden topic — just once I would have liked either of them to have admitted openly, ‘Yes, of course selection is the only plausible explanation for adaptations.’ In their fight against unthinking adaptationism, they nearly threw the baby out with the bathwater. Coyne had a point. Dawkins had a point. But I still felt that Gould and Lewontin were on to something. I was confused. As it turned out, my research would throw me right back into this mess. Founder effects fit perfectly into Gould’s point of view (and he had argued for them in his own research on land snails). It’s all there: random chance and blind luck versus survival of the fittest and the evolutionary arms races of Dawkins et al. And Mayr’s original work on ‘genetic revolutions’ had clearly struck a chord with Gould. Genetic revolutions were central to the theory of punctuated equilibria, in which Gould, along with the biologist Niles Eldredge, argued that the tree of life does not grow gradually and smoothly: rather, species remain unchanged for long periods of time punctuated by brief, dramatic periods of change. Here, as with much of Gould’s work, were striking parallels between scientific and political debates. A society characterised by steady change will oppose major transformations such as revolution. Punctuated equilibria, on the other hand, fitted in with the political theory of human societies described by the likes of Marx, Engels and Hegel. 
As soon as I heard the Maynard Smith interview, all the conflict and confusion I had felt when I first read about the sociobiology debate — Gould, Lewontin, punctuated equilibria, politics — all that I’d stored away and half-forgotten, came flooding straight back. Here I was, a left-wing scientist, with a scientific narrative that mirrored my political views. Had I, somehow, skewed my interpretation of pipit variation to fit my prejudices? Worse, had I subconsciously skewed the results? I checked and double-checked, and found the same thing. I tried finding new ways of looking at my data, but still I came to the same conclusion. The founder effects looked real. If I had done something wrong, I couldn’t figure out what it was. Political beliefs affect science at many levels, from decisions on what research is funded, to the subconscious biases of individual scientists. And for my part, I am sure that my political views have influenced my scientific research, and all along I haven’t had a clue. We constantly make subjective decisions as scientists: which questions get us fired up, which do we ignore, when do we consider a result significant enough to publish, how do we approach an analysis, and how do we interpret our findings. We strive for objectivity, but we can never truly achieve it. Instead we can but hope that the self-correcting process of science weeds out the rubbish, and that truth emerges over time. So maybe radical scientists are not such a bad thing after all. Perhaps the likes of Gould and Lewontin, who are able to take a step back and look critically at their whole field, play an essential role in keeping science in check, and therefore in moving it forward. They might have overstepped the mark at times, but their critique of adaptationism was one that needed to be made, and is one that has improved the scientific rigour of evolutionary biology overall. Biologists are now much more careful of inventing adaptive explanations for everything they see, and are more amenable to non-adaptive explanations. As for my paper on pipits, I’m at the nerve-racking stage of submitting it for peer-review. After checking and double-checking I can only conclude for now that the founder effects were real, and hope that the peer-review and, more importantly, post-publication scrutiny of fellow scientists will ferret out any problems. Perhaps the best plan will be to find a capitalist lapdog to review it for me. | Lewis Spurgin | https://aeon.co//essays/how-much-does-evolution-depend-on-chance | |
Ethics | Our genes and our culture tell us to live as long as possible – even when living is misery. Should we listen? | My last remaining grandparent is 97 years old. She has dementia, can barely hear, has no real friends, and spends most of her waking hours shouting ‘Garbage!’ at the TV that sits in the corner of her small room in an assisted living home. It’s not all grim. My brother and I love her, and she gets to see her daughter, my mother, for a few hours every week. The highlight of these visits is when my grandmother re-reads an eight-page autobiography that she wrote years ago. Because she can’t remember reading it the previous week, and only faintly recalls her own history, it never fails to fascinate her. Unfortunately, there is little else in her life that we could call enjoyable. Yet when my mother asked my grandmother if she was happy to be alive (yelling the question directly in her ear), she replied: ‘Of course! Everyone wants to live as long as possible.’ Well, not everyone. There would be no ‘right to die’ movement if chronic illness didn’t render some lives physically intolerable. But mostly she’s right — no matter how dismal our lives look to outsiders, most of us try to stave off the grim reaper as long as we can. And from an evolutionary perspective, this makes a lot of sense. The human genome wouldn’t have amounted to much if it had left us indifferent to our own survival. We need to want to live. But this raises a troubling question. Is our lust for life nothing more than a biological scam, a bit of selfish-gene trickery that keeps us above ground so that we breed and care for offspring? And if so, should we keep marching along like the compliant grinning fools our genes take us to be, or should we resist? Of course, our genes cannot really take us for anything, nor can we resist them or even know what it would mean to do so. We are not dualistic beings who are part rationality and part biology; we can’t rebel against our biological selves any more than we can leap out of our own skeletons to stretch our legs and get some fresh air. Nevertheless, it is possible for us to ponder nature’s general workings and sabotage the evolutionary bottom line. We already do this with birth control, which allows us to reap the benefits of reproduction-incentivising pleasure without having to endure the burdens of reproduction. If we’re not opposed to hacking biology, why not subvert our desire to live as long as possible, too? Clearly, we might have reasons for living longer that fall outside of self-interest. We might want to enrich the lives of our family and friends, or see our life’s work through to completion. But is staying alive ever a smart choice for us on purely selfish, non-reproductive, grounds? Can the aversion to death that whips us onward withstand rational self-reflection? Or is it merely the Machiavellian machinations of our naturally selected genes, in a conspiracy with the pro-life cultures, traditions and ideologies that grew out of an unquestioning obedience to these survival urges? The answer depends on what happens after we die. Most religions hold that our consciousness and identity persist after death, in one otherworldly realm or another, but it’s rarely the same one. Indeed, there are a wide variety of possible destinations for those who believe in an afterlife, most of them assigned according to our behaviour during life. Some of these places are better than life as we know it, but some are worse. 
If there is an afterlife, whether you want to live a long life or a short one would depend to some extent on whether your afterlife would be an upgrade or a demotion. Those who believe that they are going to hell, or even somewhere just slightly less agreeable than Earth, would be wise to eat right, exercise and look both ways before crossing the street. To those bound for a better place, on the other hand, an early demise has something to recommend it. There are a few living authors who claim to have visited heaven, but judging from their cheerful media appearances, that first hit of eternal bliss hasn’t left them overly desperate for another. Heaven is no heroin, apparently. The story of Colton Burpo, the four-year-old boy whose trip to the great beyond inspired his father’s book Heaven is for Real (2010), reassures us that our souls resemble ourselves in our 20s and 30s, thereby undercutting another reason to long for a premature end. There’s no need to die young in order to look your best for the pearly gates, if strolling in as your ideal self is part of the deal. In any case, the longest conceivable human life is so minute compared with the unfathomable enormity of forever that, if there is an eternal afterlife, any mortal span will vanish into statistical insignificance by comparison. The same goes for the hellbound: an extra few decades of mortal life isn’t much of a buffer against an eternity of lava baths and acid showers. If there is any risk of being reincarnated as a dissatisfied pig, you’d be wise to cling to human existence for as long as you can Length of life can still matter, but only if our actions in life determine our eternal destination. If getting to heaven is a matter of having certain beliefs or performing certain deeds, it would be devastating to expire before you managed to meet those requirements. On the other hand, if you start off with the right beliefs and then lose your faith — or rack up a few unforgivable sins in your final hours — you would have eternity to wish you had died a little younger, before you’d slipped up. Reincarnation addresses some of the problems of a more straightforward afterlife. It overcomes the monotony of eternity by slicing it into an endless array of new bodies and existences, with the same soul or consciousness (but not necessarily memory) threading them all together. The trouble is, not everyone agrees about how new incarnations are assigned, and that makes reincarnation one of the trickiest afterlives to game. If our next lives are assigned randomly, then determining whether it makes sense to stay in any particular life would be a matter of gauging the world’s average life quality and figuring the odds that you will have better luck next time. Rebirth rewards pickiness; it makes sense to rush through any dull or deprived lives, but when you luck out with a cinematic life, the sort of life that has you deftly making hard choices, winning hearts and being knighted before the end, you had better stay on through the outtakes and pray that there’s a sequel. But what if our initial life circumstances aren’t all down to chance? In Reincarnation and the Law of Karma (1908), William Walker Atkinson, a pioneer of the New Thought movement, says that our previous life experiences influence our future lives, though not necessarily through divine punishment or reward. 
Atkinson discouraged a moralistic approach to reincarnation, but even so, he couldn’t definitively disprove the popular belief that working hard and being kind to others would give us a better life in the future. If that’s the case, then casual self-destruction to escape a less-than-ideal reincarnation might only perpetuate the downward spiral. A smarter strategy would be to quickly learn how to avoid harming others, then join a heroic, risky profession and hope for the best. Of course, there are drawbacks to dying young in a reincarnation-based world. The quicker you die, the more time you spend repeating the early stages of life, not to mention any between-incarnation resting periods or soul-scrubbings. For example, if you found adolescence excruciating, you would want to prolong even your most dreadful incarnations, in order to put some space between childhoods. On the other hand, if you loved school and the thrills of young adulthood and never cared much for maturity, you should aim for a rock-and-roll death in your 20s every single time. The possibility of being reborn as a member of a different species complicates things further. No matter how you would answer the question ‘Is it better to be a pig satisfied or a human dissatisfied?’ (I personally think there’s a case for the satisfied pig) the fact is that, with billions of animals being factory-farmed around the globe, most of us are lucky to be humans. If there is any risk of being reincarnated as a dissatisfied pig, you’d be wise to cling to human existence for as long as you can. And while you’re at it, maybe you should also protest factory farming to juice your karma. The dead aren’t stuck in a limbo fretting over the books they could have written or the people they could have loved In the 1880s Friedrich Nietzsche revitalised a perverse twist on reincarnation with his take on the doctrine of eternal recurrence, the notion that we all have one life to live, but we repeat every moment of it in an unremitting, infernal loop, each run feeling like the first. In The Gay Science (1882), a sadistic smirking demon explains the concept: This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it, but every pain and every joy and every thought and sigh and everything unutterably small or great in your life will have to return to you, all in the same succession and sequence — even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust! If this is indeed what’s in store, then ‘Live fast and die young’ could be prudent advice. If your life was short, but looped, you’d be eternally young, a condition that is much sought-after. But not everyone can count on perpetually recurrent golden years. Planning for tomorrow and saving the best for last could backfire if you die suddenly, leaving all your best years laboured for and dreamed about, but never lived — always in mind but eternally out of reach. Finally, there is annihilation to consider. Suppose it’s true that our souls don’t float off to another dimension, and we live this life only once. When we die, our subjectivity shuts down altogether. The world marches on, but not us. If all that we are is obliterated at death, our current lives are all we’ll ever have. This might seem to make a longer life more important, but the truth is just the opposite. 
If annihilation is our fate, the selfish reasons for living longer become irrelevant because death renders lifespan moot. The ancient philosophers Epicurus and Lucretius are the most famous proponents of this position. For them, death was nothing to fear, because fear is a subjective experience — which is exactly what annihilation ends. Sure, you lose everything at death, but conveniently that also includes your perception of loss. In De rerum natura (50 BC), Lucretius rebukes those who mourn in anticipation all the so-called goods of life that death forever steals: No more for you the welcome of a joyful home and a good wife. No more will your children run to snatch the first kiss, and move your heart with unspoken delight. No more will you be able to protect the success of your affairs and your dependents. ‘Unhappy man,’ they say, ‘unhappily robbed by a single hateful day of all those rewards of life.’ What they fail to add is: ‘Nor does any yearning for those things remain in you.’ If they properly saw this with their mind, and followed it up in their words, they would unshackle themselves of great anguish and fear. A thought experiment might help to illustrate this view. Suppose you win a free vacation, with one catch: as soon as it’s over, the Vacation Awarding Committee erases the trip from your mind and reverses every other physical consequence of it. That includes the ravages of stress from sitting near a screaming toddler on the plane, the immune-bolstering benefits of relaxing in the spa, the money you won at the casino, and the epiphany you had about quitting your job and opening a scuba-dive shop. Even the length of the vacation has no lingering impact: no matter how long or short it was as you experienced it, once it’s over, you’re no older or younger than you were before you took the trip, and everything else in your life is just how you left it. Regardless of how this vacation goes, the moment you get back, everything will be as if it never happened at all. So once the trip ends — as it inevitably must — nothing about how it went makes a single lasting difference whatsoever. Does it matter to you, then, whether it was long or short? Annihilation at death is just like the end of this imaginary vacation. It erases our memories, along with everything else that happened to us during our lives. The dead aren’t stuck in a limbo fretting over the books they could have written or the people they could have loved. You cannot hurt, inconvenience or otherwise disadvantage someone who doesn’t exist. Indeed, if the annihilation story is right, nonexistence is all that death can bring us. If we don’t maintain any perception or knowledge of how our lives went or what happens afterwards, the only thing that can matter is the way our lives and deaths affect those who survive us. The vacation hypothetical is an attempt to approximate the impossible perspective of the non-existent. In The Metaphysics of Death (1993) — an essay collection devoted to this question of when death is a harm — the moral philosophers Bernard Williams and Fred Feldman each contribute essays that challenge Epicureans to judge the importance of lifespan from the perspective of the living instead of the dead. If Williams and Feldman considered my vacation-with-no-lasting-impact hypothetical, they would turn my attention towards the vacation itself.
However I feel, or don’t feel, about it afterwards, I’d obviously prefer to have a pleasant life/vacation while it’s occurring. And from here it’s not much of a stretch to suggest that we would want our vacation/life to extend as far into the future as possible, too. But for this as-it-occurs variation on the vacation thought experiment to remain analogous to life and death, it needs to be altered. After all, you might want the vacation to persist only in order to avoid returning to a dreary real world. If ending the vacation means it’s back to the office and the Tube, this is no longer equivalent to annihilation; it’s more like plummeting from heaven to hell. For it to resemble annihilation, we’d have to add that when the vacation is over and all traces of it are removed, you would no longer desire to experience any of the pleasures it offered, nor would you ever suffer from their lack. This should make it easier for some of us to accept the vacation’s eventual end. Does it mean that we’re now all indifferent to whether our lives are relatively short or long? Probably not. But all that proves is the impossibility of fully empathising with a perspective we don’t have, including our own future attitudes. When philosophers cry about the treasures of life we lose when we die, they do so from the perspectives of those who are attached to the intellectual, artistic, familial, productive and hedonistic pleasures of life. Annihilation is a radical, perhaps incomprehensible break from these attachments. If adulthood means putting away childish things, annihilation means putting away all things, and this is indeed a shocking change. But it’s a change to which all of us will seamlessly adapt. When we fear an annihilation death, we are like infants who fear adulthood because they won’t get to breastfeed anymore. There might nevertheless remain a nagging sense that we selfishly want to live longer to experience more, to neatly wrap up our life stories and to see what will happen in the world. But no matter how long we live, annihilation undermines it all. Even if we last to 125 and finally get to see affordable, mass-produced hover boards, we’ll all forget about them and everything else the moment we die, making it just like we never got to see anything at all — which kind of defeats the purpose of living longer in order to see more. Even if pleasure were valuable in and of itself, and not just as a rough and imperfect guide towards fitness-enhancing behaviour (as the theory of evolution holds), any extra praemia vitae we capture by living longer will ultimately — if annihilation is true — do us no more good than the bathtub of pennies I dreamed about last night. That’s not to say that there are no reasons for us to want to stay alive — survivors will be sad when we’re gone, and an early death could deprive the world of whatever positive things we might have contributed given more time — but if we assume annihilation, none of the selfish reasons to stay alive make sense in the end. So, please do keep living. Just don’t be so certain that you’re doing it for yourself. | Rhys Southan | https://aeon.co//essays/is-a-long-life-necessarily-a-good-life | |
Death | When someone close to you dies, the very fabric of your life is ripped to shreds. Is philosophy any consolation? | I’m dealing with the death of my father the way I deal with most things: by thinking, and processing those thoughts through writing, fingers to keyboard. Given my philosophical bent, these thoughts wander from his particular death to mortality in general. That might strike you as cold, excessively rational, analytic. But the only rule about grief is that there are no rules. Reactions to death cannot be neatly divided between the normal or abnormal, appropriate and inappropriate, right and wrong. We muddle through death as we muddle through life, each scrambling in the dark for a way through. At times like these, philosophers are of limited use because when they have talked about dying they have tended to focus on what it means for the one who dies. Plato, for instance, called philosophy a preparation for death, while Epicurus told us we had nothing to fear from dying. But such thoughts are not much use to those who die suddenly. My father had seemed fit as a fiddle, but he was struck by a heart attack and died on the spot. The same happened to his brother and his brother-in-law, while his own father was killed instantly by a stroke. It is as though the Grim Reaper enjoys playing a cruel joke on those who look intently ahead. Those who prepare to meet him face-to-face are just as likely to find he sneaks up behind them and takes them unawares. A much more useful philosophy would help us to prepare for the deaths of others. I have never been sure that philosophy does a good job of that. But perhaps a philosophical outlook can help us make sense of death when it comes close to us. It seems to me there are three dimensions to this: what the death means for the one who has died; what it means for those who survive; and, perhaps most of all, the sheer shock and surprise to find death not knocking at the door but crashing through it, uninvited. If death comes at the end of a long illness, most would say it is easier to face, even if nothing can fully prepare you for the day when it finally comes. However, when the death is sudden, almost everyone talks the language of incredulity. ‘It doesn’t seem possible.’ ‘It doesn’t seem real.’ ‘I still can’t believe it.’ I’ve heard phrases such as these again and again over recent days. My father had not lived in his native Italy for years but he was back for one of his periodic visits, and on this trip he seems to have been exceptionally sociable. The number of people who had chatted to him for the first time in years in the days before his death is extraordinary. And they all said that he seemed fit, well, healthy. Those who knew him longer talked about how they assumed this slim cyclist and vegetarian, who was a strictly social drinker and smoker, would outlive his generally stouter, less careful, more sedentary peers. I cannot share their feelings of impossibility. Yes, there are elements of unreality about it all but, for as long as I can remember, I have been more or less constantly aware that death can come to anyone at any time. So I’m genuinely not surprised when occasionally it does, even when it’s to a member of my family. Of course, some people will have a longer life than others, but I know that the rule of chance defeats the law of averages for any given person. So what surprises me is not that people die or get sick, but that other people continue to be so surprised when it happens. 
Am I unusual in this because I have devoted so much of my life to philosophy? I suspect the causal arrow is the other way round: I have devoted so much of my life to philosophy because I am unusual in this. After all, it is not as though the basic insight depends on a close reading of the Stoics, Socrates or Schopenhauer. I am talking about the facts about death that everyone knows, but cannot necessarily accept. All that it takes to ingrain them is the philosophical habit of turning easily understood ideas into the more difficult practice of how you perceive the world day by day. Cultural norms have their part to play, too. In the village where my father died and was interred, there is no hiding from death when it comes, no disguising what has happened. Every passing is marked by posters announcing the time and place of the funeral, so wherever you go there is always some reminder of death’s omnipresence. Nor are they coy about corpses. My father was laid out in a coffin in his brothers’ house, a glass lid allowing him to be seen, and his body kept cool by a refrigeration unit. People popped in and out all day to see him, often talking over his body. The ceremonial niceties, such as the huge free-standing electric candle sticks that were placed either side of his coffin, contrasted with the utilitarian brutalities of the thing. The undertakers, for instance, sealed up the metal lining of his coffin by putting on the lid, getting out a fiery soldering iron and welding him in. At the graveyard, he was interred into a wall, and after the priest had given a last splash of his holy water, two men in grubby clothes set to work bricking him up. It’s very different in Britain, where bodies are whisked away to mortuaries and seeing them is something you can avoid altogether without censure. The Italian way — at least the one I saw — has to be better. Everything here seemed designed to make you as aware as possible of the reality of death and its natural place in the life cycle. But such is the human desire to avoid facing mortality that even in Italy people seem genuinely shocked to see another one bite the dust. If anyone does look at the empty cubicles in the cemetery wall and thinks ‘One of these has my name on it,’ that thought seems to be swiftly buried with the body. These strange facts persuade me that the shock of death can be diminished by how we think about it, if we think about it as a reality and not just an abstract idea. If we really do take on board the fickle nature of fate, the inevitability of death and the randomness of its timing, then although there might be other things about a death that leave us devastated, the mere surprise and shock of it need not be one of them. There are, of course, plenty of other things about a death to get upset about, most obviously our sadness for the person who has died. However, philosophers have struggled to make sense of this and, as a result, have often concluded that there is simply nothing to be concerned about. The person has died. He cannot suffer in any way. There is no point in feeling sorry about what he might have missed out on because there is no longer anyone there to feel sorry for. The only people who can feel any pain are those who survive. I think there’s something deeply wrong about this.
The sadness that one feels for the deceased is not that he is, in a strange way, still around but unable to appreciate life, but rather that he is no longer around at all. He is not suffering but nor is he enjoying, savouring, loving, laughing, or appreciating either. That is the cause of our sadness, for him or, perhaps more accurately, for what the deceased could still have been. Many philosophers have been baffled by this, protesting that it is no more rational to feel sad for the unexperienced joys of the deceased than it is for those of the never born. But there is a huge difference between the time two people could have spent together in the real world were it not for an accident, say, and the time two people who had never been born could have spent together in a parallel, imaginary universe. The former did not come to pass when it very nearly could have, while the latter is just one of an infinite number of counterfactual possibilities. It takes a curiously impersonal perspective to assign the same value to both the unrealised experiences of purely hypothetical beings and those of people who lived and breathed. If we can delight in someone’s company, or even just derive enjoyment from a glass of good wine, then there is nothing irrational about feeling sad, perhaps painfully so, that someone we know who would have taken equal pleasure did not have the chance to do so. Philosophers flounder when trying to analyse grief because the phenomenon they are trying to capture is indeed complex and paradoxical. To see a corpse in a coffin, as dozens of people did at my uncles’ house, is to be confronted by something that is, and is not, someone you loved, and love. People often say of a corpse that it looks like the person is sleeping. I can’t agree. In every living thing, asleep or awake, there are small movements, subtle signs of vitality. The face of my father was completely lifeless, not a single muscle was flexed, his chest neither rose nor fell with his breathing. He was gone. And yet in a sense this was also clearly him, what remained of him. His still being there in that state was the clearest evidence that he was no longer there. You feel for him knowing that he feels nothing, he is now nothing. And that is precisely the source of your sadness for him. To a strict logician, this might seem incoherent. But even logicians must accept that there are times when an inconsistent description comes closer to the truth than our best attempt at consistency. The language and logic of philosophy are sometimes inadequate to capture some of the most important and real phenomena of life. At such times, the poetry of paradox comes closer than the deracinated prose of consistency. In a conflict between our best rational account and the profoundest felt experience, we should be careful of awarding victory to the rational unless we are sure it has delivered a knockout blow. Sadness for the days not lived is appropriate even when a person is old and has had a long, good life. Life is never long enough. We almost all decline before we have exhausted our capacity to suck the marrow out of life. I accept that mortality is necessary for life to have meaning, and that eternal life would be a sentence, not a reward. But I cannot accept that 80 years (if you’re lucky) of diminishing potency is enough, and that we should not feel sad that we – and the people we love – don’t have longer.
Despite their failings, it might still seem that the philosophers who preached placid acceptance of death were not too far wrong. The existential shock can be countered and, although sadness for the dead might be deep, it need not feel like the punch in the gut that typically characterises bereavement. But there is a third aspect of grieving that is not so easily ameliorated. When you lose someone very close to you, the very fabric of your life is ripped to shreds. The most common, and accurate, way of describing this is as a loss of a part of yourself. This is more than just a metaphor. When someone is close to you, his way of thinking, his thoughts and his biography become inextricably linked with yours. Where you end and he begins is not clear. No one would think it controversial to say that when you lose such a companion, then a part of your life is lost too. If, as I and many wiser minds have concluded, you are the sum of your experiences, thoughts, projects, plans and so on, then to lose someone who is such a big part of these things is indeed to lose a part of yourself. Perhaps this is a better way to understand sayings such as ‘It doesn’t seem real’ or ‘I can’t believe it.’ Experiencing the death of someone close to us, we have a sense that the world has been so transformed, so disfigured, that it is no longer familiar: we don’t know how to be in it. We clumsily say we can’t believe it. What we really mean is that we cannot conceive what it even means to be ourselves any more. This can be the case even when the person is no longer a regular part of one’s daily life, but has been a constant background presence for as long as you can remember. And yet, if the hardest thing is not that the other person’s life has ended but that our own has been ripped to shreds, then has grief become a deeply selfish thing? I don’t think so. The phenomenology of grief means that we cannot draw any simple division between self and other. We feel confused — are we crying for ourselves or for the deceased? — because our feelings for ourselves and the person we love can’t be neatly taken apart, just as we cannot neatly take ourselves apart from those to whom we are closest. Rather than being a purely selfish thought, the idea that someone was so loved that he became a part of you is the most profound form of appreciation possible. It’s perhaps also why death is so often marked by regret, which is not about just you or the one you mourn, but for the shared opportunities lost to you both. In a subtler way, the interpenetration of self and other extends to everyone whose lives we have touched. For me, the most affecting part of my father’s funeral was when I entered the church to find it packed, standing-room only. It was not that he had dozens and dozens of very close friends. It was that, over the course of his life, he had become part of the lives of many people in his village, usually in small ways that were nonetheless significant enough to motivate them to come out to remember him. This community had lost a part of itself, and they were grieving for that. As we live more and more atomised lives, perhaps this aspect of grief will diminish in importance. There will be plenty of free seats at my funeral, for sure. When the fabric of our selves is torn apart, we will increasingly find it made of a smaller and smaller cloth. Grief will become more limited but also more private, and perhaps therefore harder to bear. There is one more way in which philosophy can change the way we react to death. 
Aristotle said that you could not describe a life as good until it was over. What has gone well so far could still end in a catastrophe that would negate all that has gone before, while a hitherto awful life could yet be redeemed. I have seen the full stop of death, closing the final chapter of a life, making it possible to stand back, look at the whole, and say that it was good. Of course, any life story is littered with mistakes, bad times and failures, as well as successes. But in the case of my father, and of some others I have known who have died in recent years, there has been some comfort in the knowledge that the overall story was a good one. Maybe there were some decent chapters that still might have been written, but there could equally have been a cruel twist or two in the tale that would have led to a less happy ending. For the protagonist, better a good short novel than a tragic epic. There is nothing automatically soothing about this, of course. The reaper can, and often does, choose to type ‘The End’ after pages of misery, without bothering to bring any resolution. The last full stop that allows the ‘life well lived’ to be appreciated can also expose the life gone badly for all the horror that it was. That is just one reason why secular humanists should not overstate the extent to which a good, happy, moral life is possible without God. Of course it is. But bad and unhappy lives are also possible, and all too common. Philosophy provides little consolation for these, other than the knowledge that the pain is over. I am presupposing here that death is indeed the end, as I have throughout this essay. But much of what I have described will also resonate with those who believe in a life to come. People often say there are no atheists in foxholes: that in the face of death everyone clings on to some transcendental hope. In my experience that is simply not true, and even if it were, I always reply that there do not seem to be too many theists at funerals, either. The way that the bereaved behave suggests to me that they do not feel any confidence that their loss is a mere temporary inconvenience. My father believed that there was something more to come after this life and so, as he often did, he would have disagreed with much of what I have written here, while approving of the sincere attempt to work out the truth for myself, as he had done for himself. At the same time, his passing is a reminder that no one gets it all straight, and that even the best philosophy in the world can’t save us from ultimate extinction, likely in a state far from enlightenment. It has to be enough to have lived well and to have played a part in the lives of others too, and the only philosophy worth its salt is the kind that helps us to do just that. However we grieve, after the tomb is sealed, the ashes scattered, or the coffin buried, all we can do is get on with trying to make sure we write the best chapters of our own lives that we can, while contributing some good lines and passages to those of others. We can’t guarantee that the great editor of fate won’t ruin it by inserting an ugly ending. But we can give the bastard as little help as possible. | Julian Baggini | https://aeon.co//essays/would-philosophy-help-me-to-deal-with-my-fathers-death | |
Evolution | Evolution has changed all we know about how humans behave, compete and co-operate. When will economics catch up? | In 2008, as it was becoming clear that a once-in-a-generation financial crisis was upon us, a friend of mine who is a senior corporate executive asked me a peculiar question. Might evolutionary theory have something to say about what caused the crisis? Those of us who labour away in the biological sciences are unaccustomed to fielding questions from corporate executives, but I had founded a think tank called the Evolution Institute and my friend was an early supporter. These were desperate times; the financial crisis had exposed grave weaknesses in our basic understanding of economic systems. The reigning theoretical paradigms in economics had run out of credibility, having, at best, failed to predict the crisis and, at worst, helped to exacerbate it. Could evolutionary theory do better? Of course, economics has been crying out for interdisciplinary intervention since its inception. The field is caught in a tug-of-war between two ideas: the idea that we need market processes to proceed unhindered and the idea that a healthy economy requires regulation. The 18th-century pioneer of political economy Adam Smith observed that economies have a way of running themselves when left to their own devices. Without the meddling of overseers, Smith argued, a benevolent ‘invisible hand’ would emerge from the workings of the market itself. Yet Smith also knew that naked self-interest is often very bad for society as a whole. The industrial revolution and the great depression would demonstrate this danger best. Communism demonstrated the opposite danger, that too much regulation dooms an economy to stagnation. What the economic landscape lacks is an adequate theory to navigate the enormous middle ground between these two insights. Instead, policy is drawn from a hodge-podge of perspectives pulled from philosophy, the social sciences, and practical experience. Some thought a formal mathematical theory could fill this yawning theoretical void. In the late 19th century, the French mathematical economist Léon Walras aspired to invent a physics of social behaviour that would be comparable to Isaac Newton’s laws of motion. If the behaviour of human actors in an economic system could be explained with the same precision as Newtonian mechanics, it would be an achievement of the first rank. In 1874, Walras devised just such a theory, which became known as the general equilibrium model, but it was fatally flawed. His model made so many assumptions about human preferences and abilities that it required economists to think about humans too restrictively. Walras’s theory was based upon an imaginary creature that is often labeled Homo economicus, rather than the complex, flesh-and-blood Homo sapiens. The model also required restrictive assumptions about the environment inhabited by H economicus. This complicated, assumption-heavy theoretical apparatus allowed Walras to posit mathematical proof of the invisible-hand conjecture. Individuals striving to maximise their absolute utilities would also maximise the utility of the society as a whole — without any regulation at all. The project of devising a ‘physics of social behaviour’ was doomed from the start The flaws of the general equilibrium model are well-documented. 
In his essay ‘On the Definition of Political Economy’ (1844), John Stuart Mill described H economicus as ‘an arbitrary definition of man, a being who inevitably does that by which he may obtain the greatest amount of necessaries, conveniences, and luxuries, with the smallest quantity of labour and physical self-denial with which they can be obtained’. In his essay ‘Why Is Economics not an Evolutionary Science’ (1898), Thorstein Veblen lampooned H economicus as ‘a lightning calculator of pleasures and pains, who oscillates like a homogenous globule of desire of happiness under the impulse of stimuli that shift about the area, but leave him intact’. Even back then, Veblen regarded this conception of human nature as ‘some generations’ out of date. Most social scientists agree that H economicus bears almost no relation to H. sapiens, and yet, to this day, the general equilibrium model enjoys a dominant position in economic thought and policy. In the classic essay ‘The Methodology of Positive Economics’ (1953), Milton Friedman confidently assured his readers that the predictions of the orthodox model could be correct even if its assumptions were wrong. Walras’s theory prescribed an extreme laissez-faire approach to economic policy, and gave license to Friedman to argue, tirelessly, that just about anything would work better if government got out of the way — and presidents and prime ministers listened. The current economic paradigm owes its dominance in part to its prestige as a formal mathematical theory. Everything else in economics seems like a mish-mash of ideas by comparison. The strongest challenge to the dominant model comes from behavioural economists, who call for economic theory and policy based on H. sapiens, not H economicus. But, so far, behavioural economists have merely compiled a list of ‘anomalies’ and ‘paradoxes’ that are anomalous and paradoxical only against the background of the general equilibrium model, like satellites that cannot escape the orbit of their mother planet. They have not put forth a general theory of their own. Evolution might have a role to play in filling this theoretical vacuum but, first, it’s important to acknowledge that evolutionary theory is not at all like Newtonian physics. Newton could provide a complete mathematical description for the movement of physical bodies because their properties and interactions are relatively simple. When interactions become more complex, our ability to describe them mathematically breaks down. You can see this dynamic at play in complicated, non-living systems such as the weather, which can be very difficult to predict. But it is even more the case in biological systems or economic systems, which are not only complex but change their properties and interactions over time. No matter how alluring to the 19th-century imagination, the project of devising a ‘physics of social behaviour’ was doomed from the start. But that’s OK; a theory needn’t resemble Newtonian mechanics to be successful. Indeed, evolutionary theory achieves its generality in a very different way. Evolutionists have a conceptual toolkit that can be applied to the study of any aspect of any organism. This includes asking four questions in parallel, concerning the function, history, physical mechanism, and development of the trait. For example, species that live in the desert are typically sandy-coloured. How do we go about explaining this fact? First they are sandy-coloured to avoid detection by their predators and prey (a functional explanation). 
Second, the sandy colouration is achieved by various physical mechanisms, depending upon the species — fur in mammals, chitin in insects, feathers in birds (a physical explanation). What is more, the particular mechanism is based in part on the lineage of the species (an historical explanation) and develops during the lifetime of the organism by a variety of pathways (a developmental explanation). Answering these four questions results in a fully rounded understanding of colouration in desert species. All branches of biology are unified by this approach. This kind of thinking might seem far removed from economics and public policy, but it can be applied to core economic concepts, especially when we remember that evolutionary theory includes the study of cultural evolution in addition to genetic evolution. The evolutionary paradigm challenges assumptions that are so deeply embedded in orthodox economic theory that they aren’t even recognised as assumptions. For example, the general equilibrium model assumes that individuals strive to maximise their absolute utilities, but by contrast natural selection is based on relative fitness. It doesn’t matter how well an organism survives and reproduces in absolute terms. It only matters how well it does relative to organisms that employ alternative strategies. The traits that maximise the advantage of an individual, relative to the members of its group, are typically different from the traits required for the group to function as a co-ordinated unit to achieve shared goals. What’s good for me is not necessarily good for my family. What’s good for my family is not necessarily good for my clan. What’s good for my clan is not necessarily good for my nation. What’s good for my nation is not necessarily good for the global environment or economy. At every rung of this multi-tier hierarchy, self-serving behaviours threaten to undermine the performance of the higher-level unit. This potential for conflict, which I call ‘The Iron Law of Multilevel Selection’, lies at the heart of all theories of social evolution, and it poses a difficult problem for the ‘invisible hand’. If special conditions are required for higher-level functional organisation to evolve, how can one seriously maintain the notion that unregulated self-interest inevitably contributes to the common good? And yet, evolutionary theory does lead to a viable concept of the invisible hand, albeit one that is different from the received economic version. Indeed, the biological world has its own version of the invisible hand. Cells, multicellular organisms, and social insect colonies are all higher-level social units that function with exquisite precision without the lower-level units having the welfare of the higher-level units in mind. In most cases, the lower-level units don’t even have minds in the human sense of the word. These miracles of spontaneous organisation exist because selection operating on the higher-level units has winnowed down the small fraction of traits in the lower-level units that contribute to the good of the group. If the invisible hand operates in human groups, it is due to a similar history of selection, first at the level of small-scale groups during our genetic evolution, and then at the level of larger-scale groups during our cultural evolution.
Multi-level cultural evolution is still taking place all around us, as we can see if we learn to look closely. The European Union, for example, is a case of lower-level social entities (nations) struggling to form a higher-level social organisation (the EU). Economists and public policy experts can be forgiven for being wary of evolution as a theoretical framework, given its sorry history during the 19th and early 20th centuries. Even today, most people associate the term ‘social Darwinism’ with a cruel dog-eat-dog view of the marketplace (biological reality check: dogs don’t eat other dogs). But, today, a new kind of social Darwinism is emerging, and it actually favours co-operation. When Elinor Ostrom was announced as one of the recipients of the Nobel Prize in Economics in 2009, many members of the economic establishment were dumbfounded. Steve Levitt wrote on his New York Times Freakonomics blog that most economists had never heard of her or her work – he was chagrinned to count himself among them. Ostrom was a political scientist by training and (by her own account) an outlier even among political scientists. She received the prize for showing that groups of people who attempt to manage common-pool resources such as irrigation systems, forests, and fisheries, are capable of avoiding the tragedy of overuse, but only if they are able to regulate themselves through possessing certain design features. This was in contrast to received economic wisdom, which held that the only solutions to the ‘tragedy of the commons’ were privatisation (if possible) or top-down regulation. Ostrom established her claims with a worldwide empirical database of common-pool resource groups and with theories derived from political science, game theory, and (increasingly throughout her career) evolutionary theory. As part of the Evolution Institute’s multi-year project on rethinking economics, I was privileged to work with Lin and her postdoctoral associate Michael Cox for several years prior to her death last year. Our work showed that the core design principles of groups who successfully manage common resources followed the evolutionary dynamics of cooperation in all species and our own unique history as a highly cooperative species. We also argued that these principles can be generalised to a much wider range of groups than those attempting to manage common-pool resources. In my experience, based on hundreds of conversations with economists and public policy experts of all stripes, most are open-minded and unthreatened by the notion that evolution could help to illuminate the mysteries of economics. Indeed, economists often assume that their ideas are consistent with evolutionary theory, even if they don’t formally apply it in their work. The question I am asked, again and again, is: ‘What is the added value of an explicitly evolutionary perspective, compared with the way that I and my colleagues have been approaching our subject on our own?’ Without evolution, I answer, economics has no way to reconcile its yin and yang, the importance of self-organising processes and the importance of regulation. The evolutionary paradigm provides a new set of navigational tools for steering an intelligent middle course between extreme laissez-faire and ham-fisted regulation that have proven so disastrous in the past.
In pursuit of this new paradigm, the Evolution Institute has teamed up with the National Evolutionary Synthesis Centre, one of the National Science Foundation’s largest evolution-related centres, to hold a conference and series of workshops engaging dozens of experts from a melting pot of academic disciplines. The project has reached fruition in the publication this year of a special issue of the Journal of Economic Behavior and Organization entitled ‘Evolution as a General Theoretical Framework for Economics and Public Policy’. The articles in the special issue substantiate the claim that evolution can and should be used as a general theoretical framework for economics and public policy by addressing topics that have always been at the heart of economics and public policy, such as the efficacy of groups, the nature of institutions, self-organization, trust, discounting the future, and risk tolerance. The 13 articles in the special issue lay the groundwork for a paradigm shift that is long overdue. Of course, one reason to reserve judgment on the evolutionary paradigm is because it is so new. No theory immediately provides all the answers to the questions that plague its field, and no infant theory could possibly hope to explain a phenomenon as complex as the 2008 financial crisis. And yet, with more time and refinement, economists might find themselves agreeing with the biologist Thomas Huxley, who said, upon encountering Charles Darwin’s theory for the first time in 1859: ‘How stupid of me not to have thought of that!’ Visit the online evolution magazine This View of Life for a special issue on economics from an evolutionary perspective. Visit the Evolution Institute website for free downloads of the 13 articles comprising the special issue of the Journal of Economic Behavior and Organization. | David Sloan Wilson | https://aeon.co//essays/social-darwinism-is-back-but-this-time-it-s-a-good-thing | |
Life stages | I’m 43 years old now, damn it, and my life is amazing. So why am I comparing myself to some styled professional? | Today I have to go to the Department of Motor Vehicles to get my driver’s licence renewed. My current licence photo is 10 years old, so old that the carefree woman in the picture always takes me by surprise. Her hair looks unnaturally shiny. Her smile says, ‘I have nowhere in particular to be. Let’s go grab a cocktail!’ Today I have to say goodbye to that lighthearted girl, and welcome her older, more harried replacement. Today I have to stand in poorly marked lines with impatient strangers, reading signs about what we can and cannot do, what we should and should not expect. Last time I got my licence renewed, the first picture was so bad that the DMV guy laughed out loud. I was young and carefree then, so it didn’t bother me. ‘Show me,’ I commanded. He turned the screen around. My eyes were half-closed and my mouth was screwed up in a weird knot. Remember that scene in Election (1999) where they press pause just as Tracy Flick, the wannabe school president played by Reese Witherspoon, looks drunk and deranged? It was like that. The next photo turned out great, though, because I couldn’t stop smiling about the first. That’s not the mood I’m in today. Today, if the same thing happens, I’ll stew. They’ll take a second crappy photo of me and no one will be laughing. To them, I’ll be just another angry lady to tag and release back into the wild freeways of Los Angeles. When you visit the DMV, you realise that you can bestride the narrow world like a colossus for only so long — namely, until you’re about 39. After that, you’re not special anymore. You’re just another indistinct face in a sea of the nobodies. I have all of my father’s old driver’s licences. That’s the kind of thing you save when somebody dies — not their unpublished papers, not their shelves full of books, not their boxes of old photographs of girlfriends you never met before. You save the evidence of their trips to the DMV. Something about those little snapshots of my dad’s face, four years older, and then four years older again, standing up against that dark-red background they once used in North Carolina, slows my pulse a little and makes me find the nearest chair. My father was not one to smile for these photos. He did, however, open his eyes a little wider as the years went by, possibly to make himself look less old and grouchy. On 5 March 1973, he wore a red gingham shirt and matching red tie. He was about to turn 34. On 10 March 1981, he wore a V-neck sweater over a maroon shirt. He was about to turn 42, and he looked fitter than he was at age 34. On 14 March 1985, my father looked as tan as George Hamilton. On 13 March 1989, he was about to turn 50, and he took his glasses off before they took the picture, maybe so he would look younger. His face was more relaxed and open than it was in the other shots. In his last licence photo, taken on 15 March 1993, he had let his hair go grey, and he looked relaxed and happy. Two and a half years later, he went to bed feeling a little bit sick, and died in his sleep of his first heart attack. The fact of someone’s premature death shouldn’t make everything they ever did seem tragic, but it still does. I wish I were enlightened enough to have a more uplifting story at the ready when I shuffle through these laminated cards. 
I wish I didn’t feel quite so melancholy about his life, neatly sliced into four-year intervals, his face transforming from young to older to oldest. What was he feeling at each moment when the camera flashed in his face? What buried shame or sadness bubbled up, what bit of longing worked its way to the surface in the bleak light of that DMV office? My father talked a lot about not wanting to get old. He visited his parents regularly, but it often depressed him. He didn’t want to live the way they did, growing stooped and wrinkled, smoking and bickering as they circled the drain. He seemed to have an unusually strong fear of ageing and death. He was very fit, and he was always juggling three or more girlfriends at once, one of whom was usually under 30. Old age made him anxious. Twenty-odd years later, I realise that most people feel this way so strongly that they’re hesitant to say it out loud. We can’t quite believe that we’ll grow old, too. At a certain point, we start counting the years we might have left, if we’re lucky. We become more pragmatic. We take what we can get. We don’t need big signs to tell us what we should and should not expect. Ten years ago, when that last driver’s licence photo was taken, I was 33 years old and weighed 125 pounds. In the photo, my face is lean and tan because I went hiking every single morning. I worked from home and made good money as a freelance writer. I read a lot. I adopted house plants. I wrote songs on my guitar. I was so young, I had no idea how young I was. [Photo: Heather Havrilesky at 33, photographed by the DMV] But before you go flipping between the 33-year-old, with her broad smile, and the 43-year-old, with her vague look of world-weariness, keep in mind all the things that happened in the 10 years in between: I dumped my boyfriend. I found a full-time job. I bought a house. I got married. My stepson moved in. I had a daughter. I wrote a book. I had another daughter. I quit my job. A close friend died of cancer. When you glance from one licence to the next, you don’t see the long nights I spent tossing and turning, working up the courage to ditch my boyfriend. You don’t see me painting the walls of my house alone, trying to accept my uncertain future. You can’t see me driving through the south of Spain with my future husband, or big and pregnant a year later, pulling weeds out of my front yard in a fit of hormonal mania. You don’t hear the breast pump — ahwooonga, ahwoonga — or feel that sinking guilt I had when I left the baby at day care for the first time. You don’t see me at the beach with my kids, smearing sunscreen on my face and hoping that no one eats sand when I’m not looking. You don’t see my hands shaking as I crush up pills, trying to help my friend die a peaceful death of colon cancer, wondering if there even is such a thing. A lot can happen in 10 years. You can’t be carefree forever. But when I was just 33, I thought that I would never have the bad taste to grow old, let alone allow it to depress me. I thought I was better than this. What is youth, but the ability to nurse a superiority complex beyond all reason, to suspend disbelief indefinitely, to imagine yourself immune to the plagues and perils faced by mortal humans? But one day, you wake up and you realise that you’re not immune. When my driver’s licence photo arrives a week later, it feels like an omen of my impending decline.
My hair is limp and scraggly, I have dark circles under my eyes. I look like the ‘after’ photo in one of those photo essays on the ravages of crystal meth. I have the blank but guilty look of a sex offender. It’s maybe the shittiest photo of me ever taken, and now I have to carry it with me everywhere I go. On the bright side, my husband and I spend a good half-hour passing the licence back and forth, laughing at how hideous it is. But privately, I wonder if I have the face of a woman who missed out on something. This is the shape my mid-life crisis is taking: I’m worried about what I have time to accomplish before I get too old to do anything. I’m fixated on what my life should look like by now. I’m angry at myself, because I should look better, I should be in better shape, I should be writing more, I should be a better cook and a more present, enthusiastic mother. I go online looking for inspiration, but all I find is evidence that everyone in the world is more energetic than me. Thanks to blogs and Twitter and Facebook, I can sift through the proof that hundreds of other people aren’t slouching through life. They’re thriving in their big houses in beautiful cities, they’re cooking delicious organic meals for their children, and writing timely thank you notes to their aunts and uncles and mothers for the delightful gift that was sent in the mail and arrived right on time for Florenza’s third birthday. Forget those weary strangers at the DMV. This country is apparently populated by highly effective, hip professional women, running around from yoga class to writing workshop, their fashionable outfits pulled taut over their abs of steel, chirping happily at each other about the upcoming publication of their second poetry chapbook — which is really going to make the move to the remodelled loft a little hectic, but hey, that’s life when you’re beautifulish and smartish and hopelessly productive! It’s not enough that I know all about their countless hobbies and activities and pet projects and book clubs. I’m also treated to professional-looking shots of their photogenic families, their handsome, successful husbands and their darling children who are always hugging kitty cats or laughing joyfully on pristine beaches, children who are filled with wonder around the clock. Their children never pee in their Tinker Bell undies by accident and then whine about going commando, just for example. But maybe that’s because their children have parents who never lose their tempers or heat up frozen fish sticks for dinner or forget to do the laundry. Their kids have parents who let them sleep under the stars at Joshua Tree, and no one soils her sleeping bag or has a bad trip from too many corn-syrup-infused juice boxes. Dear sweet merciful lord, deliver me from these deliriously happy parents, frolicking in paradise, publishing books, competing in triathlons, crafting jewellery, speaking to at-risk youth, painting bird houses, and raving about the new cardio ballet place that gives you an ass like a basketball. Keep me safe from these serene, positive-thinking hipster moms, with their fucking handmade recycled crafts and their mid-century modern furniture and their glowing skin and their optimism and their happy-go-lucky posts about their family’s next trip to a delightful boutique hotel in Bali. I am not physically capable of being that effective or that effusive.
I can’t knit and do yoga and smile at strangers and apply mascara every morning. These people remind me that I’ll never magically become the kind of person who shows up on time, looks fabulous, launches a multimillion-dollar business, and travels the world. When I was younger, I thought I might wake up one day and be different: more sophisticated, more ambitious, more organised. Back then, my ambivalence, my odd shoes, my bad hair seemed more like a style choice. When you’re young, being sloppy and cynical and spaced-out looks good on you. But my flaws are no longer excusable. I need to fix everything, a voice inside keeps telling me. It’s time to be an efficient professional human, at long last, and a great mother and an adoring wife. It’s time to shower on a predictable schedule. No matter how fervently I try to will myself into some productive adult’s reality, though, I’m still that 43-year-old superfreak in my driver’s licence photo. Some day, one of my daughters will hold this licence in her hand and feel sorry for me, long after I’m gone. ‘She was only 43 in this one. But, Jesus, look at that awful hair. And that look on her face. Why does she look so down? Or is that fear? What was she so afraid of?’ I don’t want my daughters to look at me — then or now — and see someone who’s disappointed in herself. At the very least, I have to change that. One Sunday morning, when I was running out to get some groceries, I saw a big woman standing on the sidewalk, waving a Yard Sale sign around. She was wearing an outfit that didn’t compliment her body. Her boobs were jiggling and bouncing in a wild way, but she was smiling and shaking this big piece of cardboard with something scrawled on it. You could barely read the words. The writing was in ballpoint pen and maybe she ran out of room for the address because the last part was squeezed in there, and then there was this huge space under the words anyway. The whole thing was very unprofessional, the kind of thing that, if I had done it myself, I would’ve ripped it up, declaring it unacceptable, and then I would’ve complained about how I didn’t have anymore goddamn poster board to start another sign. Then I probably would’ve blamed my husband for not buying more poster board at the drugstore. ‘When I say get some poster board, that word “some” means more than one piece.’ I also would not have put on that outfit, if I were as big as she was. I’m not slender, mind you. But let’s be honest: if I were her, I would’ve looked in the mirror and moaned softly and then crawled back into bed. Even with a perfectly good outfit on, I wouldn’t have agreed to stand on the curb with a bad sign, drawing attention to myself. No way. If I were her, I would’ve made my husband stand around with the sign, and then I would’ve blamed him when the yard sale got too crowded and hectic. ‘Where have you been? I can’t handle this whole thing on my own! This was YOUR IDEA IN THE FIRST PLACE!’ But that morning, I sat at the intersection in my idling car and watched that woman bouncing around, and even though I was in a bad mood, she made me smile. She had swagger. She didn’t give a shit that she looked a little unwieldy out there, jumping up and down, boobs jiggling. She didn’t care that her sign sucked. And the drivers in the cars next to me were smiling and waving at her, and some of them were men, too.
They weren’t giving her a cheap, ‘Hey there, little hottie!’ wave, they were giving her an appreciative, you-made-my-morning wave. They liked the cut of her jib. And so did I. I need to be more like that woman. I’m 43 years old now, goddamn it, and my life is amazing. So why am I comparing myself to some styled professional in my head? Right now in my life, I keep ripping up the stupid sign and starting over. I keep saying: ‘This is all wrong. YOU are all wrong.’ I keep saying: ‘You messed up. You should be on your third novel by now. You’re running out of time.’ When did I fall into the habit of seeing myself in such a bleak light? That woman on the curb probably looks great in her driver’s licence photo, because she isn’t afraid of falling short. No one can tell her what she can and can’t do, what she should and should not expect. I guarantee you, that woman doesn’t give a fuck about mid-century modern furniture or organic dairy farms in Wisconsin. Maybe her house needs to be vacuumed, and her hair colour needs a touch-up. So what? She doesn’t do yoga and she doesn’t consider that a personal failing of hers. She doesn’t ask herself whether or not she has it all. She has other stuff to do. She looks in the mirror and sees a dishevelled fortysomething and she feels good. She is just a person in the world. She’s not indistinct, though. She knows that she’s someone with ideas, with spirit, with heart. She is someone who can make strangers smile and feel really good inside, for no reason at all. That’s what it looks like to accept what you have. That’s what it looks like to feel grateful for who you are, in all of your messy, fucked-up glory. The next time that DMV flash goes off in my face, I’m going to think about her. | Heather Havrilesky | https://aeon.co//essays/how-did-i-end-up-growing-old | |
Bioethics | Animals have thoughts, feelings and personality. Why have we taken so long to catch up with animal consciousness? | I met my first semipalmated sandpiper in a crook of Jamaica Bay, an overlooked shore strewn with broken bottles and religious offerings at the edge of New York City. I didn’t know what it was called, this small, dun-and-white bird running the flats like a wind-up toy, stopping to peck mud and racing to join another bird like itself, and then more. Soon a flock formed, several hundred fast-trotting feeders that at some secret signal took flight, wheeling with the flashing synchronisation that researchers observing starlings have mathematically likened to avalanche formation and liquids turning to gas. Entranced, I spent the afternoon watching them. The birds were too wary to approach, but if I stayed in one spot they would eventually come to me. They followed the tideline, retreating when waves arrived, and rushing forward as they receded, a strangely affecting parade. When they came very close, their soft, peeping vocalisations enveloped me. That night I looked at photographs I’d taken, marvelling as the birds’ beauty emerged from stillness and enlargement, each tiny feather on their backs a masterpiece of browns. I looked up their scientific classification, Calidris pusilla, conversationally known as the semipalmated sandpiper — a name derived from a combination of their piping signal calls and the partially webbed feet that keep them from sinking in the tidal sand flats of their habitat, where they eat molluscs, insect larvae and diatom algae growing in shallow, sun-heated seawater. I learned that semipalmated sandpipers are the most common shorebird in North America, with an estimated population around 1.9 million. My copy of Lives of North American Birds (1996) described them as ‘small and plain in appearance’, which seemed unappreciative, especially in light of their migratory habits. Small enough to fit in my hand, they breed in the Arctic and winter on South America’s northern coasts, flying several thousand miles each spring and fall, stopping just once or twice. The flock I’d watched was a thread in a string of globe-encircling energy and life, fragile yet ancient, linking my afternoon to Suriname and the tundra. At that fact, I felt the sense of wonder and connection that all migratory birds inspire. Yet not once did I wonder what they thought and felt along the way. How did they experience their own lives, not just as members of a species, but as individuals? It was a question outside my habits of thought, and occurred to me only months later, when I interviewed the American artist James Prosek. His compendium Trout: An Illustrated History (1996) had earned comparisons to the great American ornithologist and painter John James Audubon. Prosek’s paintings are indeed beautiful and his book, published while he was still an undergraduate, was shaped by a tradition of field guides and natural histories. Prosek had not personally encountered many of the trout and salmon species that he painted. Instead, he accepted on faith their place within established taxonomic classifications. But those classifications would soon be rearranged by the application of molecular genetics to the taxonomy of the salmonids, a rearrangement that encouraged Prosek’s deepening appreciation for how varied fish of the same species or subspecies, or even the same watershed, could be. The field guide notion of a species ‘type’ felt inadequate, even misleading. 
Prosek’s contemplations culminated in the glorious paintings of his latest book, Ocean Fishes (2012): he made a simple but profound decision to paint the specific, individual fishes he encountered. The Linnaean system of classification — a hierarchical naming structure introduced by the Swedish botanist Carl Linnaeus in 1735 — might describe the world and its generalities, implied Prosek, but it could not capture the richness of an individual life. Several months after meeting Prosek, I was walking in Jamaica Bay on a bitterly cold and cloudless day when I saw semipalmated sandpipers again, running ahead of a pounding surf that caught the afternoon sun and sprayed their retreats with prisms. As Elizabeth Bishop observed in her poem ‘Sandpiper’ (1955): ‘The roaring alongside he takes for granted,/and that every so often the world is bound to shake.’ I wondered what it would be like to be one of them, to run with the flock and feed in the surf, to experience life at their scale and society. Simply put, did they enjoy it? Were they cold? Did they remember their journeys, feel a connection to individuals with whom they’d flown, a concern for compatriots and mates? Asking those questions made me appreciate just how deeply I’d internalised the taxonomic system against which Prosek strained, as well as the habit of explaining animal behaviour in mechanical terms. I’d regarded the sandpipers as embodiments of their species and life history, but not as individuals, much less as selves. This oversight was not coincidental. The very history of taxonomy and attendant studies of animal behaviour is intertwined with a denial of individual animal consciousness. Scientific taxonomy began not in the 18th century with Carl Linnaeus but some 2,000 years earlier in ancient Greece, with philosophers who venerated rationality and the power of language. Before them, and especially during humankind’s long prehistory, animal deaths at our hands might have been necessary or justifiable, but they were also seen as unfortunate, and we offered thanks and apologies, as evidenced in paintings, artifacts and ritual. The most rationalistic of Greek thinkers washed their hands of such sentiments. Aristotle introduced the notion of binomial nomenclature, grouping animals by whether or not they had blood, and whether they lived on land or in water, in a hierarchy with humans at the top. In his view, animals were incapable of any sensations but pain and hunger. Brutal as this sounds, Aristotle was practically an ancient Peter Singer compared with the Stoics such as Zeno of Citium, who insisted that animals felt nothing at all. This view influenced early Christian thought and, eventually, René Descartes, according to whom animals were all body and no mind, no different from the lifelike mechanical toys popular in 17th-century France. Descartes’ influence is manifest in the infamous words of the French rationalist Nicolas Malebranche, who said in The Search After Truth and Elucidations (1674) that animals ‘eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.’ Not everyone agreed. Notable critics included Thomas Hobbes, Spinoza and Voltaire, but their objections held little sway in an era of triumph for mathematics and the physical sciences. It was an intellectual moment most unfavourable to what could be felt but not quantified. 
Thus beliefs about animals that would be considered psychopathic if acted out by a 21st-century child became tenets of Western scientific thought and, in this milieu, taxonomy as we know it took form. The science of taxonomy was driven by wonder and new discoveries in faraway lands, but this was not the whole of it. As Michel Foucault notes in The Order of Things (1966), people had always been interested in plants and animals. What taxonomy satisfied was not simply curiosity but a desire for an overarching order to the world. Linnaean classification was triumphant among dozens of competing, lesser taxonomical schemes, but they all served a common project of bringing nature’s wild diversity to Enlightenment heel, of putting the messy living world in tabular form. Linnaeus did have an extraordinary eye for detail, and combined his supreme ambition with a simple and powerful system for classification. It worked by comparing a few clearly visible, easily measurable anatomical traits: his natural history was based purely on surfaces. A century later, the French naturalist Georges Cuvier revolutionised taxonomy by introducing comparisons of internal anatomy, but, as far as the inner lives of animals went, this too was a superficial revolution. It was a science of gross anatomy, not of minds, reflecting Descartes’s mechanistic image of animals as assemblages of pieces. By the time of Cuvier, science had an entrenched species-first filter through which nature would be scientifically and culturally apprehended. Taxonomic science was, however, far from arbitrary. It was, and is, a wonderful means of describing the variations that do exist in the natural world. Taxonomy – or modern-day systematics – provides a language with which it is possible to understand the sandpipers in that crook of Jamaica Bay as being part of a related group including oystercatchers and common terns. With that language, it’s also possible to note that semipalmated sandpipers can live for more than a decade, take mates in monogamous relationships that may persist for years, eat a lot of horseshoe crab larvae while migrating, and have declined in population by roughly one-third since the 1980s. Most importantly, taxonomy was a scaffold upon which evolutionary theory could be built. Although Linnaeus had believed the variation among animals was an immutable arrangement and divinely apportioned, evolutionary thinkers realised that these were family resemblances, to be elucidated more than a century later by Charles Darwin. And the great beauty of evolution, its essential profundity, is in placing humans among animals, not only in body but in mind. Just as humans shared physical traits with other creatures, Darwin argued, so we also shared mental traits. The ability to think and feel was just another adaptation to life’s uncertainties and hazards, and, given our evolutionary relatedness to all other living things, it made no sense for them to be unique to us. ‘Even insects express anger, terror, jealousy, and love,’ he wrote in The Expression of the Emotions in Man and Animals (1872). His protégé George Romanes, who was an avid collector of anecdotes about intelligent cats and dogs, thought that animal behaviour should be interpreted in light of our own capacities. 
‘Whenever we see a living organism apparently exerting an intentional choice,’ wrote Romanes in Animal Intelligence (1884), ‘we might infer that it is a conscious choice.’ By emphasising the kinship between animals and human beings, Darwinian taxonomy could have opened the door to thinking about the consciousness of individual animals. But, instead, the opposite happened. Even as evolution’s mechanics were accepted and expanded, the views of Darwin and Romanes on individual animal consciousness were rejected, consigned to cautionary tales of how even the most brilliant scientists can get things wrong. By the 1940s, when the great systematist Ernst Mayr settled on a fuzzy but useful standard definition of a species — as a population with a common reproductive lineage that could interbreed — the possibility of animal consciousness and individuality, so evident to anyone with a pet dog or cat, was largely eliminated from mainstream science. We could accept our animal bodies, and classify ourselves on that basis, yet had to avoid the implication that animals might have human-like minds. A new age of machines and industry spawned the behaviourism of the psychologist B F Skinner who, echoing Aristotle and Descartes, proposed that animals were nothing but conduits of stimulus and response (as were humans). Seeming evidence of higher thought was an illusion produced by some simpler mechanism. It’s true that behaviourism helped to establish protocols by which animal cognition could eventually be studied in rigorous, scientifically acceptable fashion. But the price was steep: decades would pass before scientists began to allow that some animals might be more than biological automata. In the 1960s, Jane Goodall was mocked by her primatologist peers for speaking of chimpanzee emotions, such as a mother grieving for her dead infant. Even her use of gender-specific terms for individual chimpanzees was seen as anthropomorphic and unscientific. As the journalist Virginia Morell recounts in Animal Wise (2013), Goodall’s editor at the prestigious journal Nature tried to replace ‘he’ and ‘she’ with ‘it’ in her first manuscript. When the zoologist Donald Griffin wrote in The Question of Animal Awareness (1976) that biologists should investigate ‘the possibility that mental experiences occur in animals and have important impacts on their behaviours’, it was still a radical suggestion. These days, Goodall is a hero, Griffin a prophet, and studies of animal intelligence ubiquitous: not just in chimpanzees, dolphins and parrots, but in octopuses, archerfish, prairie dogs and honeybees — a veritable Noah’s Ark of braininess. Caveats remain, of course. Intelligence is relatively easy to study, but it isn’t quite the same thing as consciousness, nor emotional life. It’s been less controversial to ask whether rats remember where they stored food than whether one rat cares for another. Yet even rats, it turns out, feel some empathy for one another. A team at the University of Chicago found that rats became agitated when seeing surgery performed on other rats, and a follow-up study in 2011 found that, when presented with a trapped labmate and a piece of chocolate, rats free their caged brethren before eating. 
Those who study animal behaviour are still careful when talking about subjective experience — sure, Eurasian jays can guess what their mates want to eat, but who knows if they like each other? — but they’re being professionally cautious rather than dismissive. The average person can safely speculate away: animal consciousness is a reasonable default assumption, at least for vertebrates, and not just in some dim sense of the word, but possessing forms of self-awareness, empathy, emotion, memory, and an internal representation of reality. Many of the characteristics thought to be important for higher consciousness (such as brain size) and a sense of individuality (in humans and — maybe, just maybe — a few other great apes and cetaceans) aren’t so unique anymore, or are no longer considered very important. Features such as working memory and episodic memory — keeping multiple pieces of information in mind and remembering what has happened, the cognitive fundaments of conscious experience — appear to be widespread. And the environmental challenges that might prompt the evolution of consciousness are widespread, too. Among these is sociality: if you’re going to live with others, it’s very useful to be conscious of them. And the distinction between cognition and emotion is increasingly seen as a false one: certainly in humans, they are more or less inseparable systems. In July last year, a group of high-profile neuroscientists signed the Cambridge Declaration on Consciousness with the announcement that: ‘The weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.’ Those other creatures likely include a great many reptiles, amphibians and fish. They tend to be underappreciated because they’re even harder to study than mammals, birds and octopuses, and seem, well, a bit inscrutable. Consciousness is necessary to be an individual, to have unique thoughts and feelings rooted in one’s own experience of life — and the animal kingdom teems with it. Many scientists still don’t know this, or don’t accept it. The whale biologist Shane Gero is part of a research team that has conducted long-term sperm-whale studies off the island of Dominica in the Caribbean. These studies describe the dynamics of whale families in which children are, in a very real sense, the centre of their lives. Yet Gero told me of being chastised by colleagues for referring to animals by name rather than number. Pressure still exists to think not of individuals, but of general species traits that happen to be manifested in a particular animal. Gero has helped to decode the vocalisations that sperm whales might use as names, something that’s also been observed in dolphins, but this remains controversial. That’s why a visitor to the ‘Whales: Giants of the Deep’ exhibition at the American Museum of Natural History in New York can learn a lot about their skeletons, heart capacity and navigational abilities, but barely anything about their intelligence and social lives — arguably the most dynamic area of contemporary cetacean research. Not surprisingly, the science of animal personality is still young. 
Recognition of animal consciousness might be just a first step. Individual differences based on temperament and experience, again so obvious to pet owners, are a new idea in science. For sperm whales, says Gero, such differences were once dismissed as statistical noise or evidence of behavioural maladaptation. The blind spot is hardly restricted to whales. The article ‘Energy metabolism and animal personality’, published in the journal Oikos in 2008, pointed out that ‘personality will introduce variability in resting metabolic rate measures because individuals consistently differ in their stress response, exploration or activity levels….’ Animals that have ‘frozen’ with fear during capture might be misclassified as having high resting metabolic rates, when in fact a motionless rabbit with his heart racing might simply be scared. This seems like common sense, and in some respects the general public outpaces much of the scientific community, at least when it comes to the familiar animals we live with and know well. All those cute cat videos, reliably mocked as a symptom of our unintellectual internet habits, bespeak our era’s willingness to acknowledge the inner lives of companion animals. Not that they’re tiny humans in kitten suits, of course — indeed, part of the fun in knowing a cat (not to mention watching those videos) is the obvious disparity between their view of the world and our own. But neither are they entirely incomprehensible, per Ludwig Wittgenstein’s enigmatic statement: ‘If a lion could speak, we would not understand him.’ Wittgenstein probably never saw a pair of lion cubs at play. What might it mean to treat all vertebrates as having some form of consciousness and individuality? Animal welfare advocates campaign for the better treatment of companion and farm animals, which is a noble cause. But I am more interested in wild animals, our neighbours in nature. To the painter James Prosek, seeing wild animals as individuals offers a new and sorely needed conservation ethos. Biodiversity and ecosystem services make for well-meaning but often uninspiring rhetoric; they value nature generally, but provide little reason to care for actual creatures in a nearby forest or your backyard. Acknowledged as individuals, those sparrows, salamanders and squirrels are not interchangeable parts of a species machine. They are beings with their own inner lives and experiences. Does this mean we should never eat a salmon, or cut down a tree to build a house? Not necessarily. We might simply acknowledge the consequences of our actions, and offer apologies and thanks to those creatures we affect. It’s the sort of ethical equation people need to solve for themselves. For myself, I’d be happy to see a revival of naturalist language, the sort of charming, unapologetically anthropomorphic descriptions one finds in old field guides, written before the ascendance of the 20th century’s airless, specialist vernacular. It’s a voice heard in The Birds of Essex County, Massachusetts (1905), in which Charles Wendell Townsend described a ‘low, rolling gossipy note’ voiced by semipalmated sandpipers approaching other birds. He waxed eloquent about their courtship, the male ‘pouring forth a succession of musical notes, continuous wavering trill, and ending with a few very sweet notes that recall those of a goldfinch… one may be lucky enough, if near at hand, to hear a low musical cluck from the excited bird. This is, I suppose, the full love flight-song.’ It is the language of a man who cares. 
And what of the semipalmated sandpiper, a few of which I last saw at low tide on Labor Day? Is it appropriate to use words such as gossip and love, to think of their self-awareness? I put the question to the British ornithologist Tim Birkhead, whose latest book is Bird Sense: What It’s Like to Be a Bird (2012). He told me he couldn’t recall any behavioural tests of sandpipers, nor rigorous comparisons to crows or parrots, but still, he said: ‘You can guess that they have more sophisticated cognitive abilities than most people would give them credit for.’ Given everything we know about animal consciousness, and the primal nature of both our own emotions and our social bonds, it certainly seems reasonable to err on the side of personalising the birds. Birkhead told me an anecdote about a red knot — Calidris canutus, a close relative of the semipalmated sandpiper — found injured in 1980 on the north Dutch coast by a middle-aged couple. Jaap and Map Brasser named him Peter and nursed him to health. Peter never flew again, and lived with the Brassers and their dog Bolletje for nearly 20 years. Each afternoon he received half a loaf of bread, not so much to eat as to peck; Peter felt an instinctive need to forage, and became agitated if he couldn’t. At night he rested quietly at their feet, stirring when wildlife shows came on television. He and the dog became companions. Years after Bolletje died, recordings of his barks brought Peter running. That Peter would bond with a dog isn’t so unusual. Red knots are social birds and, as we’ve seen, sociality is a great evolutionary driver of consciousness. What was unusual was a change in Peter’s internal clock, which naturally guided his migratory transformation. Rather than following the seasons, it became synchronised to his new family. Ornithologist Theunis Piersma speculated that Peter ‘developed his own personal cycle and … stayed red as long as possible hoping that Jaap, Map and the dog would also become fat and change colour, after which they would all depart for Greenland.’ Of course, the Brassers knew Peter well, whereas I’ve only glimpsed semipalmated sandpipers. I can’t truly know what goes on in their heads. Yet at some point this becomes irrelevant: we can’t ever really know what goes on in another person’s mind, but we manage all the same. I’m happy to know simply that the birds I’ve seen have their own private worlds, their own sense of light and companionship. They go to sleep expecting to wake again. Perhaps they have names for each other. I just don’t know what they are. | Brandon Keim | https://aeon.co//essays/what-is-it-like-to-be-a-bird-the-science-of-animal-consciousness | |
Human reproduction | If you were a poor Ugandan mother with a desperately ill baby, would you turn to Western medicine or the village healer? | It was late February when I heard the story of a young seamstress whose baby girl had died from an unknown illness. The mother, Evelyn Bonabone, lived with her grandmother about an hour outside Mbarara, a compact, hectic city in western Uganda with a medical school and one of the country’s better hospitals – although that’s not saying much. Rather than visit this hospital with her child for treatment, Evelyn had been to a traditional healer named Nahayo, an old widow across the valley. Early one morning, I joined a health worker and we drove out of the city, beyond the trading post where the men ride in from the hills with improbably large bunches of bananas strapped to their rusty bicycles. We turned into the countryside and passed a soccer pitch, a health centre, and endless plantations of bananas. Our battered vehicle creaked and groaned as we began our ascent. Eventually, we pulled into a smaller track, carved into a loamy ledge overlooking an isolated valley, and stopped in front of a single-story home on the hillside. A couple of shirtless men were chopping down what appeared to be the very last tree on the property. Evelyn emerged in a worn tank top, her hair tucked under a vibrant head wrap, and a silver cross dangling from green beads around her neck. She brought us inside the living room, where a few plastic chairs were arranged against the walls. In one corner sat an immaculate old black Singer sewing machine and a battery-powered radio. Evelyn told us that she had been shattered by the death of her baby. She quit sewing and working in the fields, and slept for a week straight, wrapped in a sheet on the concrete floor of her bedroom. Her eyes grew watery as she thought back to her daughter’s birth one year earlier, helped by her mother’s mother. After a trip to Mbarara on the back of a motorcycle, the baby girl had trouble breathing and was crying excessively. The old women in her village gave her herbs to rub on the baby’s gums. After three days, they told her the baby might have ‘false teeth’: if these were not removed, a maggot could emerge and the baby would die. If the child had this disease, they informed her, it should not be taken to the hospital. They insisted that she visit the tooth extractor, Nahayo. She felt she had no choice but to obey. Evelyn’s story jibed with what I’d read about the supposed causes of Ebiino or ‘false-tooth disease’. It is believed to occur from an allergy, or a maggot inside the gum, or a meal of infected maize. A pregnant woman passing a false tooth discarded on the street can supposedly transfer it to her child. Some tooth extractors are said to cause the disease themselves by bewitching pregnant mothers. It’s easy money. If the causes of false-tooth disease seem dubious, its symptoms are all too familiar to anyone who has cared for an infant: vomiting, diarrhoea, failure to breastfeed, and swelling of the gums. Given Uganda’s abominable rate of infant mortality (54 deaths per 1,000 births, according to the Uganda Demographic Health Survey of 2011, compared with six per 1,000 in the US), it’s no wonder that the women here have little trust in Western medicine. The paediatric ward of the Mbarara hospital is a foreign place to them. It is understaffed, disorganised, and reeks of urine. 
Government healthcare is supposedly free, but you hear stories of patients being extorted. They must also provide their own food, and food in the city is expensive. Most importantly, even the best-trained doctors do not — or cannot — provide the familial comforts a woman such as Evelyn can find among her own people, in her own village. On that morning, Evelyn wrapped the sick baby in fabric and slung her from her back in the traditional style. She followed a trail to the town of Kinkoma, where the tooth extractor’s crumbling mud-brick, straw-roofed house sits on a steep slope. Nahayo took the baby in her arms, pressed open the mouth with her thumb and forefinger, and plunged a sharpened bicycle spoke into the gums. There was some blood. Not much. When Nahayo finished, she handed the mother two tiny white pearls, the buds of the infant’s incisors, which she was instructed to hide under a rock. Evelyn paid her 4,000 shillings, or about $US1.50. Afterwards, the infant refused to breastfeed, and Nahayo referred her to a herbalist in another town. By the time Evelyn arrived there, her baby seemed limp and unresponsive. The herbalist told her to make haste to the hospital, so Evelyn ran home to gather her things. On her way, her aunt pulled her aside and took the quiet infant in her arms. She told Evelyn that the child was dead. I have no idea what actually killed Evelyn’s daughter, nor do the health workers who have reviewed her case. The baby could have had anything from tuberculosis to the flu, or a genetic disease. It’s possible that the doctors at the Mbarara hospital could have saved the child’s life, or maybe not. The only thing that seems certain is that the tooth extraction didn’t help. I asked Evelyn what she would do if her next child becomes as ill as this one. She assured me that she will go to the hospital first, but if the symptoms persist, she said, she will return to Nahayo. ‘But why?’ I asked. ‘It didn’t save your child’s life this time.’ ‘I have no answer,’ she said. For the past several months, I’ve been researching traditional medicine around the world to understand the role it plays in global health. In some African countries up to 80 per cent of people consider traditional healers their primary caregiver, and the numbers are nearly as high in other parts of the developing world. It is fair to say that traditional knowledge, including the setting of broken bones and the cultivation of medicinal herbs, can be impressive. For instance, the people of Oman on the Arabian Peninsula have a strikingly simple and effective way of preventing blindness from the trachoma parasite: they stick their eyelashes away from their eyelids. And what, after all, is male circumcision but a traditional Egyptian practice that has become part of the medical mainstream in many Western communities? Yet ‘false tooth’ disease is something else: a strange and dangerous new hybrid of traditional practice and half-understood Western biomedical treatment. For the rest of my trip to Uganda, I puzzled over my meeting with Evelyn. When I returned to New York, I went to the library where I read Pagan Tribes of the Nilotic Sudan (1932) by Charles Seligman, the British physician-turned-anthropologist, and his wife Brenda, who first ventured up the Nile in 1909. 
On their travels, the Seligmans had observed children who strung beads across a wide gap in their lower row of teeth to keep their lips from folding inward. They had knocked out four incisors in a crude and painful procedure – although preventing disease was not their primary concern. It was a matter of attractiveness — a rite of passage that occurred between the ages of 10 and 12 during the dry season. The ritual had a mystical undercurrent: if a child died with their incisors still intact, their friends would extract them from the corpse to prevent misfortune from befalling the family. The operation was widespread in the Sudan by the time of the Seligmans’ journey, but the specifics varied from tribe to tribe. The Nuer people of South Sudan, for instance, believed that the teeth must be removed shortly after birth. ‘When the infant is from a few days to a month old,’ the Seligmans wrote, ‘the canines are dug out of the jaw with a piece of iron, while the lower incisors are removed with a fish hook when the boys are about eight years old.’ Over the next 40 years, the practice of teeth extraction showed no signs of declining in East Africa; in fact, it spread from the Sudan southward into Uganda, Kenya and Tanzania. At some point — and here the timeline becomes fuzzy — Ugandans came to believe that the teeth they were extracting were not, in fact, authentic. They called them Ebiino, which means false or nylon teeth. In 1966, a Danish dentist found that the Acholi people, who live near the Sudanese border, were regularly dislodging their infants’ canines with some terrible long-term effects. One 18-year-old boy had a split canine that looked like a pair of pincers in repose. One had a canine shaped like a peg. Another had one that looked like a tooth sprouting from another tooth. Teething, which marks a child’s first step towards independence, has long been a source of concern to healers both in Africa and beyond. In 1732 the British physician John Arbuthnot wrote that ‘Above one-tenth part of all children die in teething’. Teething was listed as the cause of 5,016 deaths in England and Wales in the year 1839, and until the beginning of the 20th century, the gum-lance was a staple item in every medical bag in the US and Europe. ‘A very common cause of diseases of the stomach and bowels, and also of convulsions in children, is to be found in the hardening or induration of the gums at the time of teething,’ reads the 1905 Home Encyclopedia of Health, ‘and this blunder of nature’s ought to be promptly remedied… by the use of the lancet.’ In other words, doctors would cut the gums of an infant down to its tiny teeth. Sometimes, they would repeat the procedure week after week. East African tooth extraction, if not a direct descendant of Western gum-lancing, is born of a similar sense of impotence in the face of bodily mysteries and sickness. Medicine in much of rural Africa today is comparable with that of 18th-century London. According to the latest figures from the World Health Organisation, one in seven children in Uganda dies before reaching the age of five, and parents rarely receive accurate diagnoses, let alone treatment, from the overtaxed healthcare system — whether the child has a harmless fever, a bout of diarrhoea, or a deadly infection. ‘False tooth disease’ is considered a new disease in Uganda: it is thought to have spread south when the army of President Idi Amin conquered and terrorised the country in the 1970s. 
Some locals have said it was because women started eating pork or chicken. Others simply that it came with the war. In the journal Culture, Medicine and Psychiatry in 2000, the Danish anthropologist Hanne Overgaard Mogensen called false teeth ‘an idiom of distress’. ‘There is with false teeth a focus on what you can see, something very concrete, an entity that has to be removed’. To Mogensen, ‘false-tooth disease’ is less an illness than a tangible means of combating the intractable challenges of a life of poverty and war. Once a rite of passage into childhood, extracting teeth has become a means to ‘cure’ sick babies of an imagined ‘disease’ not unlike the way parasites and other infections are seen to be cured by antibiotics and other treatments in Western medical clinics. Today, ‘false tooth’ extraction has become a source of deep concern for health workers and NGOs in East Africa. A paediatrician in Uganda told me about a mother arriving in his ward carrying a two-day-old infant with blood pooling in his mouth. The infant received a transfusion and was out the door in a few days, but others have died from infection or blood loss. The British charity Dentaid calls tooth extraction ‘infant oral mutilation’, a charged reference meant to evoke the same humanitarian outrage as female circumcision. One study in the Ntungamo district of western Uganda, funded by the US Agency for International Development, estimated that 45.2 per cent of children suffering from diarrhoea had their ‘false teeth’ extracted, and another study found that more than half of the households surveyed had had a case of childhood ‘false-tooth disease’ in the past five years. In addition to Sudan and Uganda, ‘false-tooth’ extraction is now practised in Somalia, Kenya, Ethiopia, and Tanzania, and as far away as Australia, among migrant Sudanese and Somali communities. Across many cultures, magical thinking and medicine have been closely intertwined, and it is not surprising that this is the case in present-day Africa. Here, the causes of one child’s death are often as murky as the reasons for the survival of another, and so the human mind seeks to fill in the gap. Indeed, we know that belief alone can be as powerful as the most sophisticated drugs. Many clinical trials have shown that patients taking a placebo do better than those taking nothing at all. Perhaps a dubious (even dangerous) treatment such as tooth extraction provides some relief — for the mother, if not for the child. ‘The removal of false teeth may seem brutal,’ the anthropologist Mogensen writes, ‘but it is also something that makes the mother keep an eye open as to the development of her child.’ It is one way for a mother to be clear of conscience, should her child die in her arms. There was a time when doctors in the West attributed the great epidemics, such as plague and cholera, to an unhealthy fog, a noxious gas rising up from the putrid waste-choked alleys of London or the marshes outside of Rome. During the cholera outbreak of 1854 in London, which killed more than 600 people, the physician John Snow traced the disease to a water pump in Broad Street in Soho that he believed to be contaminated by a nearby cesspit, but his theory was largely rejected at the time. 
The water appeared clear and clean to the naked eye, while the London air reeked of sewage. Over the next two decades, the miasma theory of disease was a catalyst for much-needed improvements in sanitation. As the prevalence of cholera and typhoid declined, the theory’s supporters felt vindicated. Yet the gains from eliminating miasma, like pulling a tooth, were hardly sufficient. It would take another 25 years before scientists demonstrated that tiny particles invisible to the naked eye — germs — were at the root of disease. Although magical thinking might be less pervasive in the sparkling hospital wards of Europe and the private clinics of New York than inside the healer’s hut, it is still significant. According to a survey on the British Medical Journal website in 2007, only about half of all clinical practices rest upon a solid evidence base. Plenty of doctors would rather prescribe a useless antibiotic for an earache than do nothing at all (and many parents pressure them to do so). Question them, and they will complain that their authority is being stripped away. They are second-guessed by Google, second-fiddle to the wonder-drug makers, and second-class in the eyes of the actuaries. More than anything, these doctors find themselves crushed under a sea of data, data that can conflict with their most cherished rituals and ideas. The annual checkup, argued Lasse T Krogsbøll and colleagues in the British Medical Journal last year, does no good and can even do harm. Ditto prostate screening, according to the final recommendations of the US preventive services task force in 2012. Bad cholesterol is not so bad after all, said researchers at Texas A&M University in 2011. Maurice Iwu, the esteemed Nigerian ethnopharmacologist and author of the Handbook of African Medicinal Plants (1993), has railed against Europeans who try to modernise African medicine by preserving its rational elements and dispensing with the magical ones. ‘The use of herbs in combination with the power of the human spirit, assistance from the gods, and other unseen forces constitutes a fundamental aspect of African ethnomedicine,’ he has written. Iwu and other proponents of traditional and alternative medicine often frame it as a cultural issue, suggesting that science, which they associate exclusively with the West, cannot possibly cast judgment outside that realm. Yet this prickly, defensive point of view serves only to marginalise further the positive components of traditional medicine, and make it more likely that the contributions of indigenous Africans to society will be all but forgotten. Embracing African medicine means not romanticising it. Science should be the great leveller: it doesn’t matter where brilliant ideas come from, so long as they work. Both Western doctors and African healers will always be tempted to be a little too optimistic about their own healing abilities, which is why it is so important that we use all of our best empirical tools to keep their delusions in check. After departing Evelyn’s house that morning, we crossed the valley in search of the tooth extractor Nahayo. We had to get out of the car and walk a few hundred metres up a trail to get to Nahayo’s house and, when we arrived, the door was closed and locked. No one was around. Nahayo’s son arrived from the fields and told us that this mother was at a funeral in a neighbouring village, and he would guide us there. 
His teeth appeared intact, but as he scrubbed his feet and put on a clean shirt, he showed us a series of circular scars running up his belly and across his back like perforations. It’s another traditional treatment for what the locals call Oburo or ‘millet disease’, possibly because the fat globules, which are cut out of the skin with the corner of a razor blade, resemble that grain. Each time the boy was cut, he went to the health clinic, received an injection of penicillin, and was better in a few days. One day, he said, he too wants to learn the secret of false-tooth and millet extraction. We finally met Nahayo in a cramped, windowless room in the next village. A bony-looking woman with leathery skin and stringy hair, she seemed to relish our presence. She said she had been extracting false teeth since 1990, when she watched her brother-in-law perform the procedure. A good tooth extractor, she explained, removes only the false (or baby) teeth, leaving the adult teeth intact. Not every child has Ebiino, but she sometimes performs as many as three extractions in a single day for children with fever, diarrhoea and vomiting. When I asked her about Evelyn’s daughter, she responded unequivocally. ‘I’ve never had complications from the treatment,’ she said. ‘It works all the time.’ Overconfidence is a sin that doctors of all stripes can fall into. It took 300 years before physicians in the West began to have their own doubts about the merits of gum-lancing. The practice was abandoned some time around the Seligmans’ early 20th-century journey up the Nile, but myths about teething persist worldwide to this day. A 2002 article in the British Dental Journal called teething a ‘wastebasket diagnosis’ that is proffered when no other cause can be found. A variety of studies have found that teething infants are irritable and, perhaps, a little feverish, but the data does not support a link with more serious conditions. Nevertheless, in 1990 a postal survey of 215 paediatricians in Florida found that 35 per cent of them believed there was a correlation between teething and diarrhoea. False-tooth disease is also a wastebasket diagnosis. Whatever ritualistic role tooth extraction once played in this culture, it is now only imposing a cost on the lives of children and their parents. I knew that, in order to eliminate it, a better, more effective healthcare system would need to rise up in its place. Some doctors and development experts working in Africa believe that this better system needs to start at the village level by educating the healers and traditional birth attendants whom people trust. Before we took leave of Nahayo, the health worker I travelled with asked the healer if she was willing to participate in a training programme to reduce child mortality in the area. They might teach Nahayo to recognise the symptoms of serious childhood diseases, such as tuberculosis and malaria, in order to refer children to the appropriate clinic. I was sceptical. Tooth extraction represented Nahayo’s livelihood and she wouldn’t be paid to refer patients, only to remove their teeth. But Nahayo nodded. She clearly took her role as a healer seriously, and we wanted to believe that reason would win out in the end. It just might take a while. | Brendan Borrell | https://aeon.co//essays/how-medicine-and-ritual-got-hopelessly-entangled-in-uganda | |
Food and drink | There’s nothing so atavistic and satisfying in the morning as sinking your hands into a sticky, bubbling live sourdough | The sort of bread I really like to make is the sort of bread that takes time. You have to start early in the morning, set your alarm even. Though if the thought is in your head you might find you wake naturally with the sunlight, feeling refreshed, squatting on time’s hoard. Just before seven, you pad down in your dressing gown and crack the lid on the big Tupperware pot of leaven. Treat yourself. Put your nose in and smell the sour, yeasty draught. Inspect the slow bubbles with approval. Then, positioning the big antique mixing bowl — pale blue inside, cream on the outside — on the electronic scales, ladle a great draught of leaven into it. It’s the consistency of thick batter, this leaven. Flour, salt, water will follow. With one hand, you start to mix — palm passing through the cool flour, fingertips deep in sticky leaven, which squidges back through the gaps between your fingers as you close your hand around it. Soon a wet glob of dough adheres to your hand. With your clean hand you smear a dollop of sunflower oil on the kitchen surface and, deftly as you can, you knead the dough on this, keeping it moving so it doesn’t stick. Somehow, you bring it to a rough ball — scraping it off your hands as you go – then you oil the mixing bowl, place the dough there and cover it with clingfilm. Then you make a cup of tea. In 10 minutes you’ll knead it, reshape it and return it to the bowl. Then another 10 minutes. Then another 10 minutes. Then half an hour. Then an hour, and so on through the day. It needs regular intermittent attention, like a young baby. By around teatime, the magic will have happened. It will be elastic, risen, alive, with a gentle sheen on the surface and maybe a big bubble just visible. When you knock it back, it will pop and squeak slightly. After its final rise, you’ll shape it and put it into a well-floured, linen-lined banneton to prove. Finally, if you haven’t screwed up — if it doesn’t stick to the linen, tear, deflate and break your heart — you’ll turn it out onto a peel or paddle dusted with semolina, slash a swift cut or two on the top, spritz it with a fine mist of water and whack it into a very hot oven. If you’re like me, you’ll then watch it through the oven window — anxious, like the parent of that young baby watching, through glass, as it undergoes an operation. Oven-spring is what you’re looking for: the yeast doing its thing, lifting and slightly scalloping the edges of the loaf at the bottom, puffing the top, easing open that slash you made — the yeast offering up a last great burst of energy in the rising warmth, never more alive than just before the heat kills it. No other form of cookery, to me, is as profoundly satisfying as the baking of sourdough bread. I know that I’m not alone. There are a lot of bread-heads about, and disproportionately, these bread-heads seem to be men. It’s men who get really excited about bread, its nuts and bolts, its existential appeal. We are going through an era of bread obsession fuelled by celebrity bakers: Paul Hollywood, Dan Lepard, Richard Bertinet, Andrew Whitley, the Fabulous Baker Brothers, the Hairy Bikers. 
On Twitter, the hashtags #breadporn and #realbread tell their own story: one of mouthwatering Instagrammed crusts, close-ups of crumb-structure, and intricate golden landscapes where the slashed tops of loaves have heaved open and caramelised in the oven’s heat. Those are mostly boys, too. It’s a corner of the culinary life that (other than showing off for dinner parties and with barbecues) really gets the boys going. So what’s the appeal? It’s not greed. Okay, that’s not true. It is greed. In my case, as in many others, a prime attraction is that I really, really like to eat bread. As a last meal, I would probably be happy with bread and butter — assuming the bread was an absolutely shit-hot sourdough, just sliced; or something beery and malted and tangy with rye, slathered with proper French butter with salt crystals in it (unsalted butter is an ingredient for cooking, not a foodstuff for eating). Lots of women — thanks to the body-fascism of the ambient patriarchal discourse, obviously — regard bread with suspicion. ‘Empty carbs,’ says my wife (when she’s not scoffing it). ‘Staff of life,’ say I. But there’s more to it than greed. It goes deeper. I love to cook all sorts of things, and it’s not as if I’m indifferent to the pleasures of eating those things either. Twenty-four-hour cooked shoulder of pork? Poached eggs with asparagus? Orecchiette pasta with sausage, chilli, garlic, cream, Parmesan and tenderstem broccoli? Sign me up to that shizzle. But bread? Bread’s different. There’s something atavistic about it. There’s the simplicity, to start with. Bread contains flour, water, yeast and salt. That’s it. Even yeast and salt are, you could argue, optional extras. Putting a loaf on the table in front of your family — this beautiful, aromatic, crusty, porous, individual-as-a-snowflake hunk of sustenance — is Neolithic stuff. Our hunting fathers weren’t faffing about with orecchiette and tenderstem broccoli. But they were making bread, and making it in exactly the same way. Bread has been a staple of the human diet for 12,000 years — leavened originally by the wild yeasts that battened onto dough left overnight to rest; latterly by the by-products of brewing and wine-making. The beginnings of civilisation as we know it, the formation of towns and cities, based around farming rather than foraging, went hand-in-hand with the cultivation of wheat and barley. Before we were people of the book, we were people of the loaf. Homo fecit panem, to adapt the phrase, et panis fecit hominem. For some, the Neolithic thing extends into the physicality of making it. I have one friend, a disciple of the artisan bakery guru Andrew Whitley, who positively glows when he describes how the bread he makes is salted by the sweat that drips off his forehead during the process. Whitley suggests air-kneading: essentially, stretching the dough between your two hands for 10 minutes at a time like some monstrous doughy harmonium, rather than squidging it across the worktop. But I am a lazy sort. A Lepard-ite. Dan Lepard’s The Handmade Loaf (2004), with its wonderful photographs and breads from all round the world, is the book I cook from more often and with more pleasure than any other. His big idea is that gluten will do its thing on its own, more or less, if you let it: rather than ask for brow-sweat, he often invites you to knead for just 10 seconds at a time, at intervals during the proving process. 
There are aspects to the art of baking that are almost purpose-designed to appeal to a geeky sort of male mind: not just the prehistoric farmer but the scientist. Professional bakers don’t talk about recipes: they talk about formulas, and those formulas are expressed in percentages. They talk about hydration, gluten, ash content, the temperature of the water that needs to be added and how the temperature of the dry ingredients is modified by ambient room temperature. They talk about retarding fermentation in the fridge. My sister (it’s not all boys), who trained as a professional baker in New York and now works in the kitchens at the London branch of Balthazar, knows a whole series of calculations that will allow her, say, to use plain instead of strong white flour to achieve the same results. And yet it would be a mistake to think of breadmaking as being like Walter White cooking a batch of crystal in the US drama Breaking Bad: a matter of cold numbers and mechanical exactitude. However precise you are, gram by gram, in working out your ingredients, there’s also an art to it. Yeast is alive. The best bakers undoubtedly have what David Foster Wallace, writing about tennis, called ‘touch’. You need to be able to ‘feel’ the dough — to know with a prod of the finger when it’s perfectly proved; to evenly flour a surface with just the right flick of the wrist; to handle a really wet dough, like ciabatta, without letting it tangle and stick. Then there’s the loveliness of the kit: the peels, the dough-scrapers, the lames and grignettes for slashing the dough before it goes into the oven, the bannetons, the couches (aka tea-towels) in whose couche ruches your baguettes, or ficelles, will prove straight and proud. Let us say that those who do not proudly arrange drill-bits and adjustable wrenches in our garden sheds might have a weakness for a nice couche. The geeky side of the phenomenon is exemplified, perhaps, by the American academic Steven Kaplan — author of the enormous book Good Bread Is Back (2006), which describes the 1990s return of artisanal breadmaking to Paris – and what makes for a good baguette – in more detail than you could possibly have thought necessary. But what, I think, links the Neolithic man to the geek is this, and it was there in my first sentence: making bread puts you in a relationship with time. It takes several days to get a sourdough starter or levain (I called mine ‘Bernard’) going from scratch. And once it’s giving off that superb boozy smell and bubbling away evilly, it can live forever. Nobody knows who has the oldest, but there are several claims made for starters that have been going for well over a century or even two. More directly, there’s the time it takes to make the individual loaf — the mixing, fermenting, kneading, proving, proving again, shaping, rising and final introduction to the oven. If you’re making sourdough and start at 7am, you can have bread just about ready — still cooling — in time to accompany supper. When you are in a relationship with time, you are in some sense meditating; the repetitive physical process of kneading (or, for the Lepard-ite, kneading and reshaping, kneading and reshaping) leaves your mind wonderfully uncluttered and attentive. You are working at the loaf’s pace, and you draw from it exactly the satisfaction that fishermen draw from fishing. Michael Chabon once wrote that a baseball game was not so much a sporting event as ‘a great slow contraption for getting you to pay attention to the cadence of a summer day’. 
Sourdough bread, for me, is just such a contraption. If you have time to bake a loaf of bread, you are the richest man in the world. | Sam Leith | https://aeon.co//essays/time-to-bake-a-manly-loaf-of-sourdough-bread | |
Architecture | Most architecture sets out to make us civil and efficient. Where are the homes that give us passion and pleasure? | Morningside, a late-Victorian suburb on the south side of Edinburgh, is an extremely good-looking place, possessing an architectural integrity rare in Britain today. Never threatened by wartime bombs, post-war developers, or the vicissitudes of the housing market, this suburb has a direct line to the ‘Victorian city’ — and its morality. Its moral character is there for anyone to see: in the bay windows watching over every inch of street, the church on every corner, and the sheer solidity of the stone. Morningside is propriety in built form. The suburb’s respectability was a huge attraction for me at the anxious moment of buying a flat. But after a few years of living there, that same respectability had become a bore. Then it became oppressive. The buildings began to represent a desiccated social life, defined by emotional reserve and obligation. Patrolled by curtain-twitching killjoys, Morningside seemed determined to put a stop to fun of any kind. In retrospect, Morningside itself probably had little to do with it. Moving there coincided with the moment at which my wife and I became fully grown adults. It was a structural problem. With two careers, two kids and no money, there was little time for pleasures, sex included. Of course, we bore it all stoically and, after a while, we learned together that this was simply what adult life was like, a mess of contradictory demands, with neither the time nor the space in which they might all be satisfied. We were hardly alone: every other couple we knew seemed to find themselves in the same situation. Still, our feelings were real enough, and being an academic, I set to reading about them. It’s odd how little architects have had to say on the subject of sex. If they’re routinely designing the buildings in which sex happens, then you might expect them to spend more time thinking about it. Buildings frame and house our sexual lives. They tell us where and when we can, and cannot, have sex, and with whom. To escape buildings for sex — to use a park, a beach, or the back seat of a car — is a transgression of one kind or another. Most of us keep sex indoors and out of sight. An important early find in my reading was Mating in Captivity (2006) by Esther Perel, the New York-based sex therapist. According to Perel, sex wastes time, needs space, and (most intriguingly) is inhibited by too much intimacy. All these things have implications for architecture, which in the West has been coloured by the language of efficiency for at least a century. By contrast, in Perel’s terms, sex was profligacy and decadence. She also remarked that ‘sexual desire and good citizenship don’t play by the same rules’. This struck a chord. I had long been bothered by the architectural concern for civility. The sudden proliferation in the 2000s of National Lottery-funded public spaces in the UK seemed to be rooted in a longing to return to Edwardian times, with all the attendant anxieties about sex and class. This longing was abundantly clear in Foster and Partners’ redevelopment of Trafalgar Square (2003): a magnificent architectural project, but one that limited human behaviour to the polite promenade. Perel’s understanding of the limits of civility, from a sexual point of view, helped me to form a powerful critique of architecture. 
In sum, architecture was principally about order; sex was not. In pursuit of the relationship between sex and architecture, I read the classic theories of sex: all of Freud (alert to what he had to say about bourgeois Vienna), Havelock Ellis, Auguste Forel, Richard von Krafft-Ebing, Alfred Kinsey, and Michel Foucault. I especially liked the sexual libertarians Herbert Marcuse and Wilhelm Reich, both of whom revised Marx to envision an erotically liberated society free from bourgeois constraints. For Marcuse, who was born in Berlin at the turn of the 20th century, the revolution’s barricades would be manned by sexual libertines rather than petrol bomb-tossing students. I idly wondered about starting a local Marcusian sexual revolution. If architecture is a physical representation of the society that makes it, then, in a Western context such as ours, it is bound to be designed to keep the lid on sex. Freud’s repressive hypothesis said it all: he thought the libido was an irresistible, almost hydraulic force that must secure an outlet to avert catastrophe. Freud’s theory might have been crude, but I found it persuasive. It was what I felt a lot of the time, just like countless middle-aged men before me who have suddenly realised they have become — without ever meaning to be — civilised. That sense of sexuality as a biological force was also, I knew from many conversations over the years, what psychotherapist friends routinely dealt with in their consulting rooms. Sex wasn’t experienced as an abstraction, but as a biological drive that made patients fall in love with colleagues, buy sports cars and wear age-inappropriate clothes: the usual midlife crimes. Had there ever been an architecture that encouraged a more open sexuality? Modernism had a decent stab at sexual utopia. An early modernist icon by Rudolph Schindler, the Kings Road house in Hollywood (1922), is a free-flowing inside-outside building with open sleeping areas, designed for two young couples. There is also the Isokon block of flats in London (1934) — a part-Japanese pseudo-commune, designed by Wells Coates to be both exotic and erotic. And Le Corbusier’s huge Unité d’Habitation complex in Marseilles (1952) was designed around the display of the body — its pools and terraces meant for inhabitants to show off. But the eternal erotic paradise is Brazil. Brazil’s modern architecture was influenced by sex like nowhere else. Oscar Niemeyer’s work routinely invoked the female body. His signature curves on the Casa das Canoas (1953) in the southern Rio hills are meant as a corporeal metaphor. I interviewed Niemeyer in 2001 in Rio de Janeiro for a book I was writing about Brazil, and I was struck by his almost manic libido (he was 93 at the time). Above his desk hung a photograph of two spectacularly endowed naked women on the beach. The conversation returned, time and again, to their aesthetic qualities. Thinking of my book, I tried, helplessly, to get him back on track. It was a hopeless task. Life, as far as he was concerned, was women, the beach, and beer. And why not? He lived and worked in Copacabana, the iconic beachfront suburb of Rio where body worship is a way of life. However, Niemeyer’s modernist experiments, and those of his contemporaries, did not change the world. In practice, their buildings had too much invested in bourgeois sexual mores. 
More promising sexual utopias came in the form of a variety of communes, such as the one conceived by the behavioural psychologist B F Skinner. His novel Walden Two (1948) might be a wretched book — stilted and programmatic — but it is fascinating for the detail with which it imagines collective living, not least for what it has to say about sex. And it’s interesting for two other reasons — it was the inspiration for Twin Oaks, a real-life community established in 1967 in Virginia, and for Walden 7, a striking avant-garde apartment complex designed in 1975 by Ricardo Bofill on the north-west periphery of Barcelona. Skinner’s novel describes a commune led by a grumpy sociopath called Frazier, who gets a visit from an academic named Professor Burris (a cipher for Skinner himself), who is sceptical to begin with but is eventually persuaded of the commune’s merits. Walden Two is fascinating on the subject of sex: on the one hand, the commune proposes a radical openness, whereby sex is a part of everyday life, and positively encouraged for teenagers, with young marriage strongly advocated. On the other hand, Skinner clearly expected such tolerance to result in the disappearance of the libido, to be replaced instead by other interests. So we have Frazier describing with approval the sexless marriage of adults, where couples live happily in separate rooms and direct what is left of their libidos to higher pursuits. For example, almost everyone at Walden, Frazier included, is a virtuoso pianist. Meanwhile, sex at Walden is juvenile and procreative; once children appear, it has no place. The middle-aged Frazier lives a quiet bachelor life, untroubled by desire. Walden Two is an imaginative experiment in the great Western utopian tradition. However, it doesn’t say a great deal about the physical form of the settlement; if anything, it is downplayed. In one of the final scenes, Frazier reluctantly shows his room to his visitor. It’s a scruffy, disordered place — his unease indicates the room’s essential lack of importance. There’s nothing to see, nor, implicitly, should there be. To be concerned with appearances is superficial. Walden Two is as anti-architecture as it is anti-sex. One of the most architecturally striking communes in real life was Drop City, established in southern Colorado in 1965. It consisted of eight geodesic domes built from metal salvaged from the roofs of wrecked cars, nailed to wooden frames. The design was inspired by the architect Buckminster Fuller — Drop City’s founders had seen him speak the year before at the University of Colorado. Impressed, they adopted the dome form (Fuller returned the compliment shortly afterwards, sending the Droppers a cheque for $500, then a significant sum). For a brief time, culminating in the commune’s Joy festival in June 1967, Drop City featured on a metaphorical counter-cultural world map. It tried to model a new form of society with a radical built form; meanwhile, most other intentional communities lacked clear architectural form, being mainly pragmatic adaptations of existing buildings. Drop City implied that the form of buildings conditioned behaviour and vice versa. For that reason, it attracted attention from the architectural press. 
The domes seemed simultaneously to be of the far future and the distant past, while responsive to a present condition defined by plenty of real, as well as imagined, crises. There was something indubitably sexy about all this. In the fearful bomb culture of the late 1960s, Drop City proposed living in the immediate present. It was reminiscent of the English experience of the Second World War, its unprecedented relaxation of sexual mores — the end of the world was at hand, and one simply took opportunities as and when they arose. The ‘Droppers’, as Drop City’s founders were known, were well aware of this: Gene ‘Curly’ Bernofsky spoke of a dome that would house a psychedelic experience, inducing a state of ‘constant orgasm’. Peter ‘Rabbit’ Douthit maintained a harem in his dome. He was the one Dropper with a talent for publicity, and one of his first acts was a film depicting himself being fellated by another resident. When it was displayed in a Santa Fe art gallery just over the state border in New Mexico, it brought Drop City instant notoriety, and a reputation for sexual libertarianism. It was, he told me, the sexiest building he had ever stayed in. It had to do with the vaguely transgressive quality of camping in London But, for the most part, the Droppers were mild-mannered, earnest folk with little interest in excess, libidinal or otherwise. The great polymath Jacob Bronowski was an unlikely admirer, praising Drop City for its resourcefulness and work ethic in the BBC documentary The Ascent of Man (1973). One Dropper, when asked about the commune’s sexual ethics, spoke of a ‘straight middle-class deal’ based on monogamy, and this view is corroborated elsewhere. Drop City had the look and reputation of an erotic paradise, but the reality was duller, at least in the early days. After the Joy festival it went to hell. A predatory older man moved in, attracted by easy access to teenage girls; bikers held regular parties; and, one by one, the founders moved out. By 1969, the party, to all intents and purposes, was over. However, Drop City has remained an icon. Alex Hartley is one British artist who was drawn to the commune and its myth, and in 2011 he set about reconstructing one of the Drop City domes for an exhibition at the Victoria Miro gallery in London (that he managed to build anything at all was some achievement, given the almost total lack of reliable documentation of the original). Hartley installed the dome in the gallery’s back yard, and took up residence, three days a week, for the duration of the show. It was, he told me, the sexiest building he had ever stayed in. It had to do with the vaguely transgressive quality of camping in London, I think. I didn’t quite get it myself. When the show closed, he gave the dome to the Occupy London movement, then in the Finsbury Square iteration of their protest. It lasted a day or two as a semi-public space, a sort of impromptu town hall. Then, over the ensuing months, it was subject to some remarkable abuse. When the dome returned to Devon, where Hartley had made it, he found to his horror that it had become coated in what he called ‘vomit lucky dip’ — the contents of an exploded bean bag mixed with faeces, used condoms, syringes, and sex toys. So his gentle, English re-creation of Drop City climaxed in the same libidinal chaos as the original. 
Sex — ‘the evil black snake’ as the Dropper Peggy Kagel tellingly described it — seemed to threaten just about every other form of intentional community I read about, from the dom kommuna of the early USSR to the kibbutzim of Israel. The communes that survived best seemed to be those — such as Twin Oaks, Virginia — whose members accepted, as far as I could tell, the sublimation of sexual energy into labour. We live such long and varied sexual lives, we deserve architecture that enables and supports us. Yet in the context of our present predicament, when housing is such a major cost, we’re highly risk-averse. We treat housing as a capital investment, rather than as something with which we can experiment. Even if we don’t own property, we spend so much money on rent that we don’t take chances. I wish that we did. For me, the ideal would be some form of co-housing, the best-known example being Sættedammen in Denmark, established in 1972 (with the founding creed: ‘Children should have 100 parents’). It occupies the right space between the wilder forms of intentional community, and market-dominated individualism. It doesn’t explicitly challenge sexual norms. However, by providing shared facilities (childcare, gyms, swimming pools, saunas, rooms for parties), it provides time and space to play, and addresses the deficits that Esther Perel identified as inhibiting our sexual lives (sex loves to waste time, remember). But I’d add some sort of therapeutic role, too. If we were to live more communally, we would need help to resolve inevitable interpersonal conflicts. The odd thing is that we already strongly value co-housing, albeit in an occasional and time-limited form. University students live like this, and we do the same thing on holiday; both forms seem to provide a better emotional environment in which to explore and develop primary relationships — including sexual ones. If we can accept such communal living for some of our lives, why not the rest of the time? Then we might have an architecture that actually supports, rather than impedes, our sexual lives. | Richard J Williams | https://aeon.co//essays/can-architecture-improve-our-sex-lives | |
Ethics | Vegetarian and vegan ethics are not a recent fad in Asia, but a longstanding human aspiration and virtue | It’s not so long ago that George Orwell, in The Road to Wigan Pier (1937), called vegetarianism an affront to ‘decent people’ and the obsession of the ‘food crank… out of touch with common humanity’. It was, he thought, a symptom of the hijacking of the socialist cause by ‘every fruit-juice drinker, nudist, sandal-wearer, sex-maniac, Quaker, ‘Nature Cure’ quack, pacifist, and feminist in England’. Of course, times have changed, and even if not a majority view, being a vegetarian in the West is no longer a fringe belief. As my friends sometimes say: ‘At least you’re not a vegan’. Orwell’s opinions would have struck the common humanity of South Asia — where hundreds of millions of perfectly normal people were (and are) strict vegetarians — as absurd. Vegan and vegetarian ethics may still be considered highly idealistic in Western cultures, but in many parts of Asia, they are but recent manifestations of a long-standing human quest: to lessen the suffering of animals and express our power through self-restraint rather than self-indulgence. Fifteen hundred years ago, the great Chinese emperor Wu, of the southern Liang dynasty, made philosophical arguments about the immorality of exploiting animals for human pleasure, urging temperance and clemency towards them. He was inspired in turn by the Mauryan emperor Ashoka, who had a change of heart after ravaging the east-Indian republic of Kalinga in 260BC. Horrified by the death toll exacted by his own army and tormented by the memory of Kalinga, he accepted Buddhism, abjured violence, abolished the slave trade (although not slavery) and dedicated his reign to overhauling cruel customs. Ashoka’s laws, the first of their kind, extended the state’s protections to animals. They banned blood sports and outlawed the ritual sacrifice of animals. ‘Here (in my domain) no living beings are to be slaughtered or offered in sacrifice,’ declared one of his major edicts, inscribed on a rock in Gujarat. It went on to explain that, where once ‘hundreds of thousands of animals were killed every day’ in Ashoka’s own kitchens, now only ‘two peacocks and a deer are killed, and the deer not always’. ‘And in time,’ it promised, ‘not even these three creatures will be killed.’ Unlike Ashoka, Emperor Wu had no spectacular feats of bloodshed for which to atone: his reign (502-549 CE) was notably stable and prosperous. Rather, he was inspired by Buddhism, emanating from India, which was spreading rapidly through China. Although it was becoming institutionalised as a religion, its essence was a radical charter for social reform and spiritual renewal. The Buddha himself had rejected God, condemned religion as an exploitative racket and urged his followers to honour the living. Influential monks in China now began expanding this injunction to include animal life. Unlike the Hindu clergy, who restricted public access to liturgical ideas, Buddhists took their convictions to the laity, and there are captivating accounts of ordinary Chinese people, spurred on by the sermons of the monk Zhiwen, freeing their animals and burning their fishing nets. Wu convened conferences and wrote and circulated essays. He invited criticism from ministers and monks. And then, at the peak of his power, he embraced Buddhism, becoming the first ruler of his realm to forsake flesh in his diet. 
He banned capital punishment and urged his subjects to renounce meat, to give up hunting and fishing and butchery, and to adopt compassion and abstemiousness — not as a rejection of human supremacy, but as its highest affirmation. Europeans read with amazement about hospitals given over entirely to the care of animals In sixth-century China, Wu’s imperial kitchen is said to have created seitan, known in the West today as ‘mock meat’. ‘Mock’ animals started to be used in ceremonial sacrifices. With the ascent of the Sung dynasty 400 years later, seitan became, according to H T Huang, the favoured food of the period’s literati. It was even extolled in verse by the poet Wang Yen: ‘It has the colour of fermented milk/ And a flavour superior to chicken or pork.’ Wu and Ashoka did not immediately realise their ambition to eliminate the suffering of animals, yet they helped to make the idea of vegetarianism itself respectable and indeed conventional, at least in much of India. When Mohammed Akbar, the mightiest emperor to rule India since Ashoka, said plaintively in the 16th Century that he wished meat eaters ‘would have satisfied their hunger with my flesh, sparing other living beings’, he was honouring that very longing. Europe, of course, was a different matter. Pythagoreans may have practised a meat-free diet in ancient times, but Christianity did not promote the virtues of refraining from eating animals (except as a form of monastic ascetic practice). Convinced that meat was vital for good health, Europeans travelling to India from the 16th century on were astonished to find a highly sophisticated civilisation with an ethic of non-violence towards animals. Some discovered, to their amazement, hospitals given over entirely to the care of animals. Ralph Fitch, an English merchant who travelled through the subcontinent in the 16th century, recorded that Indians ‘will kill nothing’. Hearing such travellers’ tales, Voltaire praised Indians as ‘lovers and arbiters of peace’, enthusing about their treatment of animals and bringing eastern culture into the mainstream of intellectual debate at home. In his epistolary novel, The Letters of Amabed, Voltaire mocked the incongruities of Western high culture through the eyes of a young Indian visitor to court. ‘The dining hall was clean, grand and tidy… gaiety and wit animated the guests’, the visitor observes, only to find that ‘in the kitchens blood and grease were flowing. Skins of quadrupeds, feathers of birds and their entrails piled up pell-mell, oppressing the heart, and spreading contagion’. Not all were impressed with vegetarian ethics, however. In the 17th century, the German Jesuit scholar Athanasius Kircher had launched an attack on Indians (and Chinese and Japanese) for eating, as he phrased it, ‘nothing from a living animal’, a practice he saw as un-Christian. He blamed their ‘abominable’ behaviour on a ‘very sinful Brahmin imbued with Pythagoreanism’ — the Buddha presumably. ‘The moral progress of humanity, Leo Tolstoy wrote in 1892, ‘is always slow’ and Kircher’s intolerance seems to eclipse Voltaire’s openness in the food debates of our own, secular age. If the skin and entrails in the royal kitchen of Voltaire’s fancy are capable of disgusting us, what of factory farms? I am puzzled by the lack of outrage. 
The occasional book or documentary film; pleas by writers such as the Anglican priest Andrew Linzey and the American journalist Matthew Scully; scattered protests by activists: all of these vanish before the colossal budgets spent to make the slaughter of animals on an industrial scale appear perfectly acceptable. In 2009, a Gallup poll found that 96 per cent of Americans believe ‘that animals deserve at least some protection from harm and exploitation’. But it’s hard to see this as anything but paradoxical when the dispensability of animal life is so central to the American food system. A theoretical attachment to the idea of animal rights does not mean that we recognise the essential point: that animals have the desire to live and we, being superior to them, have the agency to recognise it and the potential to honour it. It is one thing to proclaim yourself indifferent to animal rights altogether. Such a person does not pretend to care. I am much less sympathetic to those who attempt to reconcile animal rights and meat-eating by proclaiming themselves ethical, organic carnivores — who only eat ‘humanely’ slaughtered animals. Especially repellent is the way meat-eating is glamourised among food writers and celebrity cooks. When the author BR Myers surveyed the writing produced by some of our most revered gourmets he revealed a remarkable lack of concern about the brutal reality of animal slaughter. In her memoir Blood, Bones and Butter (2011), the chef Gabrielle Hamilton wrote: ‘It’s quite something to go bare-handed up through an animal’s ass and dislodge its warm guts.’ The food critic Jeffrey Steingarten vividly detailed the 20 long minutes it took four men to kill a pig, while British chef Fergus Henderson revels in eating the whole pig, Nose to Tail. It’s not difficult to find this almost lascivious approach to handling fresh meat and animal organs: it is ubiquitous in cookery shows on television, the recipes and columns in Sunday newspapers, and in the advertisements that reach us through every medium. Against a backdrop of such carnage, tossing a haughty epithet at McDonald’s or eating only the flesh of ‘ethically raised’ animals barely registers as tokenism Some food writers even argue that it would be irresponsible to allow livestock varieties to die out, and this would be the consequence of a vegetarian revolution in eating habits. This is cruelty masquerading as concern. It’s an argument Matthew Scully heard frequently as he travelled through America to research his book Dominion: The Power of Man, the Suffering of Animals, and the Call to Mercy (2002). ‘The worst thing you can do in North Carolina,’ a farmer told Scully when asked why he wouldn’t free his pigs, ‘is leave animals in the cold.’ It seems to me that the worst thing you can do to pigs is to raise them for slaughter and keep them in intensive factory farms along the way. More than 53 billion land animals will be slaughtered this year to feed our appetites. Against a backdrop of such carnage, tossing a haughty epithet at McDonald’s or eating only the flesh of ‘ethically raised’ animals barely registers as tokenism. Vegans have decided that the only acceptable response is to give up all animal products. We can ridicule them, but for all the apparent severity of their philosophy, they give expression to our finest instincts. The truth is, nobody now needs to eat meat, wear fur or use animal products in order to survive. We treat animals the way we do because we can. 
In the summer of 2004, Americans across the US observed the centennial of the late great Yiddish author Isaac Bashevis Singer. Singer’s writing, rooted firmly in the Hasidic traditions of pre-war Poland, transcended its origins and brought, as the Swedish Academy said when it awarded him the Nobel Prize in Literature in 1978, ‘universal human conditions to life’. But in event after event in 2004, Americans succeeded in applauding Singer while neglecting the central concern of his life, the only subject that ever became his ‘religion’: our treatment of animals and the self-deceptions that sustain it. No other writer or activist in the 19th or 20th century, not even Gandhi or Tolstoy, was as deeply affected by the condition of animals as Singer. It’s a theme that permeates his entire oeuvre: almost every one of his major characters is either a vegetarian or is about to become one. Take Herman Broder, the Holocaust survivor and irrepressible roué in the novel Enemies, a Love Story (1966). When offered a rooster, he refuses: ‘For some time now he had been thinking of becoming a vegetarian.’ Singer wryly draws attention to the irony of our most intimate rituals, in which human suffering and emancipation are memorialised over the flesh of tortured animals. ‘A fish from the Hudson river or some lake,’ he writes, ‘had paid with its life so that Herman should be reminded of the miracles of the exodus from Egypt. A chicken had donated its neck to the commemoration of the Passover sacrifice.’ One commemorative event, in Pennsylvania, did remind participants of ‘Singer’s strict vegetarian diet’, but the philosophical underpinnings of his choice were left unexamined. It was liable to be seen as a simple dietary preference, a personal quirk. But as Janet Hadda reminds us in her biography Isaac Bashevis Singer: A Life (1997), ‘his determination not to eat flesh was connected to post-Holocaust feelings of revulsion against human cruelty, misuse of power, and disregard for life.’ She continues: ‘During the crucial years of the Holocaust, Bashevis came to believe that, by eating meat, he was condoning the killing of innocent living things.’ Singer himself explained that seeing ‘how little attention people pay to animals, and how easily they make peace with man being allowed to do with animals whatever he wants’ brought him ‘misery’. It ‘exemplified’, in his eyes, ‘the most extreme racist theories, the principle that might is right’. This is probably why in ‘The Letter Writer’, a short story published in 1968, Singer described the reality of animals as ‘an eternal Treblinka’. Such a comparison is unthinkable to many. At the time of his centennial in 2004, Singer was damned by Allan Nadler, director of Jewish Studies at Drew University, for eating blintzes and dreaming of ‘Polish whores and Yiddish devils’ while others were ‘fighting Nazis with the partisans in the Lithuanian woods’. But Singer, whose mother and younger brother perished in Soviet labour camps, was not belittling the Holocaust: he was invoking it to cast light on humanity’s capacity to commit limitless atrocities against the powerless, without so much as wrinkling its own exalted self-image. The extraordinary technological leaps of our own time have confirmed human dominance over the natural world as never before. But, as Isaac Bashevis Singer taught, this is at best a fragile insurance against unreason, and technological progress has never prevented us from plunging abruptly into chaos and carnage. 
Humans have always shown immense capacity for destructiveness, but also for reform and restraint. We should strive to temper our dominion with mercy and compassion. To insist on this, as some in our midst do, is not to hate humanity. It is to urge a more empathetic manifestation of our authority in hope that, by becoming responsive to the suffering of those over whom we have the power of life and death, we may escape our primordial propensity for violence altogether. | Kapil Komireddi | https://aeon.co//essays/mercy-toward-animals-runs-deep-in-asian-cultural-traditions | |
Astronomy | The Universe seems to be bursting with planets, and this is profoundly important — but not in the way we might expect | The rites of spring are many and varied. As a child in rural England, I was once given the chore of finding and rearranging the bulbs of a long-unattended flowerbed. I’m not sure if spring was a wise time to do this from a horticultural point of view. It seemed to me that, having survived the rigours of winter, these hardy little tusks of plant matter probably wanted to wait undisturbed for the Sun’s warmth to penetrate the blanket of earth above them. But such was the issued command, and so I began to brush away last year’s dead leaves and timidly poke about in the rich alluvial soil. To my small self, this patch of ground seemed huge and vacant. That is, until I happened on a tiny unyielding clump, barely distinguishable from the grainy clods around it. I’d discovered my first precious bulb. It produced a momentary surge of optimism, this bulb. I had searched only a short time and already here was one of my quarry. A somewhat grim afternoon activity was transformed into a promising expedition. There might be a great population of these slippery living forms hiding in the soil. I imagined my grubby little fingers feeling for them, finding them. I imagined how my small trowel would lever up proud handfuls. Great riches and accolades awaited me. Except that is not what happened. There were no other bulbs, no other signs of life. As I dug, poked and prodded with increasingly sore hands, the already intimidating spread of dark earth transformed itself into a barren expanse of cosmic proportions. How could this be? If I had found one juicy bulb merely minutes into my search, surely this hinted at a wealth of others? The incident with the bulbs stung my pride. Its puzzling aftertaste didn’t fade until long into summer, when all manner of new things took over my life. But I have had many occasions to remember it in adulthood because it speaks to one of the most fascinating, challenging and frustrating questions that astrobiologists such as myself confront every day in our quest to find life elsewhere in the Universe. There is a commonality between the puzzle of a lonesome bulb in a mass of soil and the puzzle of whether or not we’re alone in the cosmos. Until quite recently we knew of only one life-harbouring planet in a single planetary system — adrift within a universe of more than a billion trillion stars. Our home was that single speck, the lone bulb in a great cosmic garden, and it raised essentially the same question: is this all? Or are there more? We might imagine that our very existence in this vast universe makes the existence of other life a foregone conclusion, or at least very likely. But that conceit is profoundly misleading; it’s a victim of one of the most challenging aspects of statistical inference and probability. It’s an example of post-hoc analysis or a posteriori probability — that is, the evaluation of the significance of events that have already happened. This is a treacherous terrain, a place where statisticians know to tread carefully, because rare and common events are indistinguishable once they’ve occurred. And this caution is especially important for those phenomena for which we have few or no precedents. Just because life did arise on earth says nothing, in itself, about how likely it is to arise elsewhere. As I dug in the garden that spring, I knew that other bulbs existed elsewhere in the world. 
But, until recently, we did not know for sure that other planetary systems existed at all, or what their abundance was, or what the potential for ‘habitable’ planets was. We certainly had our suspicions, some scientific, and some not, but no firm evidence one way or the other. This impasse has now been broken, in a most dramatic fashion, and the implications are extraordinary, for reasons you might not expect. Since the ancient Greek atomists and the upending of a solipsistic worldview by Copernicus, we’ve toyed with the notion of a plurality of worlds, the idea that the Universe is brimming with planets and the stuff that might be on them. But, apart from a few false starts, astronomy was hard-pressed to detect other worlds around even the nearest stars to our Sun. The surrounding cosmos has, when it comes to planets, been a great mound of barren soil as unprepossessing as my childhood garden. This began to change in the early 1990s. First came the confirmation of planet-sized objects orbiting a distant pulsar — the fast-spinning, ultra-dense remains of a stellar core, left behind from an ancient supernova. Pulsars tend to beam out their pulses of radiation into the Universe with astonishing regularity, but this one’s pulses exhibited small variations, tiny, time-stamped changes that revealed the gravitational tugs of planetary bodies. These are likely the zombie remnants of matter left after the star’s explosive death millennia ago, now re-coalesced into entire worlds. Unexpected and strange, these were the first hints of what was to come. Just a few years later, in 1995, a giant planet was detected orbiting a regular Sun-like star known as 51 Pegasi. This measurement was a feat of spectroscopic detective work, which registered the fine Doppler shifts induced on the star by the gravity of an unseen world. These new planets, and the ones to immediately follow, were utterly alien. The half Jupiter-sized companion to 51 Pegasi orbits once every 4.2 Earth-days, well within the orbit of Mercury in our own system. The next two planets discovered around other stars, 70 Virginis b and 16 Cygni Bb, further disturbed any tidy theory about the structure of solar systems. They have highly elliptical orbits that swing them to and from their parent stars with wild abandon. It was too soon to construct statistics about the abundance of these ‘exo’ planets, but the writing was on the wall: the cosmos makes worlds with an extraordinary diversity. Since that time, the number of planets known to us has swelled beyond the dozens, the hundreds, and now teeters above a couple of thousand — maybe more, depending on how confident we are in our (largely indirect) methods of detection. These are the subtle Doppler changes in the spectral hues we see as a star tangos about its system’s fulcrum point, or the minuscule dip of light that occurs when a planet transits the unresolved face of its star, dimming the light we see by perhaps as little as 0.003 per cent. Or even the hour-by-hour changes of a distantly glowing stellar backdrop, as the gravitational distortion of a foreground star and its planets focuses and brightens the surrounding field of view. These ridiculously difficult measurements have become state-of-the-art components in the ultra-sophisticated technological machinery of planet hunting. Indeed, the greatest barriers to expanding our catalogue of other worlds have less to do with our devices and more to do with the fact that stellar astrophysics is a messy business. 
The upwelling and down-welling of plasma on a star’s visible surface produces measurement errors and statistical noise that can obscure the delicate effects of surrounding planets. Especially those like Earth, whose own presence induces a mere 9cm per second motion on our Sun. Finding planets can be like looking for the gentle swaying of a wheatfield brought on by the beating of a bird’s wings, while all around a hurricane blows. Despite these limitations, we now know that the diversity of planet sizes and orbital configurations is enormous. For example, the most numerous types of planet seem to be those somewhere between the size of Earth and a few times larger. Since we’ve yet to peer very far into the pool of much smaller objects, this statistic holds true for now. There is a kind of Copernican surprise lurking at the heart of this statistic: it means that the most numerous type of planet is not represented in our own solar system. There could be about 23 billion stars in our Milky Way galaxy, each harbouring Earth-sized planets with life-friendly temperatures Many exoplanets also follow orbits that are far more elliptical than any followed by the major planets around our Sun. More puzzling still, the most frequent type of configuration, the one that has earned the moniker of ‘the default mode of planetary formation’, is that of closely packed worlds, on orbits that take mere days or weeks to loop around their stars. These compressed versions of our own system seem, for now, to be far more normal than our own. But if we’re not normal, what are we? That’s a question we can’t answer yet, because our census of stars and planets is still woefully incomplete. Nonetheless, there is something we can do with this wealth of data. We can make a statistical extrapolation from the worlds we’ve found to those we’ve yet to see. Although these different techniques of planet detection come with different biases and systematic effects, we’re sorting through the issues, and a consistent picture is starting to emerge, a picture of extraordinary cosmic wealth. Just how many planets are there? This is such an active field of research that it can be hard to know which study to quote, so I’ll simply take one of the more recent and representative examples, whose results are pegged to NASA’s Kepler mission, a telescope that has patiently monitored some 140,000 stars in a distant patch of the Milky Way, looking for those tiny dips of light as mosquito-like planets nip across the face of their parent stars. In this study, by the astronomers Courtney Dressing and David Charbonneau at Harvard, published in The Astrophysical Journal in February 2013, the focus is on stars smaller than our Sun. Not only is it easier to spot transiting planets around little stars, little stars are far more numerous than big stars. These ones, called M-dwarfs and K-dwarfs, are anywhere from half to one-10th the mass of the Sun, and there are 12 times more of them than of Sun-sized stars in our galactic neighbourhood. In other words, they represent 75 per cent of all stars in our galaxy, an excellent group upon which to work up population statistics. From the data we have now, it looks like planets ranging from half of Earth’s width to four times larger appear to be circulating around 90 per cent of these small suns, with orbits of 50 Earth-days or fewer. Bigger planets, and planets on larger orbits — well, there are going to be plenty of those, too. 
The estimated number of Earth-sized planets orbiting around these stars at a distance that allows for the possibility of a temperate surface (capable of holding that biological elixir — liquid water) varies from study to study, but it’s coming in at about one per seven systems. Given the concentration of these small stars locally, there is a 95 per cent probability that one of these potentially temperate worlds sits within a mere 16 light years of Earth. There could be about 23 billion stars in our Milky Way galaxy, each harbouring Earth-sized planets with life-friendly temperatures on their surfaces. Twenty-three billion, give or take a few. This estimate tallies with others. Some studies produce numbers of Earth-sized planets closer to 17 billion, all in different orbital configurations. Others suggest a figure as low as 6 billion or so, but these are just the planets close to Earth in size. If we extend our reach to slightly larger worlds, the places now known as ‘super-Earths’, we’re back into the tens of billions. No matter how you slice the cosmic cake, you end up with a vast wedge of planets that we’d be happy to go and study, perhaps even land on and cautiously tiptoe about. These are mind-blowingly huge numbers. The cosmos makes temperate planets aplenty. It’s quite striking to see how past thinkers seldom separated the existence of a planet from the existence of life on it So what does this mean for us, and our puzzlement over our place in the Universe? It certainly tells us that Earth is likely one of many other planets in the cosmos with at least some attributes in common. But it feels like it should mean more. Doesn’t it mean something for our questions about whether there’s other life out there somewhere? It does, but not in the way we might first assume. From the time of the atomists of ancient Greece, the idea of a plenitude, even infinitude, of other places in nature, other worlds, has had considerable allure. More than 2,000 years ago, the philosopher Democritus is said to have written: ‘There are countless worlds of different sizes. In some of them, neither the sun nor the moon are present; in others, they are larger than ours, still others have more than one of them… some prosper, others are in decline… Some worlds have no animal or vegetable life, nor any water at all.’ Here, the ‘worlds’ are really universes — conceptual, metaphorical constructs of other natural places. But ideas like these inspired the later atomists such as Epicurus and his followers to embrace more tangible models of plurality: notions of countless places more akin to real planets and Earth-like environments, all existing within an infinite space. Not everyone was on board, naturally. The idea that new Earths existed elsewhere was fiercely opposed by Aristotle, the great champion of geo-centrism. But when Copernicus issued his bold cosmological vision of a de-centralised Earth in the mid-16th century, a new generation began to wonder if the Universe might be filled with planets like ours. In Rome the priest, philosopher and scientist Giordano Bruno was burned at the stake for heresy in 1600. His heretical actions included his vigorous promotion of the idea that the stars were suns with their own planets. A bit later, in the influential book Entretiens sur la pluralité des mondes, or Conversations on the Plurality of Worlds (1686), the French polymath Bernard Le Bovier de Fontenelle mused on the possibility that all stars harboured worlds like those of our solar system. 
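Before going further, it is worth seeing how the galaxy-wide count quoted earlier in this passage falls out of simple multiplication. Below is a minimal back-of-envelope sketch in Python; the total stellar population of the Milky Way (taken here as roughly 200 billion) is an assumption of mine rather than a figure from the essay, while the dwarf-star fraction and the one-in-seven occurrence rate are the survey numbers cited above.

```python
# Back-of-envelope check on the 'tens of billions of temperate worlds' figure.
# total_stars is an assumed round number (~200 billion, itself uncertain);
# the other two values are the survey fractions cited in the text.

total_stars = 2.0e11            # assumed Milky Way stellar population
frac_small_stars = 0.75         # share of stars that are M- and K-dwarfs
temperate_per_system = 1 / 7    # temperate, Earth-sized planets per dwarf system

temperate_worlds = total_stars * frac_small_stars * temperate_per_system
print(f"~{temperate_worlds / 1e9:.0f} billion potentially temperate, Earth-sized worlds")
# Prints roughly 21 billion, consistent with 'about 23 billion, give or take'.
```

Nudge the assumed star count or the occurrence rate and the answer moves by factors of a few, which is roughly the spread between the 6, 17 and 23 billion figures quoted above.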
Piece by piece, these ideas made the Universe bigger and bigger, and our place in it smaller and smaller. Voltaire got in on the act, too. In his satirical work Micromégas (1752), giant beings from Sirius and Saturn puzzle over microscopic Earth and its tiny denizens, discovering to their surprise that humans actually have intelligence. But when these superior beings leave humanity a book that purports to explain the meaning of existence, it is found to contain only blank pages. The idea that the Universe is filled with planets drew supporters from a variety of intellectual disciplines. The natural philosopher Christiaan Huygens was an enthusiast, the astronomer William Herschel took to it, and so did the poet Alexander Pope, who wrote, in 1734: ‘He, who through vast immensity can pierce, / See worlds on worlds compose one universe, / Observe how system into system runs, / What other planets circle other suns, / What varied Being peoples every star, / May tell why Heaven has made us as we are.’ There is a remarkable commonality among all these dreams of plurality. All of them share the unquestioned assumption that planets, or planet-like environments, equate to life, and vice versa. In fact, it’s quite striking to see how past thinkers seldom separated the existence of a planet from the existence of life on it. Today, though, astronomers constantly fret over whether or not a world is the right size, the right composition, or in the ‘habitable zone’ — the region around a star that allows for liquid water. We don’t assume that planets are necessarily occupied. Instead, astronomers and astrobiologists like myself spend our days trying to figure out if there are any planets with the right characteristics to harbour life. Whether we’re one in a billion, or the product of a predictable life-generating apparatus, the outcome looks the same to the observer after the fact Yet, for all of this scientific caution, we’re still wishful thinkers. We still want to be able to find life. Like many of my colleagues, I have, on several occasions, succumbed to that alluring view that planets equal life. But the crucial difference between us and past advocates of plurality is that we have information they never had. We actually know that planets are abundant, that they are an atomist’s dream no longer. And yet, the abundance of planets itself actually changes very little about the probability that life exists elsewhere in the Universe. Instead, what it does is fundamentally alter the nature of the question itself, and that in turn leads to an even deeper and rather surprising truth. To understand it, let’s return to my story of the lonesome bulb in a spring garden and bring it into the realm of the cosmos. Imagine, for a moment, an alternative reality. Imagine we discovered that the Sun was the sole planet-harbouring star in the galaxy, or for that matter the Universe. What would this actually tell us about the probability of abiogenesis, the spontaneous generation of life? It’s tempting to think that our existence would be proof positive that, so long as you have a planet, life is extremely probable. If it weren’t, the likelihood that the one planetary system in all existence would carry life would seem fantastical. But this is post-hoc analysis, which can be misleading. The fact that life exists on this imaginary solitary Earth tells us next to nothing about whether life is probable or highly improbable, because if we hadn’t already emerged on this planet then we wouldn’t be able to ask the question in the first place. 
Whether we’re one in a billion, or the product of a predictable life-generating apparatus, the outcome looks the same to the observer after the fact. It would be just like my childhood self getting his hopes up because he’d quickly come across a bulb in one spot of a great and untilled patch of soil. Of course, this is not entirely fair. In the real universe, there is a teeny bit of information in the fact that microbial life appears to have emerged very early on Earth, just a few hundred million years after our planet assembled. This does seem to tell us a little about how life arises in the Universe, but as a single data point it places few meaningful constraints on the range of rates for abiogenesis. Life could still crop up often, or very infrequently — we don’t know which, because in either case we’d be here to ask the question. But what if we found a second planet in this imaginary universe, around another star that could potentially harbour life? Suppose too that we were able to test whether or not there was life on this other world. Whether the answer was positive or negative, we’d be able to significantly improve our knowledge about the frequency of abiogenesis. The more habitable planets we found in our pretend cosmos, the more we could test them, in order to build and refine a mathematical model for the likelihood of life. We live in a universe that allows us to get some measure of our own significance Now let’s bring ourselves back to this universe, and our galaxy, the one full of temperate planets — tens of billions of them. What do all these planets tell us? They don’t tell us how common life is, but they do give us a shot at finding out. If we lived in a cosmos with only a few planets, we could never deduce the true probability of abiogenesis with any precision, even if they all harboured life — as imagined by earlier advocates of plurality. We might never be lucky enough to find one of these worlds within examining range, and all would be lost among the stellar fields of the Milky Way. The existence of billions of planets gives us a chance to write the equation, a chance to pin down the relationship between habitability and actual habitation. And it could have larger implications still. Over the past century, scientists have noticed that many of the root physical characteristics of our universe, from the strength of gravity to the values of fundamental constants that determine atomic structure, all seem precisely aligned with the conditions that appear necessary for life to exist. Change a few of these constants and there’d be no stars like the Sun, there’d be no heavy elements like the carbon that builds our complex molecules, and so on. Of course, it’s pretty obvious that if things weren’t the way they are then we wouldn’t be here to notice it in the first place. In that sense, this apparent ‘fine-tuning’ of the Universe for life is just a selection effect, an unavoidable bias of our existing in the first place. This ‘fine-tuning’ of nature is especially mysterious if you suppose that ours is the only universe, the only version of reality that has ever existed. Why should a one-off universe be tuned for the grubby little molecular pieces of our life? It’s a thorny problem that can lead to all kinds of interesting, and sometimes wild, interpretations. But fundamental physics and cosmology might hold the answer to this mystery. 
Ever since the American philosopher and psychologist William James first coined the term ‘multiverse’ in 1895, there has been increasingly good reason to think that our universe is actually only one of many, part of a multiverse of other regions of space and time separated by distance and time, or dimension. Some theoretical frameworks suggest that there could be upwards of 10 to the power of 10 to the power of 16 distinct universes. Indeed, our investigations of the cosmos suggest that the same quantum physics of the vacuum that spawned our universe, and the awesome energy of inflating space that blows it up into the kind of place we occupy, is also well-suited to generating all of these other universes. We even have a shot at testing this through astrophysical measurements. If the theory holds, our fine-tuned universe will cease to be mysterious; it will simply be a part of a larger multiverse, a part that turned out suitable for life. The special thing about our planet-rich universe is that it’s tuned both for life and for finding out about life. If we were back in our imaginary universe with only one planetary system, we would have no way of learning how frequently life arises. We live in a universe that allows us to get some measure of our own significance. There is nothing in our present understanding of the nature of life or the Universe that says this was absolutely necessary, yet here it is. It’s not clear that we’ll ever be able to solve this puzzle. To start with, we need to build that equation for abiogenesis, we have to dig deeper in the astrophysical dirt to find places in the Universe that might harbour life, to follow nature’s breadcrumbs, as it were. The strategy is straightforward: seek out more worlds that might share some of Earth’s characteristics, and search their surfaces for the chemical signatures of life. It won’t be easy but, unlike 20 years ago, we now know we have a galaxy’s worth of planets to chase, and we know that if we persevere, the equation will eventually come into focus. That this work is even possible has, in a very real sense, already changed our universe. Not because it’s told us anything quantitatively new about life elsewhere, but rather because it’s raised the stakes for evaluating our significance, our cosmic loneliness, to a whole new level. Not only are the fundamental properties of our universe aligned, and tuned to, the needs of life, they also promise success in our quest to discover life’s frequency, origins, and perhaps the very causes of this tuning itself. And it didn’t have to be like this at all. If you turn it over in your mind enough times, you realise that Albert Einstein was right when he said: ‘The most incomprehensible thing about the Universe is that it is comprehensible.’ | Caleb Scharf | https://aeon.co//essays/the-real-meaning-of-the-exoplanet-revolution | |
Biology | Nobody expects atoms and molecules to have purposes, so why do we still think of living things in this way? | One of my favourite dinosaurs is the Stegosaurus, a monster from the late Jurassic (150 million years ago), noteworthy because of the diamond-like plates all the way down its back. Since this animal was discovered in the late 1870s in Wyoming, huge amounts of ink have been spilt trying to puzzle out the reason for the plates. The obvious explanation, that they are used for fighting or defence, simply cannot be true. The connection between the plates and the main body is way too fragile to function effectively in a battle to the death. Another explanation is that, like the stag’s antlers or the peacock’s tail, they play some sort of role in the mating game. Señor Stegosaurus with the best plates gets the harem and the other males have to do without. Unfortunately for this hypothesis, the females had the plates too, so that cannot be the explanation either. My favourite idea is that the plates were like the fins you find in electric-producing cooling towers: they were for heat transfer. In the cool of the morning, as the sun came up, they helped the animal to heat up quickly. In the middle of the day, especially when the vegetation consumed by the Stegosaurus was fermenting away in its belly, the plates would have helped to catch the wind and get rid of excess heat. A superb adaptation. (Sadly for me, no longer a favoured explanation, since latest investigations suggest that the plates may have been a way for individuals to recognise each other as members of the same species). But this essay is not concerned with dinosaurs themselves, rather with the kind of thinking biologists use when they wonder how dinosaur bodies worked. They are asking what was the purpose of the plates? What end did the plates serve? Were they for fighting? Were they for attracting mates? Were they for heat control? This kind of language is ‘teleological’ — from telos, the Greek for ‘end’. It is language about the purpose or goal of things, what Aristotle called their ‘final causes’, and it is something that the physical sciences have decisively rejected. There’s no sense for most scientists that a star is for anything, or that a molecule serves an end. But when we come to talk about living things, it seems very hard to shake off the idea that they have purposes and goals, which are served by the ways they have evolved. As I have written about before in Aeon, the chemist James Lovelock got into very hot water with his fellow scientists when he wanted to talk about the Earth being an organism (the Gaia hypothesis) and its parts having purposes: that sea lagoons were for evaporating unneeded salt out of the ocean, for instance. And as Steven Poole wrote in his essay ‘Your point is?’ in Aeon earlier this year, the contemporary philosopher Thomas Nagel is also in hot water since he suggested in his book Mind and Cosmos (2012) that we need to use teleological understanding to explain the nature of life and its evolution. Some have thought that this lingering teleological language is a sign that biology is not a real science at all, but just a collection of observations and facts. Others argue that the apparent purposefulness of nature leaves room for God. 
Immanuel Kant declared that you cannot do biology without thinking in terms of function, of final causes: ‘There will never be a Newton for a blade of grass,’ he claimed in Critique of Judgment (1790), meaning that living things are simply not determined by the laws of nature in the way that non-living things are, and we need the language of purpose in order to explain the organic world. Why do we still talk about organisms and their features in this way? Is biology basically different from the other sciences because living things do have purposes and ends? Or has biology simply failed to get rid of some old-fashioned, unscientific thinking — thinking that even leaves the door ajar for those who want to sneak God back into science? Biology’s entanglement with teleology reaches right back to the ancient Greek world. In Plato’s dialogue the Phaedo, Socrates describes himself as he sits awaiting his fate, and he asks whether this can be fully explained mechanically ‘because my body is made up of bones and muscles; and the bones… are hard and have joints which divide them, and the muscles are elastic, and they cover the bones’. All of this, says Socrates, is not ‘the true cause’ of why he sits where and how he does. The true cause is that ‘the Athenians have thought fit to condemn me and I have thought it better and more right to remain here and undergo my sentence’. Socrates describes this as a confusion of causes and conditions: he cannot sit without his bones and muscles being as they are, but this is no real explanation of why he sits thus. In the Timaeus Plato develops this further, describing a universe brought into being by a designer (what Plato called the Demiurge). An enquiry into the purpose of the bones and muscles was not only an enquiry into the ways of men, but ultimately an enquiry into the plans of the Demiurge. Now, however, the governing metaphors of nature changed. No longer did scientists think in terms of organisms: they thought in terms of machines Aristotle, Plato’s student, didn’t want God in the business of biology like this. He believed in a God, but not one that cared about the universe and its inhabitants. (Rather like some junior members of my family, this God spent Its time thinking mostly of Its own importance.) However, Aristotle was very interested in final causes, and argued that all living things contain forces that direct them towards their goal. These life forces operate in the here and now, yet in some sense they have the future in mind. They animate the acorn in order that it might turn into an oak, and likewise for other living things. Like Plato, Aristotle used the metaphor of design but unlike Plato he wanted to keep any supervisory, conscious intelligence out of the game. All of this came crashing down during the Scientific Revolution of the 16th and 17th centuries. For both Plato and Aristotle, the question of final causes had applied to physical phenomena — the stars, for example — as much as to biological phenomena. Both thought of objects as being rather like organisms. Why does the stone fall? Because being made of the element earth it wants to find its proper place, namely as close to the centre of the Earth as possible. It falls in order to achieve its right end: it wants to fall. Now, however, the governing metaphors of nature changed. No longer did scientists think in terms of organisms: they thought in terms of machines. The world, the universe, is like a gigantic clock. 
As the 17th-century French philosopher-scientist René Descartes insisted, the human body is nothing but an intricate machine. The heart is like a pump, and the arms and legs are a system of levers and pulleys and so forth. The 17th-century English chemist and philosopher Robert Boyle realised that as soon as you start to think in the mechanical fashion, then talking about ends and purposes really isn’t very helpful. A planet goes round and round the Sun; you want to know the mechanism by which it happens, not to imagine some higher purpose for it. In the same way, when you look at a clock you want to know what makes the hands go round the dial — you want the proximate causes. But surely machines have purposes just as much as organisms do? The clock exists in order to tell the time just as much as the eye exists in order to see. True, but as Boyle also saw, it is one thing to talk about intentions and purposes in a general, perhaps theological way, but another thing to do this as part of science. You can take the Platonic route and talk about God’s creative intentions for the universe, that’s fine. But, really, this is no longer part of science (if it ever was) and has little explanatory power. In the words of EJ Dijksterhuis, one of the great historians of the Scientific Revolution, God now became a ‘retired engineer’. On the other hand, if you wanted to take the Aristotelian approach and explain the growth and development of individual organisms by special vital forces, that was still theoretically possible. But since no one, as Boyle pointed out, seemed to have the slightest clue about these vital forces or what they did, he and his fellow mechanists just wanted to drop the idea altogether and get on with the job of finding proximate causes for all natural phenomena. The organic metaphor did not lead to new predictions and the other sorts of things one wants from science, especially technological promise. The machine metaphor did. Yet even Boyle realised that it is very hard to get rid of final-cause thinking when it comes to studying actual organisms, and not just using them as metaphors in the rest of the physical world. He was particularly interested in bats, and spent some considerable time discussing their adaptations — how their wings were so well-organised for flying and so on. In fact, almost paradoxically, in the 18th century the study of living things became more interested in teleology, even as the physical sciences were turning away from it. ‘Running fast in a herd while being as dumb as shit, I think, is a very good adaptation for survival’ The expansion of historical thinking played a key role here. History no longer seemed static and determined, and the belief that humans could make things better through their own unaided efforts meant that there was no longer a need to appeal to Providence for help. This secular ideal (or ideology) of progress put talk of ends and directional change very much in the air. If we as a society aim for certain ends, let us say an improved standard of living or education, could it be that history itself has ends too — ends that are not dictated so much by the Christian religion (judgment and salvation or condemnation) but that come as part of a general end-directed force or movement? Could life, and human history, be directed upward and forward from within? 
Alongside philosophers and historians such as Hegel, in the 19th century natural historians began to speculate about organisms in proto-evolutionary ways, and to talk of goals — usually, one admits, goals involving the arrival of the best of all possible organisms, namely Homo sapiens. Here is ‘The Temple of Nature’ (1802) by Erasmus Darwin, Charles Darwin’s physician grandfather:
Organic Life beneath the shoreless waves
Was born and nurs’d in Ocean’s pearly caves;
First forms minute, unseen by spheric glass,
Move on the mud, or pierce the watery mass;
These, as successive generations bloom,
New powers acquire, and larger limbs assume;
…
The lordly Lion, monarch of the plain,
The Eagle soaring in the realms of air,
Whose eye undazzled drinks the solar glare,
Imperious man, who rules the bestial crowd,
Of language, reason, and reflection proud,
With brow erect who scorns this earthy sod,
And styles himself the image of his God;
Arose from rudiments of form and sense,
An embryon point, or microscopic ens!
In the writings of some of the early evolutionists, notably the French biologist Jean-Baptiste Lamarck, we get a strong odour of Aristotelian vital forces pushing life up the ladder to the preordained destination of humankind. No longer was teleological language confined to the purpose of individual organisms and organs such as the hand or the acorn, but now it seemed to explain a general direction for the development of life itself. It was in this atmosphere of fascination with the history of life that Charles Darwin developed his theory of natural selection. Darwin’s On the Origin of Species (1859) was the watershed. He nailed the question of individual final causes by explaining why organisms are so well-adapted to their environments. Teleological language was appropriate because such features as eyes and hands were not designed, but design-like. The eye is like a telescope; Stegosaur plates are like the fins you find in cooling towers. So we can ask about purposes. (Of course, questions about the dinosaur could not have been Darwin’s own: when the Origin was published, Stegosaurus still slumbered undiscovered in the rocks of the American West.) Natural selection explained how design-like features could arise, without a designer or a purpose. There need not be any final cause. There is a struggle for existence among organisms, or more precisely a struggle for reproduction. Some will survive and reproduce, and others will not. Because there are variations in populations, with new variations always arriving, on average those surviving will be different from those not surviving, in ways that will have contributed to their greater success. Over time, this adds up to change in the direction of adaptation, of design-like features. No God is needed — even if he exists, he works at ‘arm’s length’ — and neither are any vital forces. Just plain old laws working in a good mechanical fashion. The teleological metaphor was just a metaphor: underneath it lay quite simple mechanical explanations. So this cracked one side of the teleology problem: that of why individual organisms were well adapted to their environments. But what about the other side, the question of whether life itself had some overall direction, some overall sense of progress? What about the process that led to the development of humans? Darwin did believe in some kind of progress of this nature — what the Victorians called ‘monad to man’ — but he wanted nothing at all to do with Germanic, Hegelian kinds of world spirits taking life ever upwards.
That smacked too much of a kind of pseudo-Christian faith, which he did not share. Characteristically, Darwin thrashed about on the matter of whether evolution had a direction. He agonised in his notebooks, and never really came up with a definitive answer. The closest he got was suggesting that improvement comes about naturally because each generation, on average, is going to be better than the previous one. Adaptations improve, and eventually brains appear, and get bigger and bigger. Hence humans. Darwin wrote: ‘If we look at the differentiation and specialisation of the several organs of each being when adult (and this will include the advancement of the brain for intellectual purposes) as the best standard of highness of organisation, natural selection clearly leads towards highness.’ What Darwin never really considered was the fact that brains are very expensive things to maintain, and big brains are not necessarily a one-way ticket to evolutionary success. In the immortal words of the late American palaeontologist Jack Sepkoski: ‘I see intelligence as just one of a variety of adaptations among tetrapods for survival. Running fast in a herd while being as dumb as shit, I think, is a very good adaptation for survival.’ Darwin might have solved the teleological problem in biology once and for all, but his solution was not an immediate success. Most people really could not get their heads around natural selection, and frankly most people were not troubled by the question of whether the evolution of life had an end point. Obviously humans were it, and were bound to appear. All sorts of neo-Platonists were happy to believe a Christian interpretation of Darwin’s view of life: God set evolution going in order that it might ascend to Man. They could have Jesus and evolution too! In the words of Henry Ward Beecher — the charismatic preacher, prolific adulterer, and brother of Harriet Beecher Stowe — ‘Who designed this mighty machine, created matter, gave to it its laws, and impressed upon it that tendency which has brought forth the almost infinite results on the globe, and wrought them into a perfect system? Design by wholesale is grander than design by retail.’ While Christians could interpret evolution in a Platonic frame, as the working out of a Divine creator’s purpose, some biologists revived Aristotle’s idea of vital forces that impelled living things towards their ends. At the turn of the 20th century, the German embryologist Hans Driesch posited such forces, which he called ‘entelechies’ and described as being ‘mind-like’. In France, the philosopher Henri Bergson supposed an ‘élan vital’, a vital spirit that created adaptations and that gave evolution its upwards course. In England, the biologist Julian Huxley — the grandson of Darwin’s great supporter Thomas Henry Huxley and the older brother of the novelist Aldous Huxley — was always drawn to vitalism, seeing in evolution a kind of substitute for Christianity which provided people with a sense of meaning and direction: what he called ‘religion without revelation’. But even he could see that, scientifically, vitalism was a non-starter. The problem was not that no one could see these forces: no one could see electrons either. Rather it was that they didn’t provide any new explanations or predictions. They seemed to do no real work in the physical world, and mainstream biology rejected them as a hangover from an earlier age. So what of now?
Today’s scientists are pretty certain that the problem of teleology at the individual organism level has been licked. Darwin really was right. Natural selection explains the design-like nature of organisms and their characteristics, without any need to talk about final causes. On the other hand, no natural selection lies behind mountains and rivers and whole planets. They are not design-like. That is why teleological talk is inappropriate, and why the Gaia hypothesis is so criticised. And overall that is why biology is just as good a science as physics and chemistry. It is dealing with different kinds of phenomena and so different kinds of explanation are appropriate. There was a Newton of the blade of grass and his name was Charles Darwin. But historical teleology (the question of whether evolution itself takes a direction, in particular a progressive one) is a trickier problem, and I cannot say that there is yet a satisfactory answer, nor any prospect of there ever being one. One popular way to explain the apparent progress in evolution is as a biological arms race (a metaphor coined by Julian Huxley, incidentally). Through natural selection, prey animals get faster and so in tandem do predators. Perhaps, as in military arms races, eventually electronics and computers get ever more important, and the winners are those who do best in this respect. The British evolutionary biologist Richard Dawkins has argued that humans have the biggest on-board computers and that is what we expect natural selection to produce. But it is not obvious that arms races would result in humans — those physically feeble and mentally able omnivorous primates. Nor is it obvious that lines of prey and predator evolve in tandem more generally. I’ll offer no final answers here, but one final question. Could a full-blown teleology, of the more scientific Aristotelian kind, reappear, complete with vital forces? There’s no logical reason to say this is impossible, and that is why I think it is legitimate for Nagel to raise the possibility. Two hundred years ago, people would have laughed at the idea of quantum mechanics, with all its violations of common-sense thinking. But there is a big difference: quantum mechanics was invented because it filled a big explanatory gap. Quantum mechanics is weird, but it works. There is nothing in the idea of final causes to encourage such wishful thinking. This is Nagel’s big mistake: his argument for returning to the idea of purposes and goals in biology is not based on an extensive engagement with the science, but on a philosophical skim across the surface. So what’s a Stegosaur for? We can ask what adaptive function the plates on its back served, as good Darwinian scientists. But the beast itself? It’s not for anything, it just is — in all its decorative, mysterious, plant-munching glory. | Michael Ruse | https://aeon.co//essays/what-s-a-stegosaur-for-why-life-is-design-like | |
Art | One emotion inspired our greatest achievements in science, art and religion. We can manipulate it – but why do we have it? | When I was growing up in New York City, a high point of my calendar was the annual arrival of the Ringling Brothers and Barnum & Bailey Circus — ‘the greatest show on earth’. My parents endured the green-haired clowns, sequinned acrobats and festooned elephants as a kind of garish pageantry. For me, though, it was a spectacular interruption of humdrum reality – a world of wonder, in that trite but telling phrase. Wonder is sometimes said to be a childish emotion, one that we grow out of. But that is surely wrong. As adults, we might experience it when gaping at grand vistas. I was dumbstruck when I first saw a sunset over the Serengeti. We also experience wonder when we discover extraordinary facts. I was enthralled to learn that, when arranged in a line, the neurons in a human brain would stretch the 700 miles from London to Berlin. But why? What purpose could this wide-eyed, slack-jawed feeling serve? It’s difficult to determine the biological function of any affect, but whatever it evolved for (and I’ll come to that), wonder might be humanity’s most important emotion. First, let’s be clear what we’re talking about. My favourite definition of wonder comes from the 18th-century Scottish moral philosopher Adam Smith, better known for first articulating the tenets of capitalism. He wrote that wonder arises ‘when something quite new and singular is presented… [and] memory cannot, from all its stores, cast up any image that nearly resembles this strange appearance’. Smith associated this quality of experience with a distinctive bodily feeling — ‘that staring, and sometimes that rolling of the eyes, that suspension of the breath, and that swelling of the heart’. These bodily symptoms point to three dimensions that might in fact be essential components of wonder. The first is sensory: wondrous things engage our senses — we stare and widen our eyes. The second is cognitive: such things are perplexing because we cannot rely on past experience to comprehend them. This leads to a suspension of breath, akin to the freezing response that kicks in when we are startled: we gasp and say ‘Wow!’ Finally, wonder has a dimension that can be described as spiritual: we look upwards in veneration; hence Smith’s invocation of the swelling heart. English contains many words related to this multifarious emotion. At the mild end of the spectrum, we talk about things being marvellous. More intense episodes might be described as stunning or astonishing. At the extreme, we find experiences of awe and the sublime. These terms seem to refer to the same affect at different levels of intensity, just as anger progresses from mild irritation to violent fury, and sadness ranges from wistfulness to abject despair. Smith’s analysis appears in his History of Astronomy (1795). In that underappreciated work, he proposed that wonder is crucial for science. Astronomers, for instance, are moved by it to investigate the night sky. He might have picked up this idea from the French philosopher René Descartes, who in his Discourse on the Method (1637) described wonder as the emotion that motivates scientists to investigate rainbows and other strange phenomena. In a similar spirit, Socrates said that philosophy begins in wonder: that wonder is what leads us to try to understand our world. In our own time, Richard Dawkins has portrayed wonder as a wellspring from which scientific inquiry begins. 
Animals simply act, seeking satiation, safety and sex. Humans reflect, seeking comprehension. For a less flattering view, we turn to the 17th-century English philosopher Francis Bacon, the father of the scientific method. He called wonder ‘broken knowledge’ — a mystified incomprehension that science alone could cure. But this mischaracterises science and wonder alike. Scientists are spurred on by wonder, and they also produce wondrous theories. The paradoxes of quantum theory, the efficiency of the genome: these are spectacular. Knowledge does not abolish wonder; indeed, scientific discoveries are often more wondrous than the mysteries they unravel. Without science, we are stuck with the drab world of appearances. With it, we discover endless depths, more astounding than we could have imagined. In this respect, science shares much with religion. Gods and monsters are wondrous things, recruited to explain life’s unknowns. Also, like science, religion has a striking capacity to make us feel simultaneously insignificant and elevated. Dacher Keltner, professor of psychology at the University of California, Berkeley, has found that awe, an intense form of wonder, makes people feel physically smaller than they are. It is no accident that places of worship often exaggerate these feelings. Temples have grand, looming columns, dazzling stained glass windows, vaulting ceilings, and intricately decorated surfaces. Rituals use song, dance, smell, and elaborate costumes to engage our senses in ways that are bewildering, overwhelming, and transcendent. Wonder, then, unites science and religion, two of the greatest human institutions. Let’s bring in a third. Religion is the first context in which we find art. The Venus of Willendorf appears to be an idol, and animals on the walls of the Chauvet, Altamira and Lascaux caves are thought to have been used in shamanic rites, with participants travelling to imaginative netherworlds in trance-like states under the hypnotic flicker of torchlight. Up through the Renaissance, art primarily appeared in churches. When in the Middle Ages Giotto broke free from the constraints of Gothic painting, he did not produce secular art but a deeply spiritual vision, rendering divine personages more accessible by showing them in fleshy verisimilitude. His Scrovegni Chapel in Padua is like a jewel-box, exploding with figures who breathe, battle, weep, writhe, and rise from the dead to meet their God beneath an ethereal cobalt canopy. It is, in short, a wonder. When art officially parted company from religion in the 18th century, some links remained. Artists began to be described as ‘creative’ individuals, whereas the power of creation had formerly been reserved for God alone. With the rise of the signature, artists could obtain cultlike status. A signature showed that this was no longer the product of an anonymous craftsman, and drew attention to the occult powers of the maker, who converted humble oils and pigments into objects of captivating beauty, and brought imaginary worlds to life. The cult of the signature is a recent phenomenon and yet, by promoting reverence for artists, it preserves an old link between beauty and sanctity. Art museums are a recent invention, too. During the Middle Ages, artworks appeared almost exclusively in religious contexts. After that, they began cropping up in private collections, called cabinets of curiosity (Wunderkammern, in German).
These collections intermingled paintings and sculptures with other items deemed marvellous or miraculous: animal specimens, fossils, shells, feathers, exotic weapons, decorative books. Art was continuous with science — a human practice whose products could be compared to oddities found in the natural world. This spirit dominated into the 19th century. The early acquisitions of the British Museum included everything from animal bones to Italian paintings. In a compendious book called The World of Wonders: A Record of Things Wonderful in Nature, Science, and Art (1883) we find entries on electric eels, luminous plants, volcanic eruptions, comets, salt mines, the Dead Sea, and dinosaur bones, casually interspersed with entries on Venetian glass, New Zealand wood carvings, and the tomb of Mausolus. The founder of the circus that I used to attend was the showman and charlatan P T Barnum, who took over the American Museum in New York in 1841. There he displayed portraits of famous personages, wax statues, and a scale model of Niagara Falls, at the same time introducing enthralled crowds to the ‘Siamese’ twins Chang and Eng Bunker, and a little person dubbed General Tom Thumb. The museum was advertised on luminous posters proclaiming ‘the greatest show on Earth’ — the same show that he would eventually take on the road with his travelling circus. Today, the link between circuses and museums might be hard to fathom, but at the time the connection would have seemed quite natural. As temples of wonder, museums were showcases for oddities: a fine portrait, a waxwork tableau and a biological aberration all had their place. By the end of the century, however, science and art had parted company. Major cities began opening dedicated art museums, places where people could come to view paintings without the distraction of butterfly wings, bearded ladies and deformed animal foetuses in jars. Nowadays, we don’t think of museums as houses of curiosity, but they remain places of wonder. They are shrines for art, where we go to be amazed. Atheist that I am, it took some time for me to realise that I am a spiritual person. I regularly go to museums to stand in mute reverence before the artworks that I admire. Recently, I have been conducting psychological studies with Angelika Seidel, my collaborator at the City University of New York (CUNY), to explore this kind of emotional spell. We told test subjects to imagine that the Mona Lisa was destroyed in a fire, but that there happened to be a perfect copy that even experts couldn’t tell from the original. If they could see just one or the other, would they rather see the ashes of the original Mona Lisa or a perfect duplicate? Eighty per cent of our respondents chose the ashes: apparently we disvalue copies and attribute almost magical significance to originals. In another study, we hung reproductions of paintings on a wall and told test subjects either that they were works by famous artists or that they were forgeries. The very same paintings appeared physically larger when attributed to famous artists. We also found that pictures look better and more wondrous when they are placed high on a wall: when we have to look up at an artwork, it impresses us more. In the mid-18th century, the philosopher Edmund Burke hypothesised a connection between aesthetics and fear. In a similar vein, the poet Rainer Maria Rilke proclaimed: ‘beauty is nothing but the beginning of terror’. 
To put this association to the test, I, together with Kendall Eskine and Natalie Kacinik, psychologists at CUNY, recently conducted another experiment. First, we scared a subset of our respondents by showing them a startling film in which a zombie jumps out on a seemingly peaceful country road. Then we asked all of our subjects to evaluate some abstract, geometric paintings by El Lissitzky. Those subjects who had been startled found the paintings more stirring, inspiring, interesting, and moving. This link between art and fear relates to the spiritual dimension of wonder. Just as people report fear of God, great art can be overwhelming. It stops us in our tracks and demands worshipful attention. Bringing these threads together, we can see that science, religion and art are unified in wonder. Each engages our senses, elicits curiosity and instils reverence. Without wonder, it is hard to believe that we would engage in these distinctively human pursuits. Robert Fuller, professor of religious studies at Bradley University in Illinois, contends that it is ‘one of the principal human experiences that lead to belief in an unseen order’. In science, that invisible order might include microorganisms and the invisible laws of nature. In religion, we find supernatural powers and divine agents. Artists invent new ways of seeing that give us a fresh perspective on the world we inhabit. Art, science and religion appear to be uniquely human institutions. This suggests that wonder has a bearing on human uniqueness as such, which in turn raises questions about its origins. Did wonder evolve? Are we the only creatures who experience it? Descartes claimed that it was innate in human beings; in fact, he called it our most fundamental emotion. The pioneering environmentalist Rachel Carson also posited an inborn sense of wonder, one especially prevalent in children. An alternative possibility is that wonder is a natural by-product of more basic capacities, such as sensory attention, curiosity and respect, the last of which is crucial in social status hierarchies. Extraordinary things trigger all three of these responses at once, evoking the state we call wonder. Other animals can experience it, too. The primatologist Jane Goodall was observing her chimpanzees in Gombe when she noticed a male chimp gesturing excitedly at a beautiful waterfall. He perched on a nearby rock and gaped at the flowing torrents of water for a good 10 minutes. Goodall and her team saw such responses on several occasions. She concluded that chimps have a sense of wonder, even speculating about a nascent form of spirituality in our simian cousins. This leaves us with a puzzle. If wonder is found in all human beings and higher primates, why do science, art and religion appear to be recent developments in the history of our species? Anatomically modern humans have been around for 200,000 years, yet the earliest evidence for religious rituals appears about 70,000 years ago, in the Kalahari Desert, and the oldest cave paintings (at El Castillo in Spain) are only 40,000 years old. Science as we know it is much younger than that — perhaps only a few hundred years old. It is also noteworthy that these endeavours are not essential for survival, which means they probably aren’t direct products of natural selection. Art, science and religion are all forms of excess; they transcend the practical ends of daily life. Perhaps evolution never selected for wonder itself. 
And if wonder is shared beyond our own species, why don’t we find apes carpooling to church each Sunday? The answer is that the emotion alone is not sufficient. It imbues us with the sense of the extraordinary, but it takes considerable intellectual prowess and creativity to cope with extraordinary things by devising origin myths, conducting experiments and crafting artistic representations. Apes rarely innovate; their wonder is a dead-end street. So it was for our ancestors. For most of our history, humans travelled in small groups in constant search for subsistence, which left little opportunity to devise theories or create artworks. As we gained more control over our environment, resources increased, leading to larger group sizes, more permanent dwellings, leisure time, and a division of labour. Only then could wonder bear its fruit. Art, science and religion reflect the cultural maturation of our species. Children at the circus are content to ogle at a spectacle. Adults might tire of it, craving wonders that are more profound, fertile, illuminating. For the mature mind, wondrous experience can be used to inspire a painting, a myth or a scientific hypothesis. These things take patience, and an audience equally eager to move beyond the initial state of bewilderment. The late arrival of the most human institutions suggests that our species took some time to reach this stage. We needed to master our environment enough to exceed the basic necessities of survival before we could make use of wonder. If this story is right, wonder did not evolve for any purpose. It is, rather, a by-product of natural inclinations, and its great human derivatives are not inevitable. But wonder is the accidental impetus behind our greatest achievements. Art, science and religion are inventions for feeding the appetite that wonder excites in us. They also become sources of wonder in their own right, generating epicycles of boundless creativity and enduring inquiry. Each of these institutions allows us to transcend our animality by transporting us to hidden worlds. In harvesting the fruits of wonder, we came into our own as a species. | Jesse Prinz | https://aeon.co//essays/why-wonder-is-the-most-human-of-all-emotions | |
Demography and migration | ‘You need to fill that in, love,’ they told me at the jobcentre. Would my communist work experience count in the UK? | I came to the UK from Bulgaria in 1990, and every job I saw advertised wanted experience. The application forms had box after box for me to list my previous jobs. But all I had ever done was go to school. I tried leaving those pages empty. The forms were handed back. ‘You need to fill that in, love,’ they told me at the jobcentre. But I’m 19. I have nothing to fill it in with yet. ‘Go to a careers adviser. They’ll sort you out.’ So, I went, although I did not need careers advice. I knew what kind of career I wanted and they didn’t advertise for it at the jobcentre. Journalism was in my blood. Back home, my father was in the process of founding a new daily broadsheet. My mother worked for the radio, for Bulgaria’s world service. I could speak three languages and had left an English degree course to follow my heart to the UK. Now my husband and I lived in Devon, in a cold, rented house a mile from the nearest bus stop. We had no money, but I wouldn’t dream of claiming benefits. My career had simply been stalled by circumstance. As soon as I had my circumstances in order, everything would be back on track. In the meantime, I needed a job. Yet it seemed impossible — I couldn’t get a job until I’d had a job. How did people ever break that cycle? How did they find something to put into those blank boxes? All I had done so far was go to school and get married. ‘Did you not have a Saturday job or something?’ asked the careers woman kindly. ‘Just think, anything at all, just so you can put it on the application form.’ I looked at her and saw myself reflected in her horn-rimmed glasses. My hair too long, my jeans too marbled, my accent unfamiliar. The Berlin Wall had come down the year before. It would be a few more years before Britain looked east for plumbers and nannies, and English ears became accustomed to our steely consonants. ‘Anything?’ I asked in confusion. ‘Yes, anything at all — did you babysit for a neighbour, or sell lemonade from a stall in your garden, or help at a charity event?’ I looked at her through my own horn-rimmed glasses. I looked at her dangly earrings, her chunky necklace. Back home in Bulgaria, no one ever used babysitters. That was a society of total employment: everyone worked, not just dads but mums, too. So we were all latchkey kids, without the stigma. But then, the retirement age was low — no older than 55 for women — which meant there were always plenty of ‘grannies’ about. Random old ladies, sitting on benches outside your block of flats, making sure that your skirt wasn’t too short and that you didn’t chew gum while you spoke. Who needed a babysitter with them around? As for selling lemonade, I wanted to, honest to God. There was something I needed money for, I forget what now, but I desired it with a crazed intensity. I even wrote up a sign, but it wasn’t for lemonade, it was for ayryan, a watered-down yogurt drink. When my parents saw my advertisement, they looked at each other and took a deep breath. They carefully praised my initiative and then folded the sign away. This was socialist Bulgaria. We might get into trouble. Private enterprise was not compatible with ideological reality. As for charity events: there were no charity events. 
The whole country was a charity case, its entire economy built on a warped charity model — that you give what you can and take what you need — with the crucial difference that people were coerced into it. For all their big foreheads, Marx and Lenin never gave much thought to the base human desire to feel better than others, to have shinier things. Even when you make it impossible for people to compete openly, they still want to compete. They scurry and scamper and do what they can to make better versions of themselves. And if they glimpse someone less fortunate, they don’t feel pity — they gloat. In this atmosphere, charity didn’t stand a chance. It would be many years before this peculiar Western concept met with anything other than incredulity in Bulgaria. ‘But what about at school?’ the nice careers lady persevered. ‘Did you not do any work experience?’ At last, a light bulb went ping. I could list my praktika, my work experience! When she’d said ‘work’, I’d immediately thought ‘paid’. I was in the West, after all, where money was king, and I had yet to earn any myself. But if it was experience they were after — well, I had plenty. It was a snug, cosy, book-lined den and, at 15 years old, I was thrilled to wear a regulation blue coat over my own clothes. Books were very precious; those who handled them had to be properly attired. I was like a lab-coated assistant in the laboratory of the intellect. I’d stand guard at the counter. The door would open, the bell would ring, and I would spring to attention. The customer would point at the shelf behind me, and I would bring the book down gently and then watch like a hawk as its jacket was examined, front and back, and its weight measured with a bounce of the hand. Then a few pages would turn. Some books arrived half-baked, with folios still uncut, their pages tenting together. I would get a knife and slice them apart, like a surgeon. Once it was agreed that this was indeed a book, a good book, that it felt right in the hand, and — look! — its pages came apart obligingly, payment would be offered. I’d put the money in the till. I’d lay the book flat on the stack of brown paper on the counter. And then, the best bit (we’d even had tutorials on how to do this): I would wrap it snugly in the paper, tucking in the corners and sealing the package with tape from a dispenser. All this had given me enormous pleasure. When the week’s praktika was over, I begged the bookshop lady to let me come again. ‘It could be our secret!’ I pleaded. ‘You could go for a coffee break. Just let me look after the shop.’ She smiled sadly and shook her head. It was not allowed. I was too young to work. I could only do work experience. That was in the spring, when the lime trees were in blossom. By the time the grass in the city parks was turning yellow, I had been placed in a factory near the centre of town. My classmates and I wore white coats and large protective goggles, which made my face sweat and my eyes squint. The summer sun spilt in through the giant plate-glass windows, dazzling us. I cannot tell you now what we did. I remember small plastic objects, discs, rectangles and squares, and wires, and pearly bulbs that were pleasant to touch. We had to do something with these disparate parts, but what? Surely they didn’t leave a bunch of teenagers to assemble electrical goods? Whatever we did, we didn’t do it for long. Perhaps we did it so badly that they sent us home early. We weren’t there to work anyway; not really. The experience of work was enough. 
And for me, it was: I’d seen the blank faces of the people in the factory, I’d felt their ennui. If this was work, I wanted nothing more to do with it. By autumn, my class was off again, this time to a canning plant near the Danube where we would process the grapes and the tomatoes of the season’s harvest, preserving them for our nation to consume throughout the winter. This was more like it — real work, noble work. We had an important job to do, and we’d get sweaty and dirty doing it. After four weeks’ hard labour, far from home and holed up in grotty communal accommodation, I’d made enough money to buy a bottle of flowery perfume. Pocket money, certainly not the rate for the job. The fact that I spent it on something so frivolous says it all: this was just pretend. This was not work, the real thing, the career I had imagined for myself in the future, that would occupy my mind and satisfy my soul and feed my family. And here I was now, in England, my career derailed, my circumstances temporary. If experiences would be enough to get a job, then I had them. I told the kind careers lady about the bookshop, and about the electronics factory, and about the canning plant. Her eyes shone. ‘See, you have worked!’ she said, clapping her hands under her chin. ‘You really have worked.’ Then her head tilted to one side. ‘You poor thing,’ she muttered. She looked through me, I suppose, to posters of beefy women in overalls waving hammers and sickles. And I looked at her and imagined other teenagers in my seat. Local teenagers, with droopy hair and rock bands on their T-shirts, who might think that washing the family car or getting the neighbour’s shopping in was a job, a job for which they must get paid. I imagined them seeing the blanks on an application form and filling them in, confidently, their domestic experiences standing in for the real thing. And I imagined the recruiters, the managers and that whole as-yet-unmet boss-class of people reading those application forms and nodding at all the blank boxes neatly filled in, happy to buy the fiction of the pretend employment. The old joke from the communist days goes: ‘We pretend to work, and they pretend to pay us.’ Here it was different: you pretend you’ve worked, and they pretend to believe you. Truly, this was the land of opportunity. If I could pretend that my praktika had been real work, why not go one step further: why not pretend I had done something worth pretending? Instead of factories and shops, why not galleries and museums, universities and hospitals? People in the UK were still too timid to peek behind the Iron Curtain. I could have fun filling in the blanks. | Elena Seymenliyska | https://aeon.co//essays/how-i-learned-to-work-in-communist-bulgaria | |
Biology | As the American people got fatter, so did marmosets, vervet monkeys and mice. The problem may be bigger than any of us | Years ago, after a plane trip spent reading Fyodor Dostoyevsky’s Notes from the Underground and Weight Watchers magazine, Woody Allen melded the two experiences into a single essay. ‘I am fat,’ it began. ‘I am disgustingly fat. I am the fattest human I know. I have nothing but excess poundage all over my body. My fingers are fat. My wrists are fat. My eyes are fat. (Can you imagine fat eyes?).’ It was 1968, when most of the world’s people were more or less ‘height-weight proportional’ and millions of the rest were starving. Weight Watchers was a new organisation for an exotic new problem. The notion that being fat could spur Russian-novel anguish was good for a laugh. That, as we used to say during my Californian adolescence, was then. Now, 1968’s joke has become 2013’s truism. For the first time in human history, overweight people outnumber the underfed, and obesity is widespread in wealthy and poor nations alike. The diseases that obesity makes more likely — diabetes, heart ailments, strokes, kidney failure — are rising fast across the world, and the World Health Organisation predicts that they will be the leading causes of death in all countries, even the poorest, within a couple of years. What’s more, the long-term illnesses of the overweight are far more expensive to treat than the infections and accidents for which modern health systems were designed. Obesity threatens individuals with long twilight years of sickness, and health-care systems with bankruptcy. And so the authorities tell us, ever more loudly, that we are fat — disgustingly, world-threateningly fat. We must take ourselves in hand and address our weakness. After all, it’s obvious who is to blame for this frightening global blanket of lipids: it’s us, choosing over and over again, billions of times a day, to eat too much and exercise too little. What else could it be? If you’re overweight, it must be because you are not saying no to sweets and fast food and fried potatoes. It’s because you take elevators and cars and golf carts where your forebears nobly strained their thighs and calves. How could you do this to yourself, and to society? Moral panic about the depravity of the heavy has seeped into many aspects of life, confusing even the erudite. Earlier this month, for example, the American evolutionary psychologist Geoffrey Miller expressed the zeitgeist in this tweet: ‘Dear obese PhD applicants: if you don’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation. #truth.’ Businesses are moving to profit on the supposed weaknesses of their customers. Meanwhile, governments no longer presume that their citizens know what they are doing when they take up a menu or a shopping cart. Yesterday’s fringe notions are becoming today’s rules for living — such as New York City’s recent attempt to ban large-size cups for sugary soft drinks, or Denmark’s short-lived tax surcharge on foods that contain more than 2.3 per cent saturated fat, or Samoa Air’s 2013 ticket policy, in which a passenger’s fare is based on his weight because: ‘You are the master of your air ‘fair’, you decide how much (or how little) your ticket will cost.’ Several governments now sponsor jauntily named pro-exercise programmes such as Let’s Move! (US), Change4Life (UK) and actionsanté (Switzerland). Less chummy approaches are spreading, too. 
Since 2008, Japanese law has required companies to measure and report the waist circumference of all employees between the ages of 40 and 74 so that, among other things, anyone over the recommended girth can receive an email of admonition and advice. Hand-in-glove with the authorities that promote self-scrutiny are the businesses that sell it, in the form of weight-loss foods, medicines, services, surgeries and new technologies. A Hong Kong company named Hapilabs offers an electronic fork that tracks how many bites you take per minute in order to prevent hasty eating: shovel food in too fast and it vibrates to alert you. A report by the consulting firm McKinsey & Co predicted in May 2012 that ‘health and wellness’ would soon become a trillion-dollar global industry. ‘Obesity is expensive in terms of health-care costs,’ it said before adding, with a consultantly chuckle, ‘dealing with it is also a big, fat market.’ And so we appear to have a public consensus that excess body weight (defined as a Body Mass Index of 25 or above) and obesity (BMI of 30 or above) are consequences of individual choice. It is undoubtedly true that societies are spending vast amounts of time and money on this idea. It is also true that the masters of the universe in business and government seem attracted to it, perhaps because stern self-discipline is how many of them attained their status. What we don’t know is whether the theory is actually correct. Of course, that’s not the impression you will get from the admonishments of public-health agencies and wellness businesses. They are quick to assure us that ‘science says’ obesity is caused by individual choices about food and exercise. As the Mayor of New York, Michael Bloomberg, recently put it, defending his proposed ban on large cups for sugary drinks: ‘If you want to lose weight, don’t eat. This is not medicine, it’s thermodynamics. If you take in more than you use, you store it.’ (Got that? It’s not complicated medicine, it’s simple physics, the most sciencey science of all.) Yet the scientists who study the biochemistry of fat and the epidemiologists who track weight trends are not nearly as unanimous as Bloomberg makes out. In fact, many researchers believe that personal gluttony and laziness cannot be the entire explanation for humanity’s global weight gain. Which means, of course, that they think at least some of the official focus on personal conduct is a waste of time and money. As Richard L Atkinson, Emeritus Professor of Medicine and Nutritional Sciences at the University of Wisconsin and editor of the International Journal of Obesity, put it in 2005: ‘The previous belief of many lay people and health professionals that obesity is simply the result of a lack of willpower and an inability to discipline eating habits is no longer defensible.’ Consider, for example, this troublesome fact, reported in 2010 by the biostatistician David B Allison and his co-authors at the University of Alabama in Birmingham: over the past 20 years or more, as the American people were getting fatter, so were America’s marmosets. As were laboratory macaques, chimpanzees, vervet monkeys and mice, as well as domestic dogs, domestic cats, and domestic and feral rats from both rural and urban areas. In fact, the researchers examined records on those eight species and found that average weight for every one had increased. The marmosets gained an average of nine per cent per decade.
Lab mice gained about 11 per cent per decade. Chimps, for some reason, are doing especially badly: their average body weight has risen 35 per cent per decade. Allison, who had been hearing about an unexplained rise in the average weight of lab animals, was nonetheless surprised by the consistency across so many species. ‘Virtually in every population of animals we looked at, that met our criteria, there was the same upward trend,’ he told me. It isn’t hard to imagine that people who are eating more themselves are giving more to their spoiled pets, or leaving sweeter, fattier garbage for street cats and rodents. But such results don’t explain why the weight gain is also occurring in species that human beings don’t pamper, such as animals in labs, whose diets are strictly controlled. In fact, lab animals’ lives are so precisely watched and measured that the researchers can rule out accidental human influence: records show those creatures gained weight over decades without any significant change in their diet or activities. Obviously, if animals are getting heavier along with us, it can’t just be that they’re eating more Snickers bars and driving to work most days. On the contrary, the trend suggests some widely shared cause, beyond the control of individuals, which is contributing to obesity across many species. Such a global hidden factor (or factors) might help to explain why most people gain weight gradually, over decades, in seeming contradiction of Bloomberg’s thermodynamics. This slow increase in fat stores would suggest that they are eating only a tiny bit more each month than they use in fuel. But if that were so, as Jonathan C K Wells, professor of child nutrition at University College London, has pointed out, it would be easy to lose weight. One recent model estimated that eating a mere 30 calories a day more than you use is enough to lead to serious weight gain. Given what each person consumes in a day (1,500 to 2,000 calories in poorer nations; 2,500 to 4,000 in wealthy ones), 30 calories is a trivial amount: by my calculations, that’s just two or three peanut M&Ms. If eliminating that little from the daily diet were enough to prevent weight gain, then people should have no trouble losing a few pounds. Instead, as we know, they find it extremely hard. Many other aspects of the worldwide weight gain are also difficult to square with the ‘it’s-just-thermodynamics’ model. In rich nations, obesity is more prevalent in people with less money, education and status. Even in some poor countries, according to a survey published last year in the International Journal of Obesity, increases in weight over time have been concentrated among the least well-off. And the extra weight is unevenly distributed among the sexes, too. In a study published in the Social Science and Medicine journal last year, Wells and his co-authors found that, in a sample that spanned 68 nations, for every two obese men there were three obese women. Moreover, the researchers found that higher levels of female obesity correlated with higher levels of gender inequality in each nation. Why, if body weight is a matter of individual decisions about what to eat, should it be affected by differences in wealth or by relations between the sexes? To make sense of all this, the purely thermodynamic model must appeal to complicated indirect effects.
The story might go like this: being poor is stressful, and stress makes you eat, and the cheapest food available is the stuff with a lot of ‘empty calories’, therefore poorer people are fatter than the better-off. These wheels-within-wheels are required because the mantra of the thermodynamic model is that ‘a calorie is a calorie is a calorie’: who you are and what you eat are irrelevant to whether you will add fat to your frame. The badness of a ‘bad’ food such as a Cheeto is that it makes calorie intake easier than it would be with broccoli or an apple. Yet a number of researchers have come to believe, as Wells himself wrote earlier this year in the European Journal of Clinical Nutrition, that ‘all calories are not equal’. The problem with diets that are heavy in meat, fat or sugar is not solely that they pack a lot of calories into food; it is that they alter the biochemistry of fat storage and fat expenditure, tilting the body’s system in favour of fat storage. Wells notes, for example, that sugar, trans-fats and alcohol have all been linked to changes in ‘insulin signalling’, which affects how the body processes carbohydrates. This might sound like a merely technical distinction. In fact, it’s a paradigm shift: if the problem isn’t the number of calories but rather biochemical influences on the body’s fat-making and fat-storage processes, then sheer quantity of food or drink are not the all-controlling determinants of weight gain. If candy’s chemistry tilts you toward fat, then the fact that you eat it at all may be as important as the amount of it you consume. More importantly, ‘things that alter the body’s fat metabolism’ is a much wider category than food. Sleeplessness and stress, for instance, have been linked to disturbances in the effects of leptin, the hormone that tells the brain that the body has had enough to eat. What other factors might be at work? Viruses, bacteria and industrial chemicals have all entered the sights of obesity research. So have such aspects of modern life as electric light, heat and air conditioning. All of these have been proposed, with some evidence, as direct causes of weight gain: the line of reasoning is not that stress causes you to eat more, but rather that it causes you to gain weight by directly altering the activities of your cells. If some or all of these factors are indeed contributing to the worldwide fattening trend, then the thermodynamic model is wrong. We are, of course, surrounded by industrial chemicals. According to Frederick vom Saal, professor of biological sciences at the University of Missouri, an organic compound called bisphenol-A (or BPA) that is used in many household plastics has the property of altering fat regulation in lab animals. And a recent study by Leonardo Trasande and colleagues at the New York University School of Medicine with a sample size of 2,838 American children and teens found that, for the majority, those with the highest levels of BPA in their urine were five times more likely to be obese than were those with the lowest levels. BPA has been used so widely — in everything from children’s sippy cups to the aluminium in fizzy drink cans — that almost all residents of developed nations have traces of it in their pee. This is not to say that BPA is unique. In any developed or developing nation there are many compounds in the food chain that seem, at the very least, to be worth studying as possible ‘obesogens’ helping to tip the body’s metabolism towards obesity. 
For example, a study by the Environmental Working Group of the umbilical cords of 10 babies born in US hospitals in 2004 found 287 different industrial chemicals in their blood. Beatrice Golomb, professor of medicine at the University of California, San Diego, has proposed a long list of candidates — all chemicals that, she has written, disrupt the normal process of energy storage and use in cells. Her suspects include heavy metals in the food supply, chemicals in sunscreens, cleaning products, detergents, cosmetics and the fire retardants that infuse bedclothes and pyjamas. Chemicals and metals might promote obesity in the short term by altering the way that energy is made and stored within cells, or by changing the signals in the fat-storage process so that the body makes more fat cells, or larger fat cells. They could also affect the hormones that spur or tamp down the appetite. In other words, chemicals ingested on Tuesday might promote more fat retention on Wednesday. It’s also possible that chemical disrupters could affect people’s body chemistry on longer timescales — starting, for instance, before their birth. Contrary to its popular image of serene imperturbability, a developing foetus is in fact acutely sensitive to the environment into which it will be born, and a key source of information about that environment is the nutrition it gets via the umbilical cord. As David J P Barker, professor of clinical epidemiology at the University of Southampton, noted some 20 years ago, where mothers have gone hungry, their offspring are at a greater risk of obesity. The prenatal environment, Barker argued, tunes the children’s metabolism for a life of scarcity, preparing them to store fat whenever they can, to get them through periods of want. If those spells of scarcity never materialise, the child’s proneness to fat storage ceases to be an advantage. The 40,000 babies gestated during Holland’s ‘Hunger Winter’ of 1944-1945 grew up to have more obesity, more diabetes and more heart trouble than their compatriots who developed without the influence of war-induced starvation. Just to double down on the complexity of the question, a number of researchers also think that industrial compounds might be affecting these signals. For example, Bruce Blumberg, professor of developmental and cell biology at the University of California, Irvine, has found that pregnant mice exposed to organotins (tin-based chemical compounds that are used in a wide variety of industries) will have heavier offspring than mice in the same lab who were not so exposed. In other words, the chemicals might be changing the signal that the developing foetus uses to set its metabolism. More disturbingly, there is evidence that this ‘foetal programming’ could last more than one generation. A good predictor of your birth weight, for instance, is your mother’s weight at her birth. Lurking behind these prime suspects, there are the fugitive possibilities — what David Allison and another band of co-authors recently called the ‘roads less travelled’ of obesity research. For example, consider the increased control civilisation gives people over the temperature of their surroundings. There is a ‘thermoneutral zone’ in which a human body can maintain its normal internal temperature without expending energy.
Outside this zone, when it’s hot enough to make you sweat or cold enough to make you shiver, the body has to expend energy to maintain homeostasis. Temperatures above and below the neutral zone have been shown to cause both humans and animals to burn fat, and hotter conditions also have an indirect effect: they make people eat less. A restaurant whose air conditioning breaks down on a warm day will see a sharp decline in sales (yes, someone did a study). Perhaps we are getting fatter in part because our heaters and air conditioners are keeping us in the thermoneutral zone. And what about light? A study by Laura Fonken and colleagues at the Ohio State University in Columbus, published in 2010 in the Proceedings of the National Academy of Sciences, reported that mice exposed to extra light (experiencing either no dark at all or a sort of semidarkness instead of total night) put on nearly 50 per cent more weight than mice fed the same diet who lived on a normal night-day cycle of alternating light and dark. This effect might be due to the constant light robbing the rodents of their natural cues about when to eat. Wild mice eat at night, but night-deprived mice might have been eating during the day, at the ‘wrong’ time physiologically. It’s possible that widespread electrification is promoting obesity by making humans eat at night, when our ancestors were asleep. There is also the possibility that obesity could quite literally be contagious. A virus called Ad-36, known for causing eye and respiratory infections in people, also has the curious property of causing weight gain in chickens, rats, mice and monkeys. Of course, it would be unethical to test for this effect on humans, but it is now known that antibodies to the virus are found in a much higher percentage of obese people than in people of normal weight. A research review by Tomohide Yamada and colleagues at the University of Tokyo in Japan, published last year in the journal PLoS One, found that people who had been infected with Ad-36 had significantly higher BMI than those who hadn’t. As with viruses, so with bacteria. Experiments by Lee Kaplan and colleagues at Massachusetts General Hospital in Boston earlier this year found that bacteria from mice that have lost weight will, when placed in other mice, apparently cause those mice to lose weight, too. And a study in humans by Ruchi Mathur and colleagues at the Cedars-Sinai Medical Center in Los Angeles, published in the Journal of Clinical Endocrinology and Metabolism earlier this year, found that those who were overweight were more likely than others to have elevated populations of a gut microorganism called Methanobrevibacter smithii. The researchers speculated that these organisms might in fact be especially good at digesting food, yielding up more nutrients and thus contributing to weight gain. The researcher who first posited a viral connection in 1992 — he had noticed that chickens in India that had died of an adenovirus infection were plump instead of gaunt — was Nikhil Dhurandhar, now a professor at the Pennington Biomedical Research Centre in Louisiana. He has proposed a catchy term for the spread of excess weight via bugs and viruses: ‘infectobesity’. No one has claimed, or should claim, that any of these ‘roads less taken’ is the one true cause of obesity, to drive out the false idol of individual choice. Neither should we imagine that the existence of alternative theories means that governments can stop trying to forestall a major public-health menace.
These theories are important for a different reason. Their very existence — the fact that they are plausible, with some supporting evidence and suggestions for further research — gives the lie to the notion that obesity is a closed question, on which science has pronounced its final word. It might be that every one of the ‘roads less travelled’ contributes to global obesity; it might be that some do in some places and not in others. The openness of the issue makes it clear that obesity isn’t a simple school physics experiment. We are increasingly understanding that attributing obesity to personal responsibility is very simplistic This is the theme of perhaps the most epic of the alternative theories of obesity, put forward by Jonathan C K Wells. As I understand his view, obesity is like poverty, or financial booms and busts, or war — a large-scale development that no one deliberately intends, but which emerges out of the millions of separate acts that together make human history. His model suggests that the best Russian novelist to invoke when thinking about obesity isn’t Dostoyevsky, with his self-punishing anguish, but Leo Tolstoy, with his vast perspective on the forces of history. In Wells’s theory, the claim that individual choice drives worldwide weight gain is an illusion — like the illusion that individuals can captain their fates independent of history. In reality, Tolstoy wrote at the end of War and Peace (1869), we are moved by social forces we do not perceive, just as the Earth moves through space, driven by physical forces we do not feel. Such is the tenor of Wells’s explanation for modern obesity. Its root cause, he proposed last year in the American Journal of Human Biology, is nothing less than the history of capitalism. I will paraphrase Wells’s intricate argument (the only one I’ve ever read that references both receptor pathways for leptin and data on the size of the Indian economy in the 18th century). It is a saga spanning many generations. Let’s start with a poor farmer growing food crops in a poor country in Africa or Asia. In a capitalistic quest for new markets and cheap materials and labour, Europeans take control of the economy in the late 18th or early 19th century. With taxes, fees and sometimes violent repression, their new system strongly ‘encourages’ the farmer and his neighbours to stop growing their own food and start cultivating some more marketable commodity instead – coffee for export, perhaps. Now that they aren’t growing food, the farmers must buy it. But since everyone is out to maximise profit, those who purchase the coffee crop strive to pay as little as possible, and so the farmers go hungry. Years later, when the farmer’s children go to work in factories, they confront the same logic: they too are paid as little as possible for their labour. By changing the farming system, capitalism first removes traditional protections against starvation, and then pushes many previously self-sufficient people into an economic niche where they aren’t paid enough to eat well. Eighty years later, the farmer’s descendants have risen out of the ranks of the poor and joined the fast-growing ranks of the world’s 21st-century middle-class consumers, thanks to globalisation and outsourcing. Capitalism welcomes them: these descendants are now prime targets to live the obesogenic life (the chemicals, the stress, the air conditioning, the elevators-instead-of-stairs) and to buy the kinds of foods and beverages that are ‘metabolic disturbers’. 
But that’s not the worst of it. As I’ve mentioned, the human body’s response to its nutrition can last a lifetime, and even be passed on to the next generation. If you or your parents – or their parents – were undernourished, you’re more likely to become obese in a food-rich environment. Moreover, obese people, when they have children, pass on changes in metabolism that can predispose the next generation to obesity as well. Like the children of underfed people, the children of the overfed have their metabolism set in ways that tend to promote obesity. This means that a past of undernutrition, combined with a present of overnutrition, is an obesity trap. Wells memorably calls this double-bind the ‘metabolic ghetto’, and you can’t escape it just by turning poor people into middle-class consumers: that turn to prosperity is precisely what triggers the trap. ‘Obesity,’ he writes, ‘like undernutrition, is thus fundamentally a state of malnutrition, in each case promoted by powerful profit-led manipulations of the global supply and quality of food.’ The trap is deeper than that, however. The ‘unifying logic of capitalism’, Wells continues, requires that food companies seek immediate profit and long-term success, and their optimal strategy for that involves encouraging people to choose foods that are most profitable to produce and sell — ‘both at the behavioural level, through advertising, price manipulations and restriction of choice, and at the physiological level through the enhancement of addictive properties of foods’ (by which he means those sugars and fats that make ‘metabolic disturber’ foods so habit-forming). In short, Wells told me via email, ‘We need to understand that we have not yet grasped how to address this situation, but we are increasingly understanding that attributing obesity to personal responsibility is very simplistic.’ Rather than harping on personal responsibility so much, Wells believes, we should be looking at the global economic system, seeking to reform it so that it promotes access to nutritious food for everyone. That is, admittedly, a tall order. But the argument is worth considering, if only as a bracing critique of our individual-responsibility ideology of fatness. What are we onlookers — non-activists, non-scientists — to make of these scientific debates? One possible response, of course, is to decide that no obesity policy is possible, because ‘science is undecided’. But this is a moron’s answer: science is never completely decided; it is always in a state of change and self-questioning, and it offers no final answers. There is never a moment in science when all doubts are gone and all questions settled, which is why ‘wait for settled science’ is an argument advanced by industries that want no interference with their status quo. Making policy, as the British politician Wayland Young once said, is ‘the art of taking good decisions on insufficient evidence’. Faced with signs of a massive public-health crisis in the making, governments are right to seek to do something, using the best information that science can render, in the full knowledge that science will have different information to offer in 10 or 20 years. The issue, rather, is whether the government policies and corporate business plans are in fact doing their best with the evidence they already have. Does the science justify assuming that obesity is a simple matter of individuals letting themselves eat too much? 
To the extent that it is, policies such as Japan’s mandatory waist-measuring and products like the Hapifork will be effective. If, on the other hand, there is more to obesity than simple thermodynamics, some of the billions spent on individual-centred policies and products may be being wasted. Time, in that case, to try some alternative policies based on alternative theories, and see how they fare. Today’s priests of obesity prevention proclaim with confidence and authority that they have the answer. So did Bruno Bettelheim in the 1950s, when he blamed autism on mothers with cold personalities. So, for that matter, did the clerics of 18th-century Lisbon, who blamed earthquakes on people’s sinful ways. History is not kind to authorities whose mistaken dogmas cause unnecessary suffering and pointless effort, while ignoring the real causes of trouble. And the history of the obesity era has yet to be written. | David Berreby | https://aeon.co//essays/blaming-individuals-for-obesity-may-be-altogether-wrong | |
Biology | Dolphins are smart, sociable predators. They don’t belong in captivity and they shouldn’t be used to ‘cure’ the ill | Imagine this. Jay, an eight-year-old autistic boy, whose behaviour has always been agitated and uncooperative, is smiling and splashing in the pool. A pair of bottlenose dolphins float next to him, supporting him in the water. Jay’s parents stand poolside as a staff member in the water engages him in visual games with colourful shapes. She asks him some questions, and Jay, captivated by his surroundings, begins to respond. He names the shapes, correctly, speaking his first words in months. With all this attention Jay is in high spirits; he appears more aware and alert than ever before. A quick, non-invasive EEG scan of his brain activity shows that it is indeed different from before the session. Jay’s parents, who had given up hope, are elated to have finally found a treatment that works for their son. They sign up for more sessions and cannot wait to get home and tell their friends about the experience. They are not surprised to find that dolphins have succeeded where mainstream physicians have not. Everyone believes that dolphins are special — altruistic, extra gentle with children, good-natured. And any concerns the parents might have had about the welfare of the dolphins have been allayed by assurances from the trainers that they are happy and accustomed to the role they are playing. After all, as the parents can see for themselves, the dolphins are smiling. ‘Jay’ is a composite character drawn from the dozens of testimonials that appear on dolphin-assisted therapy (DAT) websites, but stories like his, stories about the extraordinary powers of dolphins, have been told since ancient times. Much of our attraction to these creatures derives from their appealing combination of intelligence and communicativeness, and the mystery associated with the fact that they inhabit a hidden underwater environment. Dolphins are the Other we’ve always wanted to commune with. And their ‘smile’, which is not a smile at all, but an anatomical illusion arising from the physical configuration of their jaws, has led to the impression that dolphins are always jovial and contented, compounding mythological beliefs that they hold the key to the secret of happiness. The mythic belief in dolphins as healers has been reiterated down the ages from the first written records of encounters with these animals. In Greco-Roman times, dolphins were closely linked with the gods. Delphinus was a favourite messenger of Poseidon, who repaid him for his loyalty by placing an image of a dolphin in the stars. The Greek poet Oppian of Cilicia declared around 200 CE that ‘Diviner than the Dolphin is nothing yet created.’ Aristotle was the first to recognise that dolphins are mammals. Indeed, the root of the word dolphin, delphus, means womb, and underscores the long-standing belief in an intimate (even chimeric) connection between dolphins and humans. In ancient Rome and Mesopotamia, dolphins adorned frescoes, artwork, jewellery and coins, and in ancient Greece the killing of a dolphin was punishable by death. The Minoan palace of Knossos on Crete, dated to 1900-1300 BC, contains one of the earliest and best-known ornamentations depicting dolphins in a fresco on the wall of the queen’s bathroom. In Greek mythology, Taras, son of Poseidon, was said to have been rescued from a shipwreck by a dolphin sent by his father, hence the image of the boy on a dolphin depicted on historical coinage. 
The perception of dolphins as lifesavers is connected with beliefs that they possess magical powers that can be used for healing. The ancient Celts attributed special healing powers to dolphins, as did the Norse. Throughout time, people as far apart as Brazil and Fiji have traded in dolphin and whale body parts for medicinal and totemic purposes. Despite being saddled with these dubious supernatural attributes, there actually are several well-substantiated modern reports of dolphins coming to the aid of humans. In 2007, for example, a pod of bottlenose dolphins saved the surfer Todd Endris, who had been mauled by a great white shark off Monterey, by forming a protective ring around him, which allowed him to get to shore. But these instances are related to dolphins’ ability to generalise their natural anti-predator behaviours to another species, not to anything supernatural. The intelligence and sophistication of dolphins is not just mythological, of course. Decades of scientific research has confirmed that they possess large and highly elaborate brains, prodigious cognitive capacities, demonstrable self-awareness, complex societies, even cultural traditions. In 2001 my colleague Diana Reiss and I provided the first definitive evidence for mirror self-recognition in two bottlenose dolphins at the New York Aquarium. Published in the Proceedings of the National Academy of Sciences, this study demonstrated, along with many others since, that dolphins have a level of self-awareness not unlike our own. Yet in the face of this evidence for their very real brainpower, dolphins have been imbued with religious and supernatural qualities and remade into the ultimate New Age icon. Margaret Howe spent 10 weeks living with a dolphin named Peter in a tank rigged up to contain just enough water for the dolphin to swim in and for Howe to wade in The person most responsible for fuelling modern, New Age notions of dolphins as morally superior spiritual healers is the late neuroscientist John C Lilly, who pioneered research with captive dolphins in the 1960s. Lilly’s early work on dolphin brains and behaviour, conducted in laboratories in the US Virgin Islands and in Miami, was groundbreaking, bringing to light important knowledge about the species’ large, complex brains and keen intelligence. Lilly also provided evidence for dolphin sophistication in the realm of communication, reporting that dolphins could mimic the rhythm of human speech patterns. In a paper published in Science in 1961, Lilly reported in detail on the range of ‘vocal’ exchanges between two dolphins in adjacent tanks, each equipped with a transmitter and receiver — Lilly’s dolphin ‘telephone’ — and noted how their ‘conversation’ followed polite rules; for example, when one ‘spoke’, the other was quiet. Lilly drew up a dolphin lexicon showing that dolphins used a variety of communication methods, from blowing and whistling to clicking. Convinced that dolphins had a sophisticated language of their own, he suggested that the species might provide the key to unlocking humanity’s potential to commune with extraterrestrials. He became part of the initial SETI (Search for Extraterrestrial Intelligence) group of radio-astronomy pioneers, who were so impressed with his tales of dolphin intelligence that they voted to call themselves ‘The Order of the Dolphin’. However, Lilly and his followers eventually began mixing their own quasi-spiritual beliefs with their scientific work. 
They also began engaging in scientifically and ethically questionable research, including giving captive dolphins doses of LSD. In one ethically dubious experiment dating from 1965, Lilly’s research assistant Margaret Howe spent 10 weeks living with a dolphin named Peter in a tank rigged up to contain just enough water for the dolphin to swim in and for Howe to wade in. Within weeks, it became clear that Peter was less interested in Howe as a room mate than as a conjugal mate, and to stave off his increasingly aggressive behaviour, Lilly encouraged Howe to relieve the dolphin’s erections. Lilly’s claims about the superior nature of dolphin spiritual and moral qualities soared well beyond any legitimate data. ‘We can presume that they have ethics, morals and regard for one another much more highly developed than does the human species,’ he wrote in The Dyadic Cyclone (1976). On the back of this conviction, he attempted to set up a formal but overly expansive programme of interspecies communication and co-operation between humans and dolphins called the Cetacean Nation, which was, needless to say, never fully realised. Despite (or perhaps because of) his controversial activities, Lilly became a counter-cultural guru and was very influential in promoting the use of dolphins in captive research. His informal studies of dolphins interacting with autistic children led him to make outrageous claims about the psychic powers of dolphins, which have since become the basis for many pseudoscientific claims made by DAT facilities. Dolphins and whales were first captured for public display by the circus mogul P T Barnum, who kept wild-caught beluga whales in an aquarium at his museum in New York City in the 1840s and ’50s. Then, as now, dolphins did not survive well in captivity, yet the popularity of dolphin displays, in which trainers engaged in increasingly daring aquatic gymnastics, grew dramatically, especially in the 1960s and ’70s. A key influence here was the US television series Flipper, dubbed an ‘aquatic Lassie’ and originally broadcast in 1964. It featured a bottlenose dolphin who lived in a cove and helped his human pals — two boys, named Sandy and Bud — to save people from mortal danger. But if Flipper was a boon to captive displays and increased public demand for dolphins, it also sparked concerns over their welfare. Marine parks responded swiftly by rebranding themselves as centres of education, research and conservation, rather than just entertainment. And the shows continued. The public’s enthrallment with dolphins and whales drives enthusiasm for aquariums and theme parks to this day. In the US alone, more than 50 million people visit captive facilities every year. Dolphin and whale shows have become increasingly extravagant, involving many different species, acrobatic interactions between trainers and animals and set designs to rival a Broadway show. Swimming with dolphins (SWD) programmes have emerged as a critical, and lucrative, component of the dolphin entertainment industry. Although some commercial operations offer opportunities to swim with wild dolphins, most SWD customers swim with captive dolphins in the convenience of concrete tanks. These SWD programmes emerged in the 1980s, and while there were just four SWD programmes in the US in 1990, now as many as 18 facilities offer dolphin ‘encounter’ programmes of one kind or another. 
Many people describe their in-water encounter with a dolphin as one of the most exhilarating and transformative experiences they’ve ever had — even the highlight of their life. Others report feeling a sense of euphoria and intimate kinship with the dolphins, little doubting that this feeling is shared by the dolphins. In many ways, it was only a matter of time before the concept of dolphin-assisted therapy emerged as an enhanced version of SWD programmes, underpinned, once again, by healing theories derived from dolphin mythology, and by theme parks marketing themselves as places of science and education. DAT took off in earnest when Lilly’s early explorations became better known through the efforts of the educational anthropologist Betsy Smith, then at Florida International University. In 1971, Smith let her mentally disabled brother wade into the water with two adolescent dolphins. She noted that the dolphins treated him tenderly: she believed that they knew her brother was disabled and were attempting to soothe him. Soon after, Smith established therapy programmes at two facilities in Florida, and offered them free of charge for many years. But she later concluded that DAT programmes were ineffective and exploitative of both the dolphins and the human patients, and in 2003 she publicly denounced them, calling them ‘cynical and deceptive’. DAT typically involves several sessions either swimming or interacting with captive dolphins, often alongside more conventional therapeutic tasks, such as puzzle-solving or motor exercises. The standard price of DAT sessions, whose practitioners are not required by law to receive any special training or certification, is exorbitant, reaching into the thousands of dollars. It has become a highly lucrative international business, with facilities in Mexico, Israel, Russia, Japan, China and the Bahamas, as well as the US. DAT practitioners claim to be particularly successful in treating depression and motor disorders, as well as childhood autism. But DAT is sometimes less scrupulously advertised as being effective with a range of other disorders, from cancer to infections, to developmental delays. Thousands of families visit DAT facilities and end up gaining nothing that they could not have gained from interacting with a puppy While not always promising a cure, DAT facilities clearly market themselves as offering real therapy as opposed to recreation. Under minimal standards, authentic therapy must have some relationship to a specific condition and result in measurable remedial effects. By contrast, DAT proponents cite evidence that is, more accurately, anecdotal, offering a range of explanations for its purported efficacy, from increased concentration to brainwave changes, to the positive physiological effects of echolocation (high-frequency dolphin sonar) on the human body. Parents of autistic children and others who appear to benefit from DAT believe that these explanations are scientifically plausible. The photos of smiling children and the emotional testimonials from once-desperate parents are hard to resist. Even those sceptical of DAT’s scientific validity often just shrug and say: ‘What’s the harm?’ In the worst-case scenario a child who typically knows little enjoyment and accomplishment in life can find joy, a little bit of self-efficacy and connection with others for what is sometimes the first time in his life. But amid all the self-justification, the question most often left out is: what about the dolphins? 
DAT facilities will often post testimonials from enthusiastic parents on their websites, some of which are recorded just minutes after the session ended, when parents are feeling most hopeful. These websites attract other parents who are desperate to find cures for their own children. They come away impressed with the ‘evidence’ that DAT can improve their children’s lives, and the apparently scientific approach of the staff. It all looks so promising, and so they figure it’s worth the plane fares, the time off work, and the high price tag. Meanwhile, many of the parents featured in the enthusiastic testimonials return home to renewed disappointment. Their children fall back into their regular routine, and fall silent again. At first, cognitive dissonance will not allow these parents to consider the possibility that they’ve wasted their money. But later they recognise that nothing has changed, and that the initial improvement was due to the excitement of the trip, and all the personal attention their child received. Many families visit DAT facilities and end up gaining little more than they would have done from interacting with a puppy. Equally sad are the lives of the dolphins. Hidden behind their smile, and therefore largely invisible to patients and vacationers, captive dolphins spend their lives under tremendous stress as they struggle to adapt to an environment that, physically, socially and psychologically, is drastically different from the wild. The results are devastating. Stress leads to immune system dysfunction. Often they die from gastric ulcers, infections and other stress and immune-related diseases, not helped by their sometimes being given laxatives and antidepressants that are delivered in their food. The worst of it, perhaps, is that there is absolutely no evidence for DAT’s therapeutic effectiveness. At best, there might be short-term gains attributable to the feel-good effects of being in a novel environment and the placebo boost of having positive expectations. Nothing more. Any apparent improvement in children with autism, people with depression, and others is as much an illusion as the ‘smile’ of the dolphin. While there exist numerous published studies purporting to demonstrate positive results from DAT, none so far has controlled for feel-good and placebo effects. Most don’t even include a minimal control group, which would provide some measure of whether even general short-term feel-good effects are due to the dolphin or to other salient factors, such as being in the water, being given conventional tasks, getting increased attention from others, and so forth. Because none of these components of the DAT situation are disentangled, there remains no credibility to the claim that DAT offers effective therapy. DAT clients are often among the most vulnerable members of society, so the industry takes advantage of them. The pseudoscientific patina and untested testimonials serve to reel in desperate parents and people suffering with severe anxiety or depression who will do anything to get some relief. They are persuaded by words such as ‘treatment’ and ‘therapy’ and by the misuse of scientific methods, such as EEG to measure brainwave patterns, which suggest scientific legitimacy. The consequences are potentially dire. Despite the mythology, dolphins can be aggressive. Even Lilly acknowledged that their teeth were sharp enough to snap a 6ft barracuda clean in two. 
A number of participants in SWD and DAT programmes have been seriously harmed by these large, wild predators, sustaining injuries ranging from a ruptured spleen to broken ribs and near-drownings. In one example from 2012 at an Isla Mujeres resort, off Cancún, one of the dolphins in a SWD programme bit a woman who was on honeymoon. ‘I felt the dolphin had my whole thigh in his mouth and then I realised I had been bitten, and it was very painful,’ Sabina Cadbrand told reporters when she got home to Sweden. Two other people were bitten in the same incident, including a middle-aged woman whose wound went right down to the bone. Though it might not chime with New Age dolphin lore, the reality is that dolphins, even those born in captivity, are wild. Parents who would never place their child in a cage with a lion or an elephant seem to think nothing of placing them at very real risk (of both injury and disease) in a tank with a dolphin. Only last year, an eight-year-old girl had her hand bitten at Sea World, Orlando, while feeding a dolphin. The public is largely unaware of the consequences, because aggressive or dying dolphins and whales are often quietly replaced by others taken from the wild or transferred from another facility. Though the original star orca whale Shamu spent just six years in captivity in SeaWorld San Diego, dying in 1971, the name ‘Shamu’ has been used for different orcas in shows ever since, leading to the perception that the original Shamu is alive and well and enjoying longevity in captivity. I’ve conducted research with many captive dolphins over the years, most of which died prematurely. Presley and Tab, the two young dolphins that starred in my mirror self-recognition study, were later transferred to new facilities and perished shortly afterwards. Their deaths were especially hard for me to rationalise, because my own study had shown them to be self-aware creatures. They convinced me, several years ago, never to return to captive studies, and to channel the bulk of my energies into campaigning for dolphin protection and freedom. I understand that desperate people will continue to visit DAT facilities for help with their own illnesses. Sadly, they may never realise that the dolphins they seek help from are likely to be as psychologically and physically traumatised as they are. | Lori Marino | https://aeon.co//essays/dolphin-therapy-doesn-t-work-for-the-child-or-the-animal | |
Cosmopolitanism | Contrary to popular belief, migration from Muslim countries is one reason why Europe is becoming more secular, not less | The European relationship between religion, law and politics is a strange creature. Religious influence over political life is weaker in Europe than in almost any other part of the world. To adapt the phrase first used by Alastair Campbell when he was spokesman for the British prime minister Tony Blair, politicians in Europe generally ‘don’t do God’. The EU’s Eurobarometer surveys of public opinion suggest that religion has a very limited impact on the political values and behaviour of European voters. Europe has no equivalent to the politically powerful religious right in America, nor to the theological debates in the political arena that one sees in many Islamic countries. Recently, however, this long-standing distance between religion and politics has been threatened. Migration is one factor that has helped religion to return to centre stage in public life. While Muslim minorities have protested over questions of blasphemy and free speech, Catholic leaders have intervened in political debates about gay marriage and abortion, and conservatives have lamented that European societies are losing touch with their Christian past. The political scientist Eric Kaufmann has argued that religious believers have a demographic advantage in birth rates that will see Europe’s secularisation reversed by the end of this century. Religious justifications for terrorism might be the most visible and dramatic threat to liberal states from increased religiosity, but the separation of religion and politics has recently been challenged in multiple ways and in many countries, not just in Europe. Both the US and Canada have experienced controversies over the attempted use of religious law in family arbitration, while Islamic leaders in Australia have provoked intense debate after giving sermons denouncing gender equality. However, the renewed visibility of religion in public affairs provokes particularly intense challenges in Europe since it undermines well-established, but often tacit, conventions on the limits to religious influence on public life. Secularism in Europe has been in part influenced by the original recognition in Christian theology of separate secular and religious realms (the Bible’s injunction to ‘render unto Caesar’). But the distinctive European ‘settlement’ on religion stems from the religious wars of the 16th and 17th centuries. The suffering caused by these conflicts across western and northern Europe brought a strong desire for political norms and structures that could end the misery and instability caused by religious contestation for political power. The Peace of Westphalia — a series of treaties concluded in 1648 — established the principle that sovereign states would respect each other’s boundaries and differing state religions. This acceptance of the permanence and legitimacy of religious diversity between (if not within) European states combined with the work of thinkers such as Grotius, Hobbes, Locke and Hume to provide Europe with ways of thinking and speaking about politics that were separate from religion. Religious bodies in Europe have more limited political influence than in most of the rest of the world, but this has generally been a cultural norm rather than a legal or constitutional principle. In the modern postwar period there has been an expectation that religions will keep their distance from politics. 
Of course, the churches and other religious institutions have not stayed out of European politics entirely: in France, last month’s bill on gay marriage was vigorously opposed by the Catholic Church. Even on this issue, religious influence is notably weak in Europe, where legal recognition of gay marriage is more widespread than any other part of the world. To assume that there is a simple separation between religion and state here in Europe, or that religion has no political power, would, however, be to misunderstand European history. The weak political influence of religion in Europe has been accompanied by considerable cultural ties and legal links between particular churches and individual European states, and these are reflected in many residual echoes of religious influence and privilege in public life. The populations of most European states have a clear majority of one particular denomination of Christianity. This means that, until recently, to be of a particular nationality usually meant to belong to a particular religion: to be Spanish was to be Catholic; to be Swedish, Lutheran; to be Greek, Greek Orthodox; and so on. The overlap between religious and national identity meant that the symbols and other elements of a country’s predominant religion played a significant part in public life, and in many cases still do. In this sense, European secular states are very different from the principle of separating church and state in the US, a nation built on religious pluralism, even if mostly under a Christian rubric. Several European countries recognise official state religions (including the Anglican Church in England), while the constitutions of others invoke Christianity. Even where there is no official state religion, the influence of the dominant form of Christianity is visible in many areas of public life. Church taxes are levied by the state on behalf of religious denominations in many countries, the government funds a range of religious schools and hospitals and, in most European states, the working calendar remains structured around Christian festivals. Many European countries have the cross as part of their national flag and religious festivals such as St Patrick’s Day double as national festivals. Indeed, not a single European state has institutional arrangements that would satisfy the requirements of the US Constitution, which prohibits symbolic or financial endorsement of religion by the state. In a more diverse Europe, the Christian flavour of public institutions is becoming increasingly controversial The less-than-totally-secular nature of Europe’s church-state arrangements goes beyond mere symbols. Concrete legal privileges are retained by religions, most notably in the area of free speech where a range of countries retain laws restricting antireligious speech, either by blasphemy laws or laws restricting insult or ridicule of religion. It is this residual Christian identity in public life that has become so contested by the pluralism of postwar European society. Migration has pushed religion back to the centre of public debate, but has also placed pressure on the remaining legal and symbolic privileges held by Christianity in European states, pressure that may well have the effect of banning religion from legal and political life altogether. In the past, religion in Europe has played a role somewhat like that of the modern British monarchy. On paper, the British monarch is both a national symbol and the holder of key political and legal powers. 
However, the powers theoretically held by the monarch — such as the right to nominate a prime minister and refuse to sign legislation — are subject to shared understandings that they will not be used in normal circumstances. Imagine if there were a substantial minority population in the UK who believed that the monarch ought to exercise significant political power — perhaps a substantial immigrant population who arrived with a pre-existing commitment to monarchical government. This would create pressure to remove those symbolic, largely unused powers. This is exactly what is happening to the residual influence and presence of religion in the European political and legal sphere. As populations of European states have become more religiously diverse, the ability of a particular faith to act as part of a shared national identity has diminished. In part this is because there are many ethnic communities who do not share Christian cultural loyalties, but it is also because numbers of self-declared atheists and agnostics are rising rapidly at the same time. The UK census of 2011 showed a surge in the percentage of people who said they had no religion from 15 per cent to 25 per cent. Previously, many of those who are not particularly religious were content to describe themselves as Christian on cultural grounds: in Europe, numbers of such nominal Christians have long exceeded those who profess belief in the core tenets of the Christian faith. But as religion and national identity have gradually begun to separate, religious identity becomes more a question of ideology and belief than membership of a national community. This has encouraged those who are not true believers to move from a nominal Christian identity to a more clearly non-religious one. Once nationality is no longer synonymous with a particular religious denomination, the symbols of that religion lose their ability to act as uncontroversial national cultural symbols. Where they might once have been shared, now such symbols become highly contested. For example, in Ireland in 2007, Ravinder Singh Oberoi, a Sikh, challenged the uniform rules of the Garda Reserve, the volunteer force within the Irish police, to allow him to wear a turban; in San Marino in 1999, three incoming MPs went to the European Court over the traditional oath they were required to make, on the basis that its reference to ‘the holy Gospels’ violated the rights of non-Christian deputies; and in Italy in 2011, Soile Lautsi, an atheist mother, took her children’s school to the European Court of Human Rights for displaying a crucifix in the classrooms. In the UK, the National Secular Society has taken legal action to challenge the practice of saying prayers before local council meetings and in state schools. Yet these challenges to religious symbolism in public have not all been successful. Last month, the High Court in Ireland refused to allow Mr Oberoi to wear a turban while on duty with the Garda Reserve, on the basis that the police force must be religiously neutral (even as the badge of that same force is based on imagery of Celtic Christian monastic art, which is seen as an important part of Irish national cultural heritage). Mrs Lautsi’s initial victory in Strasbourg was reversed on appeal on the basis that the ‘passive symbol’ of the cross on a classroom wall was not sufficiently indoctrinating to trigger the intervention of the European Court. 
And a victory in court for the National Secular Society in its challenge to council prayers was followed by political defeat as the UK government legislated to reverse that decision. Nevertheless, the proliferation of challenges to these residually Christian symbols in public life shows how, in a more diverse Europe, the Christian flavour of public institutions is becoming increasingly controversial. At the same time, the assertive expression of religious values, dress and other symbols by non-Christian communities has been equally influential. Olivier Roy, professor at the European University Institute in Italy and a well-known scholar of European Islam, has noted how suspicion and fear has been created in Europe by ‘the emergence of new communities of believers who do not feel bound by the compromises laboriously developed over the past centuries between the religious and the secular’. These fears are driving a process that formalises and restricts the role of religion and its privileges in public life. In some countries, the visible nature of the religious symbols of Muslim communities has provoked national governments to restrict the wearing of all religious symbols in public, including Christian and Jewish ones. In 2004, France banned all ‘ostentatious’ religious symbols in state schools; the same year, the state of Berlin proposed a ban on all religious symbols in state offices; while in 2008, Denmark moved to ban religious symbols from courtrooms. These have been followed by more wide-ranging prohibitions on facial veils in both France and Belgium. While these measures have in large part been motivated by a desire to restrict the wearing of Muslim symbols, their effect, in many cases, is to remove all religious symbols, thus intensifying the secularisation of public spaces, and pushing religion further into the private sphere. In the political arena, cultural norms that made it simply ‘bad form’ to bring religion into politics are also being replaced with more black-and-white legal rules. For centuries, the UK was content to have an anti-blasphemy law on the books just as long as it was understood that it was not to be invoked to unduly restrict speech on religious matters. The Satanic Verses affair of 1989 and its echoes in the Danish cartoons controversy of 2005 showed that some citizens of Europe did not share this tacit consensus and had altogether different ideas about what ought to constitute legally actionable blasphemy or unacceptable criticism of religion. Yet the result has been the opposite of what religious protesters might have hoped: in the UK, the legal response was not to broaden the scope of blasphemy but, in 2008, to abolish the law altogether. Similarly, in Ireland, a revision of the offence of blasphemy in 2009 inserted a clause specifying that no crime would be committed where the defendant could prove ‘genuine literary, artistic, political, scientific or academic value’. Likewise, states including France, Austria, the Netherlands, Germany and the UK have introduced integration tests that require prospective citizens to indicate that they are aware of, or in some cases actively accept, the separation of religion and politics, as well as principles such as gender equality and gay rights. 
In France in 2010, the authorities rejected the citizenship application of a Muslim man who refused to allow his wife to speak or leave the house without his permission, and in 2008 the French courts upheld their earlier ruling to refuse citizenship to a Muslim woman whose ‘radical practice of her religion’ included wearing the face veil on the grounds that it was ‘incompatible with the values of the Republic’ such as gender equality. Many of those supporting such tests have done so out of xenophobia and bigotry against Muslim migrants — the National Front in France has discovered a love for secularism that it did not have before secularism became a stick with which to beat immigrant populations. But others have supported these developments out of a genuine commitment to liberalism, feminism and the separation of religion from politics that has served Europe well in the past. There will certainly be costs to this process. Some will feel a sense of loss for the connection to history that ancient Christian symbols and rituals can provide. The flexibility of the old, informal social contract could be missed by religious individuals and institutions chafing at rules that make the strict separation of religion, law and politics explicit. However, it is difficult to see what European states can do apart from formalise the separation between religion and the state. Historically, European secularism emerged as a means to manage the danger of conflict that religious diversity brings — at the time this was conflict between varieties of Christianity. As the range of religious and non-religious identities in Europe continues to expand, intensified secularisation of the public sphere is the likely, and desirable, result. To take the opposite tack, and invite religion more fully into legal and political life, would be risky. As the German philosopher Jürgen Habermas argues, failing to restrict religious influence over politics risks a degeneration into religious contests for political power. The intellectual historian Mark Lilla, professor of humanities at Columbia University, argues that separation of religion and politics is a product of a chance combination of historically specific factors, and anything but inevitable: it cannot be taken for granted. It has encouraged the development of an ideal of shared citizenship in religiously diverse populations and has been crucial to the advance of liberal principles such as gender equality and gay rights. The clarification of limits on the role of religion in law and politics, if fairly applied, could help to alleviate the sense of double standards and unfairness that many migrants and their naturalised descendants feel. European secularism will be harder to portray merely as disguised Christian privilege, or free speech as an excuse to undermine Islam, once it is clear that established Christian faiths are not exempt from these norms. Either way, what we see is a general process under which greater religious diversity is making it difficult for religion in Europe to retain the residual political and symbolic roles that it has had until now. These roles relied on religion being seen as a national cultural symbol, and on implicit understandings that churches would largely steer clear of politics and would not use their legally privileged status to restrict criticism or mockery of religion to too great a degree. Such a system is proving unsustainable. 
There are now too many diverse cultural expectations about religion, its role in political life, and the degree to which it can be criticised or mocked. The more muscular religiosity of some migrant communities, among other factors, is provoking European governments to restrict religion firmly to the private sphere, and to render the public sphere a strictly secular one. Perhaps, as Giuseppe di Lampedusa wrote in his novel The Leopard (1958), ‘everything must change so that everything can remain the same.’ | Ronan McCrea | https://aeon.co//essays/is-migration-making-europe-more-secular | |
Architecture | Gardens expand our thinking. At times of crisis they console, school us in emotional generosity, and show us that life goes on | The doctor had a calm, sing-song voice. Every word was measured and mild. The only jarring note was sartorial: his navy blue striped shirt and gold cufflinks clashed with the Miami Vice pastels of my wife’s quarantined hospital room. As if he were casually pulling down options from a computer menu, the specialist told Ruth that she might die in the next few months. ‘If your liver does not repair, you might get hepatic encephalopathy. The toxins normally filtered by your liver might damage your brain. This might cause drowsiness or apathy. It’s likely you’d then fall into a coma.’ Tethered to the bed — Ruth by intravenous drip, I by her shaking hand — we sketched blueprints for a life beyond hers. We imagined surreal posthumous plans for child-raising, finance and romance. She wanted me to love again, she said, her cheeks wet. ‘But not right away.’ She laughed. The phrase ‘life goes on’ is as vague as it is common. It has a sad absurdity to it. Our life did go on: there was still laundry to wash, dinners to cook, bills to pay. There was still the generosity (and sometimes frightened cruelty) of family. We wanted our life to go on. We wanted to leave the ‘sick country’ of jaundice, sanitised hand-kisses, and nurses in face masks, for the motherland of ordinary intimacy: of spousal tiffs and kids’ mess. After a week, we did migrate. Ruth came home, but she was quarantined: a tightly shut bedroom door between the lands of sickness and health. And two lots of laundry, dishes, meals: one for each territory. At the end of every day, I’d put the children to bed and collapse at my desk. Instead of the manuscript I’d promised to my publishers, I had a double domestic ‘to do’ list. One evening, towards the beginning of April, I was at my desk again. Exhausted and having just ticked off my daily tasks list, I propped my chin in my hands, and gazed, glassy-eyed, out of the window and into the garden. What I saw was something new, a fat burgundy camellia blossom. The day before there had been no flower. Just the usual phalanx of waxy shields. Now there was this rich red announcement: autumn was here. The Melbourne summer — this drought we Antipodeans call ‘climate’ — had not been kind to the camellia. Too many days of baking heat, cloudless skies, and hot northerly winds had burned the leaves and starved the roots. No country for old trees. And just when this exotic émigré, thin and brittle, looked like it was done for, it marked the calendar and bloomed: big, fat flowers, right on the first day of the fourth month. The garden offers a new kind of thinking — a necessary schooling in emotional generosity It was a strange consolation. My wife was no less weak. My blood pressure no lower. But in the camellia, I discovered a world of silent profligacy — life goes on, it seemed to say, and indifferently so. Its blossoming suggested a universe of regularity, opposed to one of duty or responsibility. The plant did not care about sterilised cutlery or quarantined laundry. This clockwork-like thing was the antithesis of everything I was and felt. It was an invitation to a more anaesthetised life: to look at my own anxiety, and my wife’s pain, with an automaton’s eyes. All gardens, not just mine, are rich terrain for philosophical and psychological fantasy. 
As fusions of humanity and nature, they are creations that reveal, literally and figuratively, what we make of nature. The camellia allowed me to ignore humanity and celebrate the ‘natural’, which at that time seemed an unthinking, unfeeling life. It represented my own longing to feel nothing. As my wife slept in her sealed sickroom, I needed this vision of a detached world, with its seemingly mechanical cycles. The numbness did not last, though. I didn’t actually want it to. But it offered temporary relief from my cares and commitments. Human necessity was replaced with a necessity more ‘natural’ — in the deist meaning of the word, as in Alexander Pope’s ‘vast chain of being’, from his poem ‘An Essay on Man’ (1734): ‘All nature is but art, unknown to thee; / All chance, direction, which thou canst not see; / All discord, harmony not understood; / All partial evil, universal good: / And, spite of pride, in erring reason’s spite, / One truth is clear, whatever is, is right.’ Jane Austen, a keen fan of Pope (‘the one infallible Pope in the world’, as she quipped), discovered this deist consolation in her own Chawton garden. When Austen wrote to her sister Cassandra that an apricot had been ‘detected’, this was more than mere trivia. It was the recognition that, beyond the knotted ties of family, or the labours of fiction, the universe was ticking away nicely. It was not interested in the price of fish, or the status of the novel. It was not — in my case — listening to prayers or bribes. Instead, like Elizabeth Bennet’s Pemberley, this was a garden that invited, not soliloquy, but silence. Gardens shape our thinking and feeling. They offer not only floral beauty or shaded comfort, but also ideas. Many authors have sought, in parks and back yards, some intimation of the divine, or invitation to virtue. These intimations are as diverse as our interpretations of humanity and nature — to say nothing of variations in geography and climate, or temperament and mood. For example, while the philosopher Jean-Jacques Rousseau’s pathological selfishness leaves me cold, I can identify with his botanical curiosity. In his letters on botany, and in ‘Reveries of a Solitary Walker’ (1782), we see another side of Rousseau: less the paranoid misanthrope, more the Enlightenment naïf, holding back his egotism to better understand the needs of the sweet pea or daisy. I do not have Rousseau’s botanical mania, but the fascination is familiar: peering inside the shaft of the peace lily’s white bract, and trying to fathom its reproductive secrets. Where is the ovary, exactly? And why there? Why a spadix – a column covered by spiky florets – instead of a normal flower? And what is the small white flange at the spadix’s base? These questions asked me to put aside my human requirements and wondering, and explore those of the plant. It is a lesson in patient analysis, but also in a kind of generosity: imagining life other than my own. I needed less of this virtue while Ruth was ill. Harassed equally by domestic chores, children’s demands, and my own horror, I craved a break from human intimacy. I discovered, in the camellia’s blooms, a brief break from obligation and emotion. But Ruth is now recovered: her skin’s yellow has faded, her strength almost returned. And with this release from her dependence comes my familiar egocentrism: the covetous kid within, who mumbles ‘me, me, me’. I start to think less of others, and more of myself: my deadlines, antsy longing for solitude, and the grey zone between need and want. 
Because of this, the garden no longer needs to numb my sympathies as it did. Instead, it has become a necessary schooling in emotional generosity. This is the commitment, identified by Iris Murdoch, to really see beyond myself. ‘Goodness is connected,’ writes Murdoch in The Sovereignty of Good, ‘with the attempt to see the unself, to see and to respond to the real world in the light of a virtuous consciousness’. If the garden cannot make me ‘good’, it is an invitation to be better This means being more sensitive to the signs of flourishing in others. I stop walking to the front door in the midsummer’s scorching heat, reverse my steps, fill the watering can, and give the tomatoes a drink. Not because I want to eat them, or because their death is my failure, but because their browned, wilted leaves pull me out of my own longing for air-conditioned comfort. I might be dehydrated and grumpy — everyone is in Melbourne’s January heat. But it will not kill me to spend three minutes watering the dry, powdery garden bed. The stakes are higher for the poor fruits and vegetables. The garden has become an opportunity for me to expand my sympathies; to think and feel, as did Rousseau, of the demands of alien life. If the garden cannot make me ‘good’, it is, in this, an invitation to be better. And if not better, then at least less conceited about my own achievements or craving for control and mastery. The morning glory vines slowly cracking the window frames of this study; the couch grass invading the asbestos-sheeted greenhouse; the Salvia reaching out across the porch. All examples of human contrivance being continually undermined. This was the writer Leonard Woolf’s outlook, too. An avid gardener, he nonetheless wrote that in fact ‘nothing matters’, by which he meant that no achievement had any divine guarantor. No achievement was destined for immortality or eternal reward. The struggle to maintain some civilised order in the household was mirrored in statecraft and psychology: unrest and insanity were never far away. Woolf saw this, not only in the rabid jungles of Sri Lanka (where he was posted as a colonial administrator), and in his Sussex garden, but also in the mind of his beloved wife, Virginia. ‘Everyone,’ he wrote in his memoir Beginning Again (1964), ‘is slightly and incipiently insane.’ The point can be generalised: everyone is also slightly and incipiently ill. In Woolf’s vision, the garden is no easy consolation, but is instead a private reminder, a way to recognise the limitations on all human enterprise, and a caution against false expectations for worldly perfection and control. The comforts of a silent camellia might blunt this truth, but they cannot — and ought not, for the sake of a more lucid life — remove the thorns. | Damon Young | https://aeon.co//essays/gardens-cannot-make-us-good-but-they-invite-us-to-be-better | |
Medicine | Traditional Chinese medicine is an odd, dangerous mix of sense and nonsense. Can it survive in modern China? | A few minutes after getting her traditional Chinese medicine injection in a hospital in Chongqing, southwest China, 25-year-old Zhang Mingjuan began hyperventilating. She’d had only a slight fever, but wanted to try the appealing combination of traditional medicine with the more rapid vector of a jab. Now she felt like she was dying, and she passed out. In the hospital emergency room, where she awoke, she was told that quick treatment had saved her life from the allergic reaction to the shot — a mixture of herbs and unlabelled antibiotics. Later, doctors told her that she would have been better off sticking to hot water and aspirin. The combination of traditional medicine and hospital setting, of pseudoscience and life-saving treatment, might seem strange. But in modern China, traditional Chinese medicine (TCM) is not the realm of private enthusiasts, spiritual advisers or folk healers. It’s been institutionalised, incorporated into the state medical system, given full backing in universities, and is administered by the state. In 2012, TCM institutes and firms received an extra $1 billion in government money, outside the regular budget. TCM as a whole is a $60 billion industry in mainland China and Hong Kong. In pharmacies, TCM prescriptions are jumbled on the shelves alongside conventional drugs. Staff often see little difference between prescribing one or the other and don’t tell patients whether they’re receiving TCM or conventional treatment. Approximately 12 per cent of national health care services are provided by TCM facilities, although that figure includes conventional medicine done at TCM institutions. Every major Chinese city has a TCM hospital and university. While folk medicine shops have the cluttered appearance of an alchemist’s den, institutionalised TCM presents itself as clean, organised and scientific, with staff, even administrators, bustling around in white lab coats. The majority of TCM drugs are sold in foil packets and shiny capsules. Yet the theoretical underpinnings of these treatments are essentially pre-modern. Traditional Chinese medical theories see the body as composed of the interaction of different elements, processes, and fluids: the elements of fire, water, earth, metal, and wood; the interplay of yin, yang, and ‘qi’ (the life force). Each of these comes with its own correspondences: fire matches the south, red, heat, the heart, and the tongue. The body is a microcosm that mirrors the macrocosm of the universe, a grand design reflected in each person’s form. Illness arises when excesses disrupt the balance between the elements, manifesting in wind, fire, cold, dampness, dryness, and heat. Nature provides symbolic clues to treatments that can fix these imbalances: a herb that looks like the heart, the hand or the penis can be used to treat ailments in those body parts. Animals, too, carry cures within them: the roaring power of a tiger can be extracted from its bones; the strength of an ox from its gall stones. Many of these ideas are recognisable from pre-modern Western thought: the four humours of Galenic medicine or the principles of a monastic herbarium would not be foreign to Chinese healers. 
Nor would Leonardo da Vinci’s Vitruvian Man, limbs outstretched, and encapsulating in the human body the proportions of the universe, look out of place in a Chinese medical classic. Humans look for patterns, and for projections of themselves into the universe. Beautiful and intricate as they are, these theories don’t correspond with the messy realities of bodies cobbled together by the long randomness of evolution. The human body isn’t a mirror of cosmic realities any more than it is a perfectly designed machine, but a clumsy improvisation, full of incompetent or redundant parts. Traditional Chinese medical theory suffers from the same problems that Renaissance thinkers identified in astrology and other pre-modern sciences: ‘This idea is very pretty, rather than natural and true,’ as the 15th-century philosopher Giovanni Pico della Mirandola wrote. Like the bodily humours, the equally immeasurable yin, yang and qi, as well as the body’s ‘meridians’, belong to the realm of spiritual and psychological practice, not scientific inquiry. That doesn’t render them less real, but it’s a shaky basis for a biological theory. Still, even if traditional theories of medicine do not describe bodily reality, they are important in other ways. The idea that an illness might be a symptom of a lack of balance in our lives resonates powerfully. The idea of spirit in all its guises still infuses Western culture, but it can’t be measured any more than qi can. Nor any more easily dismissed: Robert Burton’s The Anatomy of Melancholy (1621) is suffused with humours, astrology and demons, but it’s still a book of wisdom and insight — both into our own minds and those of Burton’s day. The spiritual and psychological insights of past Chinese writers and thinkers on medicine are still meaningful, and they provide us with keys to other great texts of the Chinese past. Just as a student of Shakespeare needs an understanding of Galenic medicine — ‘I’ll curb her mad and headstrong humour,’ says the choleric Petruchio of his equally volatile new wife in The Taming of the Shrew — so a student of a Chinese classic such as The Dream of the Red Chamber (1791) needs an understanding of TCM. But though these insights might inform good medical practice, or the ways in which we treat our own bodies, they’re not a basis for science, or for reproducible treatment. For all that, TCM ideas suffuse Chinese popular thinking about health and there’s a fierce defensiveness associated with them. Opposition to TCM makes a person stand out, even when the critic is inside Chinese culture. In the ‘anti-TCM’ group on the social media site Douban, with its symbol of a crossed-out yin-yang sign, posters share experiences of bitter family arguments. Wu Meng, 25, is firmly opposed to the practice. ‘I really like [the popular scientific crusader] Fang Zhouzi’s books,’ she told me, ‘And anyone who thinks can see that TCM’s just rubbish, and not scientific at all. But even educated people believe in it. My boyfriend is in finance, and super-smart, but he has a whole drawer full of this crap. My mother is a [conventional] doctor, but my family thinks I’m just against TCM out of contrariness, and that I’ll change my mind.’ At the public level, opposition comes with greater costs. Zhang Gongyao, 56, started studying TCM in 1974, as a ‘peasant straight out of senior school. Because of the Cultural Revolution, I had lost my hope for a reliable future. 
So I studied and practised TCM in the hope of a reliable future.’ Over the years he lost faith in TCM, especially in the institutionalised system. He became a professor of philosophy, specialising in medical history, at the Central South University in Hunan, and in 2006 launched an online petition calling for the removal of TCM from the government-run medical system. Although it was signed by more than 10,000 people, it was dismissed by the State Administration of Traditional Chinese Medicine (SATCM) as a ‘farce’, with Zhang accused of being ‘ignorant’. ‘Since then,’ he said, ‘I have borne a lot of pressure from the government, from the university, and from the existing TCM institutions. I can’t publish my papers freely; I’m blocked from the normal promotions and salary raises; and I can’t even always lecture to my students.’ Zhang’s fate is not unusual for anyone who challenges a government institution in China, whatever the area. But why has TCM retained such power and influence — both popular and official — when traditional medicine in China’s neighbours, such as Korea and Japan, has been pushed to the margins? The institutionalisation of TCM was not inevitable. It arose out of China’s damaged encounters with the West, out of the ideological struggles of the 1930s, and the political needs of the early People’s Republic. And like most traditions, from kilts to Christmas trees, it’s a lot younger than people think. Until the 19th century, there was no such thing as ‘Chinese’ medicine in China, just medicine. This encompassed an eclectic and often-changing range of treatments and practices that generally harked back to ancient medical texts, such as the Huangdi Neijing, the ‘Inner Classic of the Yellow Emperor’, but it was also willing to experiment and innovate. Like European medicine, it could be empirical and curious: the Neijing, for example, stresses the importance of taking case histories. Given that ideas were transmitted along the Silk Road from Europe, India and the Middle East, and vice versa, the resemblance between TCM and medieval European medicine is probably not all parallel development. When Chinese doctors first encountered European medical ideas, they did so as curious equals, willing to concede that the newcomers had some things right, but also recognising that other treatments and beliefs lagged behind Chinese practice. Before the mid-19th century, a patient was probably better off going to a Chinese doctor than a Western one; the odds of either being helpful were slim, but at least the Chinese doctor, thanks to a disdain for internal intervention, wouldn’t slice you open with unsterilised instruments. But as Western medical science was revolutionised by germ theory, by anaesthesia, and by public sanitation, the gulf between it and medicine in China widened, coinciding with China’s growing unease about its place in the world. Humiliated in the Opium Wars (1839-1842, and 1856-1860), threatened from without and collapsing from within, Chinese intellectuals struggled to see the path forward. For some, it meant harking back to lost greatness; for others, it meant abandoning the old for newly imported, superior methods. 
‘Substantiate in detail the theory that Western methods all originate from China,’ asked one exam question for applicants to the civil service after the exams were revised in 1900, while ‘Explain why Western science studies are progressively refined and precise’ asked another. In 1890, the Qing scholar Yu Yue published a full broadside against tradition, On Abolishing Chinese Medicine, after losing his wife and children to illness. In 1896 Lu Xun, China’s greatest modern writer, watched his father die as the family’s wealth was squandered on increasingly expensive and rare traditional treatments; he later trained as a Western doctor in Japan in reaction against what he called the ‘unwitting or deliberate charlatans’ of traditional medicine. In one of his bleakest stories, ‘Medicine’ (1919), a family desperately seeks a magical cure in the blood of an executed rebel. The Nationalist government of the 1920s took a far greater interest in public health, seen as a vital part of China’s revival. Strong individual bodies meant a strong national body, one no longer seen as ‘the sick man of Asia’. With this came the need to organise and regulate doctors. But traditional and Western doctors had formed separate medical associations, each with a keen sense of their own importance. When, in 1929, the ministry of health proposed to abolish traditional medicine entirely, TCM doctors promptly called a nationwide strike, closing pharmacies and clinics across the country. As a result, two separate and parallel government institutions were created to deal with doctors — one ‘Chinese’ and one ‘Western’. Despite the government push to abolish TCM in 1929, in 1935 the Nationalist Party congress passed a resolution demanding ‘Equal treatment for Western and Chinese medicine.’ The new Communist government of 1949 retained this legal structure, even though Chairman Mao had no time for Chinese medicine, dismissing its practitioners as ‘circus entertainers, snake-oil salesmen or street hawkers’. Yet in a country devastated by war and badly short on doctors of any kind, the vast numbers of traditional healers, and the institutions and regulations already in place to manage them, were a valuable resource. It was the Communist government that coined the term TCM, formally founding the State Administration of Traditional Chinese Medicine in 1954, and establishing many new TCM universities and institutions in the next few years, where TCM was formally stripped of its most obvious ‘superstitious’ elements, such as astrology and phrenology. The relentless drumbeat was on ‘scientification’ — the belief that the huge range of traditional practices could be systematised into an alternative national theory to ‘Western medicine’, or even integrated into broader medical theory. Institutionalisation allowed TCM to survive the Cultural Revolution, and the earlier purges of traditional culture. At a time when virtually every other traditional practice — from religion to music to literature — was being consigned to the bonfires, TCM practitioners had some ideological and governmental shelter. Itinerant or independent practitioners outside of the umbrella of the SATCM were still humiliated and imprisoned, as were famous professors tainted by their pre-revolutionary practice. 
And TCM universities, like all other schools and universities, were closed for 10 years from 1966 to leave students free to take part in the ‘revolutionary struggle’. But by de-emphasising the ‘tradition’ part and emphasising its ‘Chinese-ness’, the advocates of TCM were able to tap into the enthusiasm for ‘people’s science’ and weather the storm. Xixi, 23, a master’s student at the Beijing University of TCM, explained her own decision to study TCM, as she peeked at me from behind pink glasses over a pink face mask. ‘I grew up in Shandong, the birthplace of Confucianism. And so I’ve always been interested in Confucian ideas, and in traditional culture in general. I like the idea that “the one thing is connected to the hundred things”. My parents didn’t have the chance to explore traditional culture because of the Cultural Revolution, so they were very supportive of me doing so.’ For Xixi, as for many modern Chinese people, TCM represents a cultural continuity that powerfully resonates for those looking for the past. That ability to survive is one of the major reasons for the popularity of TCM today. Virtually every other aspect of traditional Chinese culture has been shattered, sometimes beyond repair. An entire generation or more was lost, and that hollowness, the sense of something ripped out, still echoes through contemporary China. Despite its massive economic growth, China is still a deeply uncertain country, especially when it comes to its place in the world. Belief in TCM is a comforting national myth. The West might have invented modern medicine, but China has something just as good! Such pride can blend into pure ethno-nationalism: I have twice been told that ‘The reason Westerners don’t believe in TCM is that it only works on Chinese bodies.’ TCM’s claims of being ‘natural’ are also highly appealing in a country where everything from dumplings to baby milk to river water can be toxic. Talking to an acupuncture student, I suggested that science could identify the chemicals in herbal medicines. ‘Herbs don’t have chemicals!’ she protested sharply. ‘Chemicals are from factories!’ There are more practical reasons for the popularity of TCM. China, which once had an equitable, if backward, health care system, was ranked 144th in the world for public access to health care, according to a report by the World Health Organisation in 2000. While TCM can be expensive, it’s considerably cheaper than conventional treatment, especially if surgery or scans are involved. For the poor, TCM or folk medicine can offer hope where conventional medicine closes its doors. A pot of herbal medicine boiled on the stove might not cure a leukaemia victim or substitute for unaffordable dialysis, but it provides the small comfort of doing something. The Chinese public also distrusts conventional doctors, and with good cause. For starters, the level of education and training in the conventional health care system is astonishingly low. Only about 15 per cent of ‘doctors’ in Chinese hospitals have an MD, another 20 to 25 per cent have MAs, leaving the vast majority with only bachelor’s degrees in fields related to medicine or biology. Since doctors are severely underpaid, bribery is common, as is over-prescription of both expensive treatments and costly, sometimes fake, drugs. 
Public anger shows itself in many ways, from online applause for patients who’ve killed doctors in disputes over payment, to the angry crowd that stormed and wrecked a hospital in Guang’an, Sichuan, in 2006, after it was said that a three-year-old boy who had swallowed pesticide was refused treatment because his grandfather didn’t have cash in hand. Unless you pay through the nose or pull strings, Chinese hospital treatment is a nightmare of bureaucracy, queues, and competition for the doctors’ attention. Some years back, I went to a midlevel Beijing hospital with food poisoning. Getting seen meant going to the main window, paying a fee, being given the name of a doctor on another floor, going to find him, paying a fee, talking to him for two minutes as other patients pushed between us clamouring to be seen, going to have my blood drawn by a nurse, paying another fee, going to the test centre to have it checked, paying a fee, going back to see the doctor carrying a vial of my own blood, pushing past other patients, and being prescribed some drugs and put on an IV drip for three hours in a hospital corridor on a hard plastic seat. For which I paid a fee. In contrast, going to a TCM doctor is much like going to an alternative medicine practitioner in the West. You spend half an hour or longer talking with a nice, kind, probably quite wise person about your health, your lifestyle, the stresses you’re under, and they give you some sensible advice about diet, looking after yourself, and perhaps a dose of spiritual guidance on top. Despite its institutional, cultural and popular backing, TCM is always under threat from the competitive edge of conventional medicine. There are outright charlatans who will prescribe TCM for cancer, but the TCM practitioners I talked to all said that for serious problems, with observable and immediate symptoms, they refer patients to conventional treatment. At the Beijing Hospital of TCM, the bulk of the treatments offered were conventional. TCM is strongest where conventional medicine is weak — chronic back pain, migraines, persistent fatigue: elusive conditions for which most doctors, short of an immediate cause, such as a tumour, will essentially throw up their hands and turn to general lifestyle and diet advice. But unlike TCM, evidence-based medicine advances both suddenly and surely. For decades, erectile dysfunction made up a significant proportion of the TCM market, both in China and overseas. But with Viagra’s entry into the Chinese market in the early 2000s, the use of TCM has shrunk rapidly. A 2005 study in Hong Kong found that a large percentage of the TCM users surveyed had switched to Viagra, even though they stuck with TCM for other everyday ailments. Alongside this, the price of seal penises, once one of the most valued remedies, has dropped dramatically. But some practices have surged in the past 30 years — and while many of them might have been sensible in the context of an agricultural, pre-modern society, they are actively harmful today, mixed as they are with a false sense of modernity and packaged as medical necessity. Take the insistence upon the ‘sitting month’, or ‘golden month’, a period of 41 days of bed rest for women post childbirth. In a rural society in which women did much of the field work, this was a precaution against infection, and a way to save women from being forced back into manual labour too soon. 
Parallel practices existed in western Europe, such as the Biblically derived idea of the ‘churching of women’, a blessing given to mothers 40 days after childbirth, which metamorphosed into the ‘lying-in’ or ‘confinement’ of the 19th century. By the 1940s, Western gynaecologists — newly aware of the dangers of thrombosis caused by immobility — abandoned confinement. Meanwhile, in modern China, the practice was elaborated: not only are there numerous taboos derived from TCM theory, such as that new mothers avoid cold water and raw foods, but the spectre of cancer is invoked to threaten non-believers. ‘My mother didn’t undergo confinement,’ a former colleague in her 30s told me tearily. ‘And that’s why she died of cancer so young, just 15 years later.’ A fear of the modern has crept in, too: new mothers are told to avoid even showering or watching TV. Another resurgent practice, one that we saw Zhang Mingjuan fall victim to at the start of this essay, is the giving of ‘TCM injections’, which offers the double placebo of supposed herbal benefits with the reassuring presence of the needle. This practice was heavily pushed in the 1980s when the government was keen to promote TCM. According to Fan Minsheng, a professor at the Shanghai University of TCM: ‘During that time, before they put those TCM injections to the market, they didn’t go through the testing processes that Western medicines were subject to.’ In 2012, TCM injections caused more than 170,000 cases of adverse drug reactions, according to figures from the Chinese authorities. Indeed, the most obvious harm done by TCM is the side effects — and the abysmal failure of both the industry and individual doctors to warn patients about them. It’s routinely claimed that TCM has either fewer side effects than ‘Western medicine’, or no side effects at all; the first is at best unproven, the second is an outright lie, but it is the one most often spouted by experienced doctors nonetheless. Sara Nash, an Israeli grandmother, recently spent a week undergoing TCM treatment for chronic back pain in Hong Kong. However, even she baulked when the doctor prescribed a range of herbal medicines insisting that not only were there no side effects, but there could be none. In reality, conventional hospitals often find themselves dealing with the side effects of TCM. ‘I personally see at least one person a week with side effects from TCM,’ one doctor working in a major Beijing university hospital told me. I have myself witnessed a friend’s bruised foot swell so grotesquely after he was given TCM that it looked like a special effect from Alien. As a result, he was laid up for two weeks in a conventional hospital. My colleague Kath Naday suffered partial paralysis of her throat and stomach after her diarrhoea was given the TCM treatment, which caused her unspeakable agony until she vomited up the drugs. Even worse damage can be done by the outright fraudsters who hang off the fringes of TCM. Hu Wanlin was in prison for manslaughter when he opened a medical practice in 1993. Upon his release in 1997, he set up hospitals in the Shaanxi and Henan provinces. His remedies, which contained lethal doses of sodium sulphate, were suspected of killing 146 people in the Zhongnanshan Hospital of Shaanxi alone, and in 1999 he was finally arrested. He is now serving a 15-year sentence for murder. 
Shamefully, if unsurprisingly, China’s mainland government has done far less about issuing alerts about dangerous or toxic TCM treatments than the authorities in Hong Kong and the UK. To pick a few examples from the past four years, Anshen Bunao Pian pills, used for treating insomnia, contain 55 times the Chinese mainland’s legal limit for mercury. Zheng Tian Wan, a popular migraine treatment, is packed with aconite, causing potentially fatal heart palpitations and kidney failure. More than 60 per cent of China’s TCM products are blocked from export, according to the World Federation of Chinese Medicine Societies, a government-approved industry group. Around 30 to 35 per cent of TCM drugs, according to UK and US studies, contain conventional medicines. One of the Beijing pharmacists I spoke to, a thoughtful middle-aged man, freely confirmed this. ‘The Western medicine is so that people get quick relief,’ he told me, ‘But then the Chinese medicine treats the long-term issues they might have.’ Yet he couldn’t tell me exactly what conventional products were contained in the TCM drugs he sold. The conventional ingredients are unlabelled, often in doses far in excess of the norm, or combined with substances that should be available only by prescription. Painkillers are most common, but TCM skin creams often contain powerful steroids that are harmful to children. And in an effort to recapture the market, modern TCM erectile dysfunction products have been found to contain four times the usual dose of Viagra’s competitor Cialis. The quest for magical ingredients in TCM has also taken a heavy toll on Asia’s wildlife. TCM institutions have officially discouraged the use of endangered animals, but the practice continues even among those who should know better. In 2003, I took a group of Chinese Buddhists to a conference on Buddhism and the environment in Ulaanbaatar, Mongolia. One of them took me aside the day they arrived. ‘Where can we go to buy animal parts?’ he asked me conspiratorially, ‘Tiger, eagle, snake? For medicine! For men’s health!’ The official answer to these problems is further ‘scientification’. Most new government money for TCM goes to ‘scientification institutes’, and hundreds of TCM trials are published every year. But as the anti-TCM campaigner Zhang Gongyao said: ‘The so-called scientification of TCM has been going on for 80 years now, and still has no positive results. Some researchers want the chance to get more money from the government, and scientification is a good target for that.’ Yet scientific, or even ‘scientificated’, treatments might not satisfy the same emotional or symbolic needs that TCM does. The active ingredient in bear bile, ursodeoxycholic acid, has long been identified, synthesised, and proven to be an effective treatment for breaking down gall stones. Yet hundreds of thousands of mainland customers still insist on buying costly bear bile products produced through painful extraction from the gall bladders of live bears. They are backed up by officials like the SATCM director Wang Guoqiang, who falsely claimed in 2012 there is ‘no substitute’ for live production. Besides, the magical association with the bear’s strength and the ‘natural’ production are far more significant to TCM users than the actual effects of the drug. Real proof would need a massive improvement in the rigour of lab work. 
I am, at best, an interested amateur when it comes to research methodology, but reading TCM trial reports first-hand makes me wince when I come across sentences such as: ‘We set the control group at half the size of the experimental group, because it would be unethical not to have given more people an effective treatment.’ In the TCM trials published on the mainland, negative results are vanishingly rare. A systematic review of TCM trials conducted in 2009 by the Cochrane Collaboration found that most trials suffered from poor or incomplete data, and expressed severe concerns about methodological flaws. In one review, the mean average of 7422 surveyed Chinese TCM trials on the Jadad Scale (a standard measurement of quality) was 1.03 out of a possible five, excluding the vast bulk of them from inclusion in clinical reviews. Another assessment, conducted entirely by Chinese researchers, found that only four per cent of some 3000 TCM trials surveyed used adequate methods of blinding and allocation concealment. Edzard Ernst, emeritus professor of complementary medicine at the University of Exeter, said that ‘the most fundamental problem is that TCM researchers use science not to test but to prove their assumptions. Strictly speaking, this amounts to an abuse of science. It introduces bias on all levels and to such a degree that it is often impossible to identify on the basis of the published research.’ I was told of one TCM case at a provincial university in China where a PhD candidate was instructed by her supervisor to test whether a particular remedy he was keen to promote impeded cancer in rats. When the rats proved as cancerous as ever, he forced her to fake the results. Poor methodology, aside, there’s a more fundamental, philosophical problem: if traditional Chinese treatments or medicines are proven to work, then they stop being TCM and simply become part of the corpus of global evidence-based medicine. And as Yu Hsien, a doctor writing (pseudonymously) in 1933, aptly noted: ‘The day Chinese medicine is scientificised is the day it becomes cosmopolitanised.’ For me, the vision of sifting the vast range of traditional Chinese treatments through the sieve of evidence — sorting placebo from non-placebo, discerning active components, becoming aware of side effects — seems like a heroic national project, one that would put China on the scientific map and benefit all of humanity. But the idea of applying rigorous evidence-based methods, ultimately eliminating the idea of a separate TCM itself, is unacceptable to institutionalised TCM. ‘Among the scientification researchers, most of them have been refusing to conform to the “Western norm of science” in their lab results, for it is thought to be “unsuitable” for TCM,’ Professor Zhang Gongyao wrote to me, frustrated. ‘The researchers of TCM have no interest in eliminating the placebo effect in their lab work.’ There are also claims that standard research methods simply aren’t applicable to TCM, because ‘the treatment must differ for each individual’, or because ‘a suitable placebo can’t be used’. This massively underestimates the ingenuity of evidence-based researchers in designing robust, reproducible lab tests. German researchers devised a ‘sham acupuncture’ needle in 2001, allowing a plausible placebo, while there have been numerous tests incorporating individualised herbal treatments. ‘There are many adaptations of the trial design which allow us to incorporate virtually all the needs of TCM,’ Edzard Ernst has noted. 
Other practitioners still have genuine philosophical objections to the idea that ‘Western metrics’ must be the only measure of medicine. But both in my wide reading and in some teeth-grindingly frustrating conversations, I have never heard or seen a plausible alternative heuristic proposed. The most common suggestion is that Chinese medicine is simply and purely ‘empirical’, that its efficacy can be judged from experience and practice. It all depends on the jingyan, the ‘experience’ of the doctor, individualised and localised, and passed down from master to favoured apprentice. It ascribes an almost magical intuition to the wisdom and skill of individual doctors, while ignoring the measurable realities of treatment. And yet, in turn, that experience shouldn’t be dismissed. Whatever the failings of TCM, the skill of individual doctors in dealing with and reassuring patients, if not always curing them, is often visible. When it comes to the lives and health of ordinary Chinese people, the individual experience of good TCM practitioners could be a valuable resource for doctors, both in spanning cultural bridges and in pointing up everyday factors and beliefs that might hinder or benefit treatment. But this, like the other good things within TCM, cannot be done as long as the pretence that TCM itself is a valid scientific theory continues. Chinese traditions can be wonderful. They can give everyone, not just the Chinese, ways of thinking about how we live and how we see our bodies, and about our relationships with the world and each other. Chinese medicine could be wonderful, too. It could draw upon a rich history of experimentation and curiosity, a broad pharmacopoeia, and a deep concern for the poor and vulnerable, all tempered by modern methods. Both could enrich humanity and be a source of valid national pride at the same time. But for that to happen, Chinese tradition and Chinese medicine alike have to be cut free from the carcass of TCM. | James Palmer | https://aeon.co//essays/traditional-chinese-medicine-needs-its-own-revolution | |
Childhood and adolescence | Is our culture quietly hostile to something deeply important – loving our children in a genuine and attentive way? | There’s a liminal moment, at the end of each day, when I pull my son’s door to and whisper goodnight. My daughter’s door is already part-closed, and I hope she is sleeping. As I descend the stairs, doing my best not to clump on the painted steps, a layer of awareness slips from my shoulders. By the time I’m at the bottom, I’m mummy no longer, I’m just myself. Some nights, the moment passes without notice. Friends are waiting between dinner courses, or emails blink. All too often, half-made sandwiches sit open on a wooden board. It’s a bittersweet moment: a hint of how I’ll feel when my children – already teenagers – really do leave me. I love them too much to want to be free of them. Instead, I hug the feeling to myself as I pass from the mother I am in the hall upstairs to the woman I am in the rooms below. I like to imagine that when my mother bundled my three sisters and me into bed, at the end of each day, that she could rest easy – at least until morning. Perhaps this is just a story I tell myself. However, I don’t recall her stealing into her study after our lights were out to start her ‘real work’ – as I usually do after settling my children. Admittedly, she didn’t have to work to support us financially but neither did most of her friends. That, after all, was what husbands were for. When I was a teenager, I made a vow never to become like my mother. I would never sacrifice myself to family in the way my mother seemed to have done. I would put my own work first – whatever that would be. I would be true to my creativity, my life’s purpose – and would never be swayed by how my teenage offspring might judge me. But I was wrong. Or perhaps just young, which isn’t quite the same thing. The year I turned 30, my mother visited me in London, where I’d lived for many years. One chilly morning, we walked across Hyde Park and had tea overlooking the Serpentine. Mid-conversation, my mother put down her cup and came straight to the point. ‘Helen,’ she said, with her familiar emphasis on the first syllable. ‘Helen, if you’re going to keep on pushing in your career in the way you’ve been doing, you’re clearly too selfish to even think about starting a family.’ My cup clattered on its saucer. I didn’t like what she’d said, but I heard it. Yet, as it turned out, I wasn’t too selfish to have children. If anything, my problem has been the opposite. My husband would complain that I haven’t been selfish enough. Like Odysseus strapped to the mast, trying not to hear the sirens’ song, I can’t not hear my children’s calls, even if they sing me off course. I didn’t become a traditional mother in order to follow in my mother’s footsteps. Yes, a traditional way of doing things chimes with my personality. But it went deeper than this. It was a matter of beliefs. When I had my first baby, I came up against my beliefs about how best to care for him. ‘The more slowly trees grow at first, the sounder they are at the core, and I think the same is true of human beings.’ The moment I read this comment, by Henry David Thoreau, it rang true. The more slowly my son grew up – I felt – the more he leant on me along the way; the less jarring the external demands on him were, the sturdier he’d end up on the inside. And with any luck, the more complete his independence from me would one day be. 
From my son’s earliest days, I knew that I couldn’t give him religious certainty. And existential security was beyond me, too. Instead, from that time, and for the hundreds of weeks that followed, I gave him myself. And maybe, by the time he works out that that isn’t enough, he’ll be old enough to find his own sources of meaning and certainty. This has been my gamble. (So far, so good.) It’s clear that in my children’s minds I’m a traditional mother (though they would never use that term). This is no mistake. I’ve let them take possession of me over the years, encouraged it even. I’ve wanted them to take me for granted – just as my mother did with my sisters and me. Not because I want them to grow into petty tyrants, but because I’ve always felt that their development depends on it – that, by leaning on me, they’ll grow out of me all the quicker. Just as I did with my mother. ‘And what about fathers?’ I imagine a voice calls out from the back of the room. The Harvard professor of literature Susan Rubin Suleiman had, I think, the right response: ‘To know that a man is a father is generally less of an indication of how he lives his life, than it is for a mother.’ Mindful (and envious) of the exceptions, this is what I’m getting at. My husband is active in our children’s lives – no question. However, his involvement doesn’t extend to keeping track of appointments, organising school clothes, filling lunchboxes or returning library books. And it’s caring about the daily necessities – the circus of childhood – that is, for so many mothers, both fantastically demanding and weirdly rewarding. Thirty years ago, I left my mother at Tullamarine Airport in Melbourne. While I boarded a plane to London, she drove home to her now-empty nest in Adelaide. Having spent decades of her life barely sitting down, she suddenly had plenty of time in which to think. Although she’d never use these words, I’m pretty sure she believes that her four daughters’ quest for self-fulfillment is in hopeless conflict with family love. Might she be right? Is our culture quietly hostile to something deeply important – loving our children in a genuine and attentive way? More troubling, are we ourselves hostile to it? Do we really think that loving our children unconditionally is to spoil them? We might work a double shift, when it comes to housekeeping. I know my sisters and I do. But it’s the emotional double bind that’s the real agony. What, then, should I have done? On looking into my children’s eyes, should I have looked the other way and pushed on with my academic career (following the advice of my mentors, professors Isobel Armstrong, Steve Connor and Anthony Grayling)? Or was I right to take my children into my arms, and let the careers of others overtake mine? Some might say I lacked commitment – I didn’t lean in. Others that my mortgage wasn’t big enough. Still others that I’ve loved my children too much, that I’ve over-invested in my relationship to them. I am no retiring stay-at-home mother. I write and edit – most recently a lifestyle magazine in Tasmania, where I now live, that went spectacularly wrong. I review, give talks, and am involved in the world. But this paid work has fitted around my children’s needs. 
I am a traditional mother in the deep psychological sense of wanting what’s best for one’s children, no matter how inconvenient for oneself. An understanding that, when all goes well, is passed down from mother to child. I’ve longed to make my mark on the world, just as I’ve longed to be a good mother. Like my sisters, I’ve hankered for self-fulfillment and personal goals. Why else, having devoted myself to family for so long, would my ambitions still taunt me? On reaching 50 with a perfectly respectable career behind me, I sometimes feel that I’m on the back foot, my husband and peers having shot past. What credits my sisters and me in the eyes of the world, and to some extent our own, is the work we do on top of the families we raise. Every day, I pour as many hours into my family and housekeeping as into my writing and editing, yet I’m recognised only for what I do beyond the home. This might be no more than polite social shorthand, but it’s still a sign of our times. My children give me enormous pleasure and pride, a love so profound it escapes words. But my sense of identity and worth, and my inner buoyancy, stem from my work beyond them. If you called up my mother today, and asked her why her four daughters are always busy, she might sigh and reply, ‘Too busy to see me.’ However, the real answer lies elsewhere. My sisters and I are forever on the go because we’re determined to be more than ‘just’ mothers. We’re not content simply to put our children to bed at the end of each day and put our feet up until morning. We refuse to accept that love and ambition don’t go together: we’d sooner toe the party line that career and family are happy bedfellows than accept the awkward truth of how hard that is. Even if the price is to be forever on the go. Yes, we’ve sacrificed our free time. But at least, we tell ourselves, we haven’t sacrificed ourselves. No wonder my sisters and I find it hard to relax, given the push-me-pull-you nature of our desires. At once to be there for our families, yet also to get on in the world. Sadly, feminism – to which I was once fiercely loyal – hasn’t been much help. If anything, it’s compounded the conflict by giving me the go-ahead to do and to be anything. It’s a licence that often feels – late on Sunday night with a deadline looming – like yet another pressure to perform. This is why my sisters and I accept a double shift. Not because we’re domestic masochists in thrall to throwback gender norms. But because, if we’re to sustain a rich sense of ourselves, independent of family, we feel we have no choice. During my 20s I read all, and taught much, of Virginia Woolf’s work. Though I read her less now, I still find myself touched by To the Lighthouse (1927). In it, Woolf reflects on her upbringing, recalling family holidays by the sea. Reading this novel as a young woman, I assumed that it was about the passage of time – the way life happens to you, rather than the other way around. I assumed that Mrs Ramsay, the motherly central figure, was nostalgic. Mrs Ramsay harked back to a time when it was acceptable for a woman to credit her life through family, rather than her own life’s work. She didn’t even cook the beef dish that she served her family and guests, I thought waspishly. She just thanked her cook. However, now that I’m a Mrs Ramsay in my own family – sadly, minus the help – I find I respond to her very differently. These days, I admire Mrs Ramsay for being vitally present, even after her death. 
I see her maternal qualities seep into her every relationship – with her children, husband, house, garden and visitors. Above all, I see the way she holds everything together, and credits the lives of those around her as deeply valuable. Her gestures might be passing – the sock she knits might never be worn – and yet they build up into a kind of solidity that, now that I care for my own family as much as for myself, seems all too real. These days, I see my own mother – like nearly every mother I know – in a sympathetic light. I see her as noble and wonderful, no matter her failings. The good mother that I so railed against when young is now someone I aspire to be. She’s someone who conveys to her children that she’s on this earth for them alone, while yet holding firm to her own ambitions. She’s someone who feels privately convinced that her children will be living with her till kingdom come, while also knowing, in another part of her mind, that one day they’ll be off and away. And she’s someone who, though intimately acquainted with sacrifice, chooses not to dwell on it. But this is not an elegy for self-sacrificing mothers. For Mrs Ramsay’s story is a warning too. Losing yourself in a deep love for family is, well, just that. It’s to lose your way in the journey that we all make to be ourselves. What I now realise, and didn’t before, is how hard it is to devote yourself to children and not to lose your way, at least for a while. It’s finding your way back that, I now believe, is the crucial part. My realisation, unfashionable but true, is this. Because of the way I love my family, I enjoy my personal freedom more when I put them first. Until my children reach maturity, my first loyalty is to them. I have other callings, too – I’d have gone mad if I didn’t – however, looking after them is still my principal work. I’ll never put it on my CV, but it’s clear to me that this is what accounts for the holes in it. So why did I become a traditional mother, rather than the modern mother for which my feminist education – and nearly 20 years of working in publishing, higher education and psychotherapy in London – groomed me? Why did I risk being consumed by a role that might leave me high and dry, a cuttlefish at high tide? In part, I rather unexpectedly enjoyed being needed. Equally unexpectedly, I found being around my children very creative, far more than I’d been led to expect. Caring for them – loving them unreservedly and creating a way of life out of this love – has been a revelation to me. Least fashionably of all, I realised that my marriage might not survive if I didn’t bend, and that bending like a reed was far better than breaking something good. Family life has expressed a deep part of myself that was there, as a potential, well before I had children. Being at home with my children has given me an imaginative space in which to rethink every aspect of my life, in a way that the pressures of my previous life simply didn’t allow. Just as I had to get to know my children in every mood under the sun before I really understood them, so being around them has led me to know myself better. Yes, these past 16 years have marked a hiatus in my career. But they’ve also been a precious opportunity. I’m now much clearer about what I care about. I now know what I love enough to pursue. Perhaps, I say to myself, I had to let go of the old me before a new me – wiser, older and flawed – came out of the shadows. | Helen Hayward | https://aeon.co//essays/is-motherhood-always-about-self-sacrifice | |
Automation and robotics | Hod Lipson’s artificial organisms have already escaped from the virtual realm. Now he wants to send them out of control | In a laboratory tucked away in a corner of the Cornell University campus, Hod Lipson’s robots are evolving. He has already produced a self-aware robot that is able to gather information about itself as it learns to walk. Like a Toy Story character, it sits in a cubby surrounded by other former laboratory stars. There’s a set of modular cubes, looking like a cross between children’s blocks and the model cartilage one might see at the orthopaedist’s – this particular contraption enjoyed the spotlight in 2005 as one of the world’s first self-replicating robots. And there are cubbies full of odd-shaped plastic sculptures, including some chess pieces that are products of the lab’s 3D printer. In 2006, Lipson’s Creative Machines Lab pioneered the Fab@home, a low-cost build-your-own 3D printer, available to anyone with internet access. For around $2,500 and some tech know-how, you could make a desktop machine and begin printing three-dimensional objects: an iPod case made of silicon, flowers from icing, a dolls’ house out of spray-cheese. Within a year, the Fab@home site had received 17 million hits and won a 2007 Breakthrough of the Year award from Popular Mechanics. But really, the printer was just a side project: it was a way to fabricate all the bits necessary for robotic self-replication. The robots and the 3D printer-pieces populating the cubbies are like fossils tracing the evolutionary history of a new kind of organism. ‘I want to evolve something that is life,’ Lipson told me, ‘out of plastic and wires and inanimate materials.’ Upon first meeting, Lipson comes off like a cross between Seth Rogen and Gene Wilder’s Young Frankenstein (minus the wild blond hair). He exudes a youthful kind of curiosity. You can’t miss his passionate desire to understand what makes life tick. And yet, as he seeks to create a self-assembling, self-aware machine that can walk right out of his laboratory, Lipson is aware of the risks. In the corner of his office is a box of new copies of Out of Control by Kevin Kelly. First published in 1994 when Kelly was executive editor of Wired magazine, the book contemplates the seemingly imminent merging of the biological and technological realms — ‘the born and the made’ — and the inevitable unpredictability of such an event. ‘When someone wants to do a PhD in this lab, I give them this book before they commit,’ Lipson told me. ‘As much as we are control freaks when it comes to engineering, where this is going toward is loss of control. The more we automate, the more we don’t know what’s going to come out of it.’ Lipson’s first foray into writing evolvable algorithms for building robots came in 1998, when he was working with Jordan Pollack, professor of computer science at Brandeis University in Massachusetts. As Lipson explained: We wrote a trivial 10-line algorithm, ran it on big gaming simulator which could put these parts together and test them, put it in a big computer and waited a week. In the beginning nothing happened. We got piles of junk. Then we got beautiful machines. Crazy shapes. Eventually a motor connected to a wire, which caused the motor to vibrate. Then a vibrating piece of junk moved infinitely better than any other… eventually we got machines that crawl. 
The evolutionary algorithm came up with a design, blueprints that worked for the robot. The computer-bound creature transferred from the virtual domain to our world by way of a 3D printer. And then it took its first steps. The story splashed across several dozen publications, from The New York Times to Time magazine. In November 2000, Scientific American ran the headline ‘Dawn of a New Species?’ Was this arrangement of rods and wires the machine-world’s equivalent of the primordial cell? Not quite: Lipson’s robot still couldn’t operate without human intervention. ‘We had to snap in the battery,’ he told me, ‘but it was the first time evolution produced physical robots. It was almost apocalyptic. Eventually, I want to print the wires, the batteries, everything. Then evolution will have so much freedom. Evolution will not be constrained.’ In the late 1940s, about five decades before Lipson’s first computer-evolved robot, physicists, math geniuses and pioneering computer scientists at the Institute for Advanced Study in Princeton, New Jersey, were putting the finishing touches to one of the world’s first universal digital computing machines — the MANIAC (‘Mathematical Analyzer, Numerical Integrator, and Computer’). The acronym was apt: one of the computer’s first tasks in 1952 was to advance the human potential for wild destruction by helping to develop the hydrogen bomb. But within that same machine, sharing run-time with calculations for annihilation, a new sort of numeric organism was taking shape. Like flu viruses, they multiplied, mutated, competed and entered into parasitic relationships. And they evolved, in seconds. These so-called symbioorganisms, self-reproducing entities represented in binary code, were the brainchild of the Norwegian-Italian virologist Nils Barricelli. He wanted to observe evolution in action and, in those pre-genomic days, MANIAC provided a rare opportunity to test and observe the evolutionary process. As the American historian of technology George Dyson writes in his book Turing’s Cathedral (2012), the new computer was effectively assigned two problems: ‘how to destroy life as we know it, and how to create life of unknown forms’. Barricelli ‘had to squeeze his numerical universe into existence between bomb calculations’, working in the wee hours of the night to capture the evolutionary history of his numeric organisms on stacks of punch cards. Just like DNA, Barricelli’s code could mutate. But he had some unusual ideas about how evolution worked. In addition to single-point mutations, he believed that evolution leapt forward through symbiotic and parasitic relationships between virus-like entities — otherwise it just wouldn’t be fast enough. Maybe, he thought, cells themselves first arose when virus-like creatures started slotting together, like Lego pieces. ‘According to the symbiogenesis theory,’ Barricelli wrote, ‘the evolution process which led to the formation of the cell was initiated by a symbiotic association between several virus-like organisms.’ So far, this doesn’t appear to be the way things happened; in fact, some researchers believe that viruses first emerged after cells. But a few of Barricelli’s findings were not too far off the mark. 
Once he had ‘inoculated’ MANIAC, it was minutes before the digital universe filled with numerical organisms that reproduced, had numerical sex, repaired ‘genetic’ damage and parasitised one another. When the population lacked environmental challenges or selection pressures, it stagnated. In other cases, a highly successful parasite would cause widespread devastation. These patterns of behaviour are typical of living things, from the simplest cells right up to human beings. The overall shape of his simulation matched life quite well, and is particularly reminiscent of viruses. Viruses are indeed parasitic: they are symbionts, which means that they need to take over the living cells of other organisms to reproduce; taken by themselves, they aren’t much more than simple DNA or RNA mechanisms surrounded by a coat of protein. And like all living things, viruses inevitably mutate during replication. But they also engage in some genetic give and take. As they weave in and out of host cells, they might steal host genes or leave their own genes behind (by some estimates, eight per cent of our human genome comes to us by way of viruses). Some even swap gene segments with other viruses, and that speeds things up quite a bit. When an influenza virus evolves through simple mutation and selection, we call that antigenic drift. Each fall, those of us who submit to annual flu vaccines do so in large part because of drift. But every once in a while, an influenza A virus makes an evolutionary leap — swapping a large genome segment with a very different strain and undergoing what is called an antigenic shift. The flu viruses we fear the most — the novel, pandemic strains — are often the products of such shifts. The newly emergent H7N9 avian flu virus is believed to have undergone an antigenic shift, enabling it to infect humans; to date, it has infected 132 and killed 39 in China. To pick a more explosive example, the Asian flu outbreak of 1957, another product of antigenic shift, wiped out between one and four million people worldwide. Evolvable computer programs also swap code as they engage in genderless algorithmic sex. As with viruses, the ability to make these exchanges boosts a program’s evolvability. And yet, as close to the real thing as Barricelli’s digital organisms came, they were just numeric code: they had a genotype but no phenotype, no bodily characteristics for evolution to sift through. Life on Earth is about tools that solve problems — a beak capable of cracking a tough nut, the ability to digest milk, a robotic leg that can take a step in the right direction. Natural selection acts on the hardware; the software, be it DNA or numeric code, just keeps score. Barricelli’s creatures might have behaved like living organisms, but they never escaped the computer. They never got the chance to take on the outside world. Not many people would call creatures bred of plastic, wires and metal beautiful. Yet to see them toddle deliberately across the laboratory floor, or bend and snap (think Legally Blonde) as they pick up blocks and build replicas of themselves, brings to my biologist mind the beauty of evolution and animated life. Most striking are the pulsating ‘soft robots’ developed by a team of students and collaborators. Though they have yet to escape the confines of the computer, you can watch in real time as an animated Rubik’s Cube of ‘muscle’, ‘bone’ and ‘soft tissue’ evolves legs and trots exuberantly across the screen. 
One could imagine Lipson’s electronic menagerie lining the shelves at Toys R Us, if not the CIA, but they have a deeper purpose. Like Barricelli, Lipson hopes to illuminate evolution itself. Just recently, his team provided some insight into modularity — the curious phenomenon whereby biological systems are composed of discrete functional units, such that, for example, mammalian brain networks are compartmentalised. This characteristic is known to enable rapid adaptation in DNA-based life. ‘We figured out what was the evolutionary pressure that causes things to become modular,’ Lipson told me. ‘It’s very difficult to verify in biology. Biologists often say: “We don’t believe this computer stuff. Unless you can prove it with real biological stuff, it’s just castles in the air”.’ Though inherently newsworthy, the fruits of the Creative Machines Lab are just small steps along the road towards new life. Barricelli always skirted the question of whether his own organisms were alive, insisting that they could not be defined as one thing or the other until there was a ‘clear-cut’ definition of life. Lipson, however, maintains that some of his robots are alive in a rudimentary sense. ‘There is nothing more black or white than alive or dead,’ he said, ‘but beneath the surface it’s not simple. There is a lot of grey area in between.’ How you define life depends on whom you read, but there is a scientific consensus on a few basic criteria. Living things engage in metabolic activity. They are self-contained, in the sense that they can keep their own genetic material separate from their neighbours’. They reproduce. They have a capacity to adapt or evolve. Their characteristics are specified in code and that code is heritable. The robots of the Creative Machines Lab might fulfil many criteria for life, but they are not completely autonomous — not yet. They still require human handouts for replication and power. These, though, are just stumbling blocks, conditions that could be resolved some day soon — perhaps by way of a 3D printer, a ready supply of raw materials, and a human hand to flip the switch just the once. Then it will be up to the philosophers to determine whether or not to grant robots birth certificates. I’ve been relating some of these developments to friends, and once they get over the ‘cool’ factor, they tend to become distressed. ‘Why would anyone want to do that?’ they ask. We have no real experience with new life forms, particularly of the cyber type, though they abound in books and on screen. Consider Arthur C Clarke’s murderous computer HAL, or Battlestar Galactica’s Cylon babes gone wild — computers built to serve, which evolved to destroy their creators. The more like us our machines become, the more dangerous and unnerving they seem. But perhaps it is not the creation of new life that we fear, so much as the potential for unpredictable emergent behaviour. Evolution certainly offers that. Take viruses: like Lipson’s machines, these organisms exist in the grey area between life and non-life, yet they are among the most rapidly evolving entities on the planet. They are also some of the most destructive; the Spanish Flu of 1918 killed around 50 million people, and some scientists fear that the emergence of some kind of Armageddon virus is only a matter of time. From this point of view, it doesn’t matter whether viruses are alive or dead. 
All that matters is that they are highly evolvable and unpredictable. And here’s where things do get scary. If viruses can evolve within hours, computer code can do it within fractions of a second. Viruses are dumb; computers have processors that might some day surpass our own brains — some would say they already have. If we are going to take the risk of giving machines, in Lipson’s words, ‘so much freedom’, we need a good reason to do it. In Out of Control, Kelly proposes one possible reason. Perhaps, he says, the world has become such a complicated place that we have no other choice but to enable the marriage between the biologic and the technologic; without it, the problems we face are too difficult for our human brains to solve. Kelly proposes a kind of Faustian pact: ‘The world of the made, will soon be like the world of the born: autonomous, adaptable and creative but, consequently, out of our control. I think that’s a great bargain.’ According to Lipson, an evolvable system is ‘the ultimate artificial intelligence, the most hands-off AI there is, which means a double edge. It’s powerful. All you feed it is power and computing power. It’s both scary and promising.’ More than 60 years ago, MANIAC was created to ‘solve the unsolvable’. What if the solution to some of our present problems requires the evolution of artificial intelligence beyond anything we can design ourselves? Could an evolvable program help to predict the emergence of new flu viruses? Or the effects of climate change? Could it create more efficient machines? And once a truly autonomous, evolvable robot emerges, how long before its descendants (assuming they think favourably of us) make a pilgrimage to Lipson’s lab, where their ancestor first emerged from a primordial soup of wires and plastic to take its first steps on Earth? | Emily Monosson | https://aeon.co//essays/can-life-evolve-from-wires-and-plastic | |
Anthropology | They predicted the Boxing Day tsunami, but can Thailand’s sea gypsies stay afloat on the waves of modernity? | Back when it was still known as Burma, Moo Hning’s family fled by sea from the civil war in Myanmar. On rafts and without papers, they sailed from one island to the next, fishing with tackle, scavenging from rock pools and the seabed, and gradually learning which of the sea’s creatures they could safely eat. Over time, the water came to provide a home, and Moo Hning’s family, originally of the Moken people, rejoined their ancestors in this south-east Asian community known romantically as ‘sea gipsies’. Moo Hning and I were sitting on the deck of a schooner close to the naval border with Myanmar, having dropped anchor in Thai waters of the Andaman Sea. Now in her 60s, she turned back from looking at the Surin archipelago and said with a smile to the interpreter: ‘My life has been very difficult.’ The coast was the kind of place you could imagine Colonel Kurtz getting lost. A village of some 50 stilt huts, fashioned from grey wood, stood on a thin, white beach ready to be swallowed by dense rainforest, the undergrowth giving out a sweet, acrid odour as it blew warm across the waters. The two-stroke engines of long-tail boats — known in Moken as kabang — chopped like hatchets at the sea, as fishermen and divers left the island for the day. I had come here to record the efforts of a documentary team. Their premise – well-intentioned enough – was that people in the industrialised world should learn timeless lessons from indigenous cultures. Moo Hning had sailed with us from Phuket, the largest Thai island, where her family had eventually settled, to help us embed with the Moken on Surin. Though it was the home of her ancestors, she had never actually visited the island before. On the deck of the boat, she told us stories: about how her family had made clothes from plastic fertiliser sacks they found drifting on the water, how old engine oil containers had been their best cooking pots, and how a newborn baby would be submerged in the sea to wash it clean. She laughed with each memory, and then, as the crew prepared to go ashore, she pressed her wrinkled hands to her chest and started to cry. She thanked the boat’s skipper, the head of the documentary team, and then the rest of the crew individually. ‘Nobody has ever done anything to help us before,’ she explained. ‘I dreamt I would come here one day, but I didn’t know how I would find the money.’ She touched her head. ‘I dreamt I would come here on a boat.’ This didn’t seem like such an impressive feat of prophecy to me – how else would she have got to the islands? But I had to admit that Moo Hning’s route back to her ancestral home was true to the character of Moken lore. This, after all, was a people that had slipped through the gaps in the states formed long after their own tribes. Compelled to take chances in life, she had trusted in the good spirits to which the Moken pay tribute, in a belief system that welds animism to Buddhism and, in Malay and Indonesian waters further south, to Islam. On the Surin islands, she soon found half a dozen direct family members, all of whom she knew well already, and who in turn knew all about her life on the mainland. For the five days we stayed on the island, we were invited daily to eat at Moo Hning’s table. Each time we came, her relatives thanked us — as if unaware of the self-interest that had been part of the arrangement — for bringing her back to them. 
Moo Hning’s case was not unique. Despite their dislocation, all Moken seemed to know exactly what each of their relatives was doing, across three countries and hundreds of nautical miles. They communicated their news without modern technology, save for occasional photographs, sending word with the fishing boats that travelled back and forth between islands and coastal towns. On Surin I watched the lines of code for this indigenous internet being written: bare-breasted women, teeth stained red by betel, sitting in the shade beneath their huts and talking all day long. This oral tradition is what brought the Moken most of their small renown in the wider world. Immediately before the Boxing Day tsunami of 2004, the Moken noticed a sudden and dramatic decline in the tide. Traditional stories had told of a laboon, a ‘wave that consumes people’, that was to be expected when the sea started to retreat in such a way. Through their stories, the Moken had preserved the ability accurately to predict a tsunami. Just as fishermen in the Indian Ocean reported dolphins swimming out to sea, and elephants were said to have hurried from the coast, likewise the Moken of Surin made for higher ground: there was only one fatality in a village of 200 inhabitants when the wave destroyed their entire beachside village. I heard how, every week for months on end, the sea kept delivering another news team that wanted to meet the people who had survived the tsunami. One evening, over the hum of a generator, I sat talking to an old fisherman. With a cocksure smile he told me why the Thais liked having Moken on their fishing boats. ‘The Thais look, but they don’t see,’ he said. Bare-chested, hugging his leg, he told me some of what he had learnt: how a fish will not bite anything on a cold hook, and how different fish live in the cold channels of water that run through the Andaman. With a gleam in his eye, he talked of fissures in the seabed from which fresh water issues and can be drunk. Now and then he laughed, still amazed at the sea’s secrets. Before departing for the Andaman, I had read about research by Anna Gislén of Lund University in Sweden that found Moken children to have significantly better underwater vision than their European peers. One afternoon, as the documentary team gathered underwater footage, I snorkelled behind, watching a Moken man swimming effortlessly, minute after minute, without breaking the surface to draw breath. I climbed back into the boat and watched the last of the shoot. The swimmer’s stroke was unorthodox but powerful, a froglike crawl that easily outpaced a cameraman in fins. I looked at this man, working in his ancestral habitat, and thought about the oil rights, industrial fishing and border claims that were being contested in the waters around him. I had heard how Moken are often stopped and fined as they travel the Andaman Sea in their boats, assumed by border patrols to be immigrants rather than simply nomadic. On the overnight voyage to Surin, I had seen a giant pair of trawlers, between them towing nets up to half a kilometre in length, emptying the sea of fish. Sitting on the boat that afternoon, I wondered whether the modern world could still find a place for a people who lived so close to nature, who appeared not only culturally but biologically adapted to their habitat.
As time passed, I realised I had come to Surin with a vicarious anger at modernity, generated on the Moken’s behalf. But the people themselves confounded me. Their stories were hard to hear, their survival always in jeopardy, yet the sense of a grave injustice was mine alone. Things that, to my mind, demanded a political response were to the Moken simply evidence of bad spirits, to be set aboard a ceremonial kabang and floated out to sea in a cloud of incense. The Moken’s language has few temporal markers; their culture, shaped by the daily struggle for survival, seems to have little concern with time. It seems, perhaps, ill-equipped to deal with the slow changes heading their way. A fisherman named Dunung told me he was thought to be 50 years old, but of course he couldn’t be sure. Then he laughed: ‘If I was 50, I would be wiser.’ According to Jacques Ivanoff, the French anthropologist who is an authority on the Moken, the Thai government’s sole policy on preserving the Moken way of life is to corral them inside national parks, effectively turning a traditional community into a tourist commodity. The Moken of Surin told me how they were permitted to fish only for sustenance. While trawlers scoured the seas, the nomads were forbidden to catch even a small surplus to trade for rice, clothing or medicine. They had, in short, lost their autonomy. Those Moken who left their island homes for the economic opportunities of the Thai mainland, meanwhile, tended to forget their native language, while property developers forced them out of their coastal settlements and the alien structure of a monetary economy plunged them into poverty. These people had faced dangers greater than life inside a national park, of course. Rightly or wrongly, the Moken seemed to define their own history in terms of near-constant struggle. Their culture derived a part of its very meaning from hardship. Still, I wondered what would happen if those hardships became too great to endure. One day, perhaps, there would come a problem that stopped the stories rather than becoming a subject of them. Moken people on Surin told me that the islands were their paradise. They would happily live and die on them. I asked Moo Hning, who had wept with joy on reaching Surin, if she looked forward to returning to Phuket, and her town of roads and cars and brick houses. ‘I am so happy to see Surin,’ she said, nodding back across the seas, ‘but my life is not here’. Behind the village I walked through forest gardens. A charcoal pit was smoking as Moken prepared to cook. Around me, chilli bushes dripped with green peppers, bleeding orange towards a bright red tip, and above me were trees of coconuts, papaya and banana. Sweet potatoes were growing in the earth. An interpreter had saved seeds from a pumpkin that the film crew cooked one evening, and in an envelope she handed them to a young Moken girl, assuring her they would grow well here in the soil of Surin. The Moken augmented their subsistence fishing with food aid from the Thai authorities, and the people recognised their good fortune. The ageing Thai king Bhumibol Adulyadej had made their protection a point of his rule (though our own Thai crew seemed convinced that he had already died in secret). Again it was I, not the Moken, who worried at this reliance on a decrepit monarch and his unpredictable heirs.
We left Surin in a deluge of rain, water falling thick from a black sky and bouncing back up from the sea. Moo Hning carried a banana plant, a gift from her family, as we waded through the shallows, pushing our dinghy past rocks and towards deeper water. Back on deck, to the sound of a mechanical rattle, a chain began to clink, winding around a capstan as the anchor was hauled up. While we prepared to leave for Phuket, my early conviction that the Moken way of life should be saved from the modern world had started to falter. Who would do the saving, and for whose benefit? Even if our desire to conserve was understandable, how did it account for Moken such as Moo Hning, who eventually chose the opportunities of the mainland? I wondered whether we expected the Moken, and indigenous peoples like them, to act as custodians for a spirituality the rest of our world had long since left behind. Perhaps we looked to them to provide us with a purity, a sense of the human that we still cherished, but no longer wanted for ourselves. | Julian Sayarer | https://aeon.co//essays/do-thailand-s-sea-gypsies-need-saving-from-our-way-of-life | |
Religion | I have turned away from the church but, up on Mount Athos, I turned on to the mysteries of Orthodox meditation | I travelled on the night train from Athens and arrived in Thessaloniki six hours later, in a sleepless daze. The small neon-lit café by the bus stop was crammed with plastic tables and chairs. I must have fallen asleep while I waited there, as I found myself being shaken back to consciousness just as the bus for Ouranoupoli — the ‘Sky City’ — was about to depart. The sky was still dark and I scrambled for a seat among young men with scruffy jumpers and goat-beards, priests and monks bearing the cross of their order on their chests, as well as assorted fellow-pilgrims, all quiet and half-asleep. The engine roared; the bus moved. I shut my eyes. At a little over 2,000 metres (6,660 ft), Athos is one of the tallest mountains in Greece. Sometimes called the ‘Christian Tibet’, it rises from a finger-like peninsula that connects with the rest of the country by a narrow isthmus. For the past 1,200 years a community of Orthodox monks has inhabited the fortresslike monasteries that dot its verdant slopes, in virtual isolation from the rest of the world. Spiritual seclusion has been a tradition here since Hellenistic times, but the monks didn’t arrive until the eighth century AD, when Christian ascetics from Asia Minor came to settle. In 885 Emperor Basil I dedicated Athos exclusively to worship by holy men and forbade settlement by anyone else, including women, holy or otherwise. By then, Orthodoxy was the dominant dogma of the Balkans and most of eastern Europe. Athos became a magnet for neophyte Christians: Bulgarians were the first to settle after the Greeks in 864. Russians and Georgians came in the 11th century, the Serbs in the 12th, the Romanians in the 14th. Today, it is a place of religious enlightenment where miracles are part of everyday life. Many pilgrims swear to having seen levitating ascetics and heard the voices of saints. I wasn’t swayed by tall stories. Although raised as an Orthodox Christian, I stopped sharing its central beliefs a long time ago. But the Christian ideals and values of my childhood have stayed with me, and I wanted to understand the more esoteric aspects of Eastern Christianity. Curiously, the Orthodox tradition of ‘mental prayer’, known as hesychasm (from the Greek word hesychia, meaning ‘stillness of mind’), has much in common with oriental meditation: a sitting posture, rhythmic breathing, the recitation of a specific mantra, the guidance of a guru, stillness of mind, and the ultimate goal of union with the divine. At the port of Ouranoupoli, the pilgrims formed a long queue to receive their visitors’ passes. These are issued by the monasteries, which are strict about whom they let in. Athos is a monastic republic ruled by a council of abbots. Ever since Greece annexed Athos from the Ottomans after the First World War, the Greek state has maintained a police garrison in Karyes, the village that serves as the ‘capital’ of Athos, together with a prefect who acts as liaison between the central government and the Holy Community. For all other intents and purposes, supreme authority over Athos resides with the Ecumenical Patriarch of the Greek Orthodox Church, in Constantinople, in Turkey. There are a couple of shops in Ouranoupoli where you can buy supplies: chocolate, cigarettes and tins of sardines. Some of the pilgrims hurried to fill their rucksacks as if about to depart for the Arctic. 
Others just sat and waited for the approaching ferry that put-putted in the distance. The ninth-century imperial edict had already filtered the sexes: at the wooden pier there stood only men. A huge yellow board warned in five languages that no women should board the boat or visit the monasteries. The last female we would see for the duration of our stay was the old lady at the kiosk that sold sardines. The ferry sailed past the monasteries along the western coast. The domed cupolas of churches and the intricate, hivelike arrangements of the monks’ living quarters appeared in glimpses between steep walls. For centuries, pious rulers offered gifts and lands to Athos. In its heyday, the mountain was home to 30,000 monks and held enormous wealth and influence. Kings and emperors, including Stefan Nemanja, founder of the Nemanjić dynasty of medieval Serbia, relinquished power to end their lives as monks on this mountain. And then, in 1389, the glory days of Athos came to an end, following the Ottoman victory at the battle of Kosovo. The Ottomans paid lip-service to the autonomy of the monastic community but confiscated all of its lands beyond the peninsula. Russian tsars picked up some of the slack, but by the mid-20th century the monasteries were financially broke, decrepit and nearly empty. Redemption came, oddly enough, from the European Union. Following a number of fires, one of which almost destroyed the Serbian monastery of Hilandar in 2004, money poured in to rebuild and preserve the unique architecture of the monasteries. The collapse of the Soviet Union saw the return of support from benefactors in Russia and other predominantly Eastern Orthodox nations. In contrast to the rapid secularisation of western Europe, Athos is enjoying a period of religious vigour, and renewed political significance. Vladimir Putin is a frequent visitor. After a brief stop to transfer into a smaller boat, I circumnavigated the peninsula, arriving at the Iviron Monastery on the eastern coast by late afternoon. The sun saturated the warm ochre of the walls and the lush green of the surrounding forest. A young monk greeted me at the entrance. He invited me in and offered me a glass of raki (a potent alcoholic drink similar to grappa), a loukoumi (Turkish delight) and a cup of strong, sweet coffee. I registered my name and presented a letter from an Athenian bishop who had once practised in the Iviron monastery when he was young: my credentials for requesting an audience with an expert in Orthodox mental prayer. In the early 14th century, a Greek theologian named Gregory Palamas produced a synthesis of Orthodox philosophy that has defined the theology of the Eastern church ever since. He founded the contemporary tradition of hesychasm, which focuses on achieving experiential knowledge of God. Palamas believed that human beings could never understand the essence of God by employing reason alone. But humans could experience God’s actions (or ‘manifestations’ as he called them), through a retreat into inner prayer. While the Catholic tradition of mental prayer allows the faithful to use icons as aids and regards apparitions as signs from God, Orthodox mental prayer focuses on mental stillness and abstraction, deliberately keeping the mind free of images or thoughts. Palamas claimed that such prayer gradually builds an increasingly close relationship with God.
One of the common themes in late Byzantine iconography is a ladder, representing hesychastic prayer, which connects the earth to the heavens. Praying monks are depicted climbing towards Jesus; those who reach the top enter the heavenly kingdom of light, experiencing love in its perfection. This is the moment of Orthodox theosis, of becoming one with God. The young monk took my letter to the abbot, then returned with a promise that I could meet the geron (the ‘guru’) after the evening Mass. I was ushered into the incense-scented chapel to wait. The sunlight descending through narrow portholes in the dome began to fade. Monks appeared like shadows, kneeling in front of the icons three times and making the sign of the cross, before retreating to join in the hymns and the liturgy. It was close to midnight before I met the geron. I was brought to his spartan cell, where he was guiding the prayer of a young apprentice. He welcomed me and pointed to a chair in the corner. I could sit there all I wanted, as long as I kept quiet. In the middle of the cell, the apprentice crouched on a low wooden stool and recited the words: ‘Lord Jesus Christ, Son of God, have mercy on me, a sinner,’ again and again. These are the words of the Jesus Prayer, which sits at the heart of the hesychastic tradition. Its first known mention is in the fifth century; when I asked, the monks assured me that the archangel Gabriel whispered it into the ears of the first monk who spoke it. The apprentice in the cell uttered the first four words while inhaling, and exhaled the rest. The geron sat quietly next to him, occasionally correcting the apprentice’s posture, making the sign of the cross, or repeating the words of the prayer like a conductor correcting the pitch of his one-man orchestra. Having seen similar practices in Buddhism, Hinduism and the Sufi tradition of Islam, I wondered if these traditions had a historical connection to Orthodox mental prayer. Can it be a coincidence that the Eastern Christians of Constantinople and the Sufi Seljuk Turks of Konya both introduced yogic practices into their worship sometime between the 12th and 13th centuries? By 1279, the Mongols had reconnected India with Europe. The Silk Road reopened and goods and ideas aplenty flowed through it; perhaps oriental meditation became integrated into European practices via this route. Or could the roots of Orthodox mental prayer be older still? Pythagoreans had been using meditation to escape the world of sensual attachment since the 6th century BC. Hindu traditions have similarly sought to penetrate the veil of illusions known as maya. The 14th-century ideas of Palamas built on the 3rd-century philosophy of Neo-Platonism, which united highly abstract spiritual concepts with monotheistic sensibilities, and concluded that this world was just a play of shadows on the wall of a Platonic cave. Taking this to its logical — or at least theological — conclusion, Palamas argued that to transcend the illusory world one had to block the senses, stilling the mind and journeying within. If there was something novel in Palamas’s mix, it was the Christian idea of universal love — agape — as the ultimate reward and the key to personal salvation. I spent a long night in the cell on Athos, drifting in and out of sleep several times as I sat, rather uncomfortably, in my little corner.
The young apprentice prayed throughout the night without respite, breathing the words in and out while the old geron watched over him. Their spoken words echoed in my ears until they had lost their syntax and meaning. Practitioners of the mental prayer report a feeling of ecstasy at the climax of their experience. They describe their consciousness melting away into something ‘bigger’: the ‘love of God’. In a way, this union resembles the erotic; neuroscientists who studied similar practices claim that the deepest part of the brain, the limbic system, is similarly aroused both during an orgasm and an experience of religious ‘enlightenment’. Perhaps it is not coincidental that Jesus is described in the New Testament as the ‘groom’, and the Church as the ‘bride’. Just before dawn the wooden bell of the monastery’s chapel began sounding, a call to the monks to cease their prayers and join in the morning Mass. Desperate to get back out into the fresh air, I thanked my hosts and quickly excused myself. There was no way I could take in another Mass. Walking in the opposite direction, I climbed the steps to the highest spot on the wall, and looked out towards the sea. The sky was steadily lightening and a cool breeze carried the rejuvenating scent of salt towards me. Thyme, pine, jasmine and laurel emanated from the monastery’s garden. My face was warmed by the first rays of the sun. I knew that the light that kissed my face came from a star — a great ball of fire fuelled by thermonuclear reactions. That the laurels, the thyme, the bees and the chanting monks were the results of millions of years of biological evolution that had probably begun with a humble bacterium. That I smelled, saw and listened because of electrochemical signals that passed between neurons. That my limbic system was on overdrive. But despite my knowledge, a sense of mystery remained. Why was the sun so magnificent? Why was the sea so dear? Why did I feel a sense of profound meaning in everything? Perhaps my epiphany owed to nothing more than a second night of sleep deprivation. Nevertheless, I had arrived at a state of mind, possibly not dissimilar to that of the praying monks, where logic failed and gazing at beauty was all that mattered. If that was true then perhaps to know God was simply to look at a rising star, and feel inexplicably moved. | George Zarkadakis | https://aeon.co//essays/an-encounter-with-meditation-on-mount-athos | |
Economics | In today’s world, web developers have it all: money, perks, freedom, respect. But is there value in what we do? | There’s this great moment in the documentary Jiro Dreams of Sushi (2011) when the world’s most celebrated sushi chef turns to his son, who is leaving to start his own restaurant, and says: ‘You have no home to come back to.’ Which, when you think about it, isn’t harsh or discouraging but is in fact the very best thing you could say to someone setting out on an adventure. Last October I quit my job to become a freelance journalist. I had only ever made about $900 from writing, but my latest project, a profile of Douglas Hofstadter, had attracted interest from a couple of big American magazines. I stood to make anywhere between $10,000 and $20,000 from the piece. My plan was to sell that profile and keep writing others like it. This would be a rambling life of the mind. I would find a subject that I was intensely curious about and I’d live with it until I’d learnt everything there was to know. Then I would sit in a room somewhere and tap out a synthesis of such depth and piquant grace that no writer of non-fiction would think to touch my subject again — because I had nailed it, because I had put it to rest forever. My new life began on a Monday. I’m a late sleeper, but I read somewhere that writers do their best work in the mornings. So I woke up early, put on some coffee, and cracked open my laptop. When, in 1958, Ernest Hemingway was asked: ‘What would you consider the best intellectual training for the would-be writer?’, he responded: ‘Let’s say that he should go out and hang himself because he finds that writing well is impossibly difficult. Then he should be cut down without mercy and forced by his own self to write as well as he can for the rest of his life. At least he will have the story of the hanging to commence with.’ Writing is a mentally difficult thing — it’s hard to know when something’s worth saying; it’s hard to be clear; it’s hard to arrange things in a way that will hold a reader’s attention; it’s hard to sound good; it’s even hard to know whether, when you change something, you’re making it better. It’s all so hard that it’s actually painful, the way a long run is painful. It’s a pain you dread but somehow enjoy. I worked on my Hofstadter piece until early Thursday afternoon. On Thursday night I got an unexpected email. It was a job offer, and these were the terms: $120,000 in salary, a $10,000 signing bonus, stock options, a free gym membership, excellent health and dental benefits, a new cellphone, and free lunch and dinner every weekday. My working day would start at about 11am. It would end whenever I liked, sometime in the early evening. The work would rarely strain me. I’d have a lot of autonomy and responsibility. My co-workers would be about my age, smart, and fun. I put my adventure on hold. In college I sort of aimlessly played. I read what I wanted and tinkered with my computer, I made little websites for my own amusement, I slept late and skipped class, and though sometimes I saw myself as an intellectual-at-large in the style of Will Hunting, I was basically just irresponsible. It’s only because of an exogenous miracle that, when I graduated in 2009 with a 2.9 GPA and entered a famously bad job market, I didn’t end up in privileged limbo — in Brooklyn, say, on my parents’ dime. In fact, I was among the most employable young men in the world. The exogenous miracle is that playing around with websites suddenly became a lucrative profession.
I am a web developer, and there has never been a better time to do what I do. Here’s how crazy it is: I have a friend who decided, part way into his second year of law school, to start coding. Two months later he was enrolled in Hacker School in New York, and three months later he was working as an intern at a consultancy that helps build websites for start-ups. A month into that internship — we’re talking a total of six months here — he was promoted to a full-time position worth $85,000. I didn’t think finding good work would be this easy. I always figured I would end up like my sister. My sister set academic records in high school and studied at the University of Chicago but the only position matching her qualifications, when she came out of school, was a job translating airline menus. She had an especially bright and sensitive mind, but no technical specialty; and the market did what it’s wont to do to people like that. I remember one time at dinner she had asked my dad, who was something of a corporate bigshot: ‘You always talk about the value of hard work. But what about somebody in a coal mine — wouldn’t you say he works as hard as you? Why should you get paid so much more than that guy?’ I used to think that was an awfully naive question. In 1999 a dotcom with no revenue could burn $100 million in one year, with $2 million of that going to a Super Bowl ad. Its namesake website could offer a terrible user experience, and still the company could go public. Investors would chase the rising stock price, which would drive up the price further, which in turn would draw more investors, feeding a textbook ‘speculative bubble’ that burst the moment everyone realised there wasn’t any there there. This kind of stuff isn’t happening any more. It’s not that the internet has become less important, or investors less ‘irrationally exuberant’ — it’s that start-ups have gotten cheaper. A web start-up today has almost no fixed capital costs. There’s no need to invest in broadband infrastructure, since it’s already there. There’s no need to buy TV ads to get market share, when you can grow organically via search (Google) and social networks (Facebook). ‘Cloud’ web servers, like nearly all other services a virtual company might need — such as credit-card processing, automated telephone support, mass email delivery — can be paid for on demand, at prices pegged to Moore’s Law. Which means that these days the cost of finding out whether a start-up is actually going to succeed isn’t hundreds of millions of dollars — it’s hundreds of thousands of dollars. It’s the cost of a couple of laptops and the salary you pay the founders while they try stuff. A $100 million pool of venture capital, instead of seeding five or 10 start-ups, can now seed 1,000 small experiments, most of which will fail, one of which will become worth a billion dollars. And so there is a frenzy on. You can see why I’m in such good shape. In this particular gold rush the shovel is me. We web developers are the limiting reagent of every start-up experiment, we’re the sine qua non, because we’re the only ones who know how to reify app ideas as actual working software.
In fact, we are so much the essence of these small companies that, in Silicon Valley, a start-up with no revenue is said to be worth exactly the number of developers it has on staff. The rule of thumb is that each one counts for $1 million. It helps that there aren’t enough of us to go around. I’m told by a friend at Bloomberg that they missed their quarterly tech hiring target in New York by 200 people. I get at least two enquiries a week from headhunters trying to lure me from my current job. If I say that I’m actively looking, I become a kind of local celebrity, my calendar fills with coffees and conversations, reverse-interviews where start-ups try to woo me. It’s as if the basic structure of this sector of the global economy has been designed for my benefit. Since developers are a start-up’s most important — if not their only — asset, start-ups compete by trying to be a better place for developers to work. Just a few weeks ago, an MTV2 camera crew came into my office to film an episode of a show called Jobs That Don’t Suck. Cash bonuses, raises, stock options and gifts are the norm. I once worked at a place that had a special email address where you’d send requests for free stuff — a $300 keyboard, a $900 chair, organic maple syrup. I have yet to take a job where there wasn’t beer readily at hand. Hours are flexible and time off is plentiful. Fuck-ups are quickly forgiven. Your concerns are given due regard. Your mind is prized. You are, in short, taken care of. You can imagine what it does to the ego, to be courted and called ‘indispensable’ and in general treated like you’re the one pretty girl for miles. When a lot of your contemporaries don’t even have jobs. When work, for most people, has a Damoclean instability to it, a mortal urgency. To be this highly employable is to feel liquid, easy, as if you can do no wrong. I know that I have a great job guaranteed in any major city. And it’s hard not to give a thing like that moral heft. It validates you is what I mean; it inflates your sense of your own character. I tell myself a story, sometimes, that while other people partied or read for pleasure, I was sitting in a room with my head down, fighting — that I worked hard to learn these minute technical things, and now I’m getting paid for it. Something prima donna-ish can happen when you start believing stories like that. I look at a lot of inbound résumés at my current job, and I throw away everybody who’s not a programmer. I do this enough times each day that a simple association has formed in my mind: if you’re not technical, you’re not valuable. We’re the ones with the magic powers. Every programmer knows that code looks cool, that eyes widen when we fill our screens with colourful incantations. ‘The programmer,’ the late Dutch computer scientist Edsger Dijkstra wrote in 1988, ‘has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.’ We like that idea. We like to think that because we can code, we have unprecedented leverage over the world. We decide what 15 million people will see when they follow a link. Our laptops literally get hot from the electric action we command. Nobody tells us we’re wrong for thinking this way. In fact, they reinforce the impulse. They congratulate us on being ahead of the curve. And when you consider my prospects without code and you consider my prospects with code, the lesson really does seem to be: join me! 
Try Codecademy in New York, go to Hacker School — pledge yourself, like Michael Bloomberg did in 2012, to learn to code. But that shouldn’t be the lesson. I was only 21 when I became the chief technical officer of an American corporation. When that happened, I thought of my dad because he, too, had once been among the country’s youngest corporate executives, a chief financial officer (CFO) by the time he was 28. The only difference is that the company he helped to run in his twenties was Hardee’s, a fast-food restaurant chain with more than 1,000 locations, while the company I helped to run was a web start-up. Just about all we did, in our three years of operation, was spend $350,000 of other people’s money. Dad’s company made hamburgers; mine ate them. I have a friend who’s a mechanical engineer. He used to build airplane engines for General Electric, and now he’s trying to develop a smarter pill bottle to improve compliance for AIDS and cancer patients. He works out of a start-up ‘incubator’, in an office space shared with dozens of web companies. He doesn’t have a lot of patience for them. ‘I’m fucking sick of it,’ he told me, ‘all they talk about is colours.’ Web start-up companies are like play-companies. They stand in relation to real companies the way those cute little make-believe baking stations stand in relation to kitchens. Take Doormates, a failed start-up founded in 2011 by two recent graduates from Columbia University whose mission was to allow users ‘to join or create private networks for buildings with access restricted to only building residents’. For that they, too, raised $350,000. You wonder whether anyone asked: ‘Do strangers living in the same building actually want to commune? Might this problem not be better solved by a plate of sandwiches?’ (The founders have since moved on to ‘Mommy Nearest’, an iPhone app that points out mom-friendly locations around New York.) A lot of the stuff going on just isn’t very ambitious. ‘The thing about the advertising model is that it gets people thinking small, lean,’ wrote Alexis Madrigal in an essay about start-ups in The Atlantic last year. ‘Get four college kids in a room, fuel them with pizza, and see what thing they can crank out that their friends might like. Yay! Great! But you know what? They keep tossing out products that look pretty much like what you’d get if you took a homogenous group of young guys in any other endeavour: Cheap, fun, and about as worldchanging as creating a new variation on beer pong.’ Groupon clones are popular, as are apps that help you find nearby bars and restaurants. There are dozens of dating apps with little twists — like Tinder, an iPhone app where you swipe to the right on a potential match’s picture if you like them, and to the left if you don’t; or Coffee Meets Bagel, which gives you one match per day for a low-stakes, let’s-just-grab-a-coffee date. SideTour, whose tech team is run by a former co-worker, lets you buy small ‘experiences’ around the city, like dinner with a monk. Just yesterday a developer friend of mine who’d recently gone out on his own shared his latest idea: an app that shows you nearby ATMs. The most successful start-ups, at least if you go by the numbers — $13.5 million to Snapchat, $30 million to Vine, $1 billion to Instagram (each of these windfalls indirectly underwriting 100 low-rent copycats) — seem to be the ones that offer teenagers new ways to share photos with each other. 
When I go to the supermarket I sometimes think of how much infrastructure and ingenuity has gone into converting the problem of finding my own food in the wild to the problem of walking around a room with a basket. So much intelligence and sweat has gone into getting this stuff into my hands. It’s my sustenance: other people’s work literally sustains me. And what do I do in return? We call ourselves web developers, software engineers, builders, entrepreneurs, innovators. We’re celebrated, we capture a lot of wealth and attention and talent. We’ve become a vortex on a par with Wall Street for precocious college grads. But we’re not making the self-driving car. We’re not making a smarter pill bottle. Most of what we’re doing, in fact, is putting boxes on a page. Users put words and pictures into one box; we store that stuff in a database; and then out it comes into another box. We fill our days with the humdrum upkeep of these boxes: we change the colours; we add a link to let you edit some text; we track how far you scroll down the page; we allow you to log in with your Twitter account; we improve search results; we fix a bug where uploading a picture would sometimes never finish. I do most of that work with a tool called Ruby on Rails. Ruby on Rails does for web developers what a toilet-installing robot would do for plumbers. (Web development is more like plumbing than any of us, perched in front of two slick monitors, would care to admit.) It makes tasks that used to take months take hours. And the important thing to understand is that I am merely a user of this thing. I didn’t make it. I just read the instruction manual. In fact, I’m especially coveted in the job market because I read the instruction manual particularly carefully. Because I’m assiduous and patient with instruction manuals in general. But that’s all there is to it. My friends and I who are building websites — we’re kids! We’re kids playing around with tools given to us by adults. In decreasing order of adultness, and leaving out an awful lot, I’m talking about things such as: the von Neumann stored-program computing architecture; the transistor; high-throughput fibre-optic cables; the Unix operating system; the sci-fi-ish cloud computing platform; the web browser; the iPhone; the open source movement; Ruby on Rails; the Stack Overflow Q&A site for programmers; on and on, all the way down to the code that my slightly-more-adult co-workers write for my benefit. This cascade of invention is a miracle. But as much as I want to thank the folks who did it all, I also want to warn them: When you make it this easy to write and distribute software, so easy that I can do it, you risk creating a fearsome babel of gimcrack entrepreneurship. Is there another dotcom bubble on? It’s hard to call it a ‘bubble’ when the Nasdaq’s not running wild, when no one’s going to lose their pension — when all anyone’s going to lose is, in fact, time: time pretending at enterprise; time ‘sharing’ and ‘liking’ in forums of no consequence; time tapping out pedestrian code, extracting easy money. The only rigorous way to think about value is in terms of dollars, in terms of prices arrived at by free exchange. Numbers like that are hard to dispute. If a price is ‘too low’ or ‘too high’, there’s said to be an opportunity for risk-free moneymaking. People tend to gobble up those opportunities.
And so the prices of things tend to level out to just where they’re supposed to be, to just what the market will bear. Am I paid too much to code? Am I paid too little to write? No: in each case, I’m paid exactly what I should be. It’s like that question my sister asked dad at dinner. There’s an answer to that question — and this is the one I remember hearing that night — that says that my dad was probably paid more than the coal miner because the skills required to be CFO of a Fortune 500 company are scarcer, and more wanted, than the skills required to be a coal miner. It’s the combination of scarcity and wantedness that drives up a salary. And that answer seems fair, and fine, it seems to settle the question, but we’re not talking about pork belly futures, we’re talking about real people and what they do all day, and my sister, naive as she sounded, had a point, and that point is that the truly naive thing, the glib and facile thing, might be equating value with a market-clearing price. The price of a word is being bid to zero. That one magazine story I’ve been working on has been in production for a year and a half now, it’s been a huge part of my life, it’s soaked up so many after-hours, I’ve done complete rewrites for editors — I’ve done, and will continue to do, just about anything they say — and all for free. There’s no venture capital out there for this; there are no recruiters pursuing me; in writer-town I’m an absolute nothing, the average response time on the emails I send is, like, three and a half weeks. I could put the whole of my energy and talent into an article, everything I think and am, and still it could be worth zero dollars. And so despite my esteem for the high challenge of writing, for the reach of the writerly life, it’s not something anyone actually wants me to do. The American mind has made that very clear, it has said: ‘Be a specialised something — fill your head with the zeitgeist, with the technical — and we’ll write your ticket.’ I don’t have the courage to say no to that. I have failed so far to escape the sweep of this cheap and parochial thing, and it’s because I’m afraid. I am an awfully mediocre programmer — but, still, I have a secure future. More than that, I have a place at the table. In the mornings I wake up knowing that I make something people want. I know this because of all the money they give me. Correction, June 9, 2013: An earlier version of this essay misstated the terms of the job offer extended to the author in October, 2012. In the previous version, the author stated that he was offered a salary of $150,000. This has been changed to reflect that the offer was for $120,000. | James Somers | https://aeon.co//essays/dad-s-company-made-burgers-mine-just-eats-them | |
Neuroscience | Some people have neurological quirks that give them extraordinary perceptual powers. What can we learn from them? | Ordinary people with superior perceptual skills walk among us, absorbing information from the everyday world which is debarred to the rest of us. We can’t spot them, but they can pick up the faintest traces of smell or taste. They might see coloured auras that correspond to the expressed emotions of others. Some of them can even experience the pain or pleasure felt by other people. As one of these unlikely ‘superhumans’, Mary, a 53-year-old therapist, explains: ‘If I see pain inflicted, I feel pain myself. If I see gentleness in a touch of a hand, I get pleasure from the softness and love I can feel in that touch.’ In neurological circles, Mary is known as a mirror-touch synaesthete. She literally feels what other people feel. Psychological research I’ve conducted with colleagues at University College London and the University of Sussex indicates that one to two people out of a hundred experience mirror-touch sensations from childhood. We’ve noticed that, for such people, observing pain evokes the most intense experience. One of the mirror-touch synaesthetes we’ve worked with, whom I’ll call Alan, has to work hard to reassure himself that he’s not actually experiencing the things he feels. ‘When I see someone being touched, I have to consciously remind myself that I am not being touched myself,’ he says. ‘When I see pain, it’s the same, except the feeling is more intense; it draws my attention more [and] makes me think, “Oh, I am watching pain and it is not there.”’ Such abilities might seem like miraculous gifts, not unlike the supernatural powers given to the character Lydia in the US television series Heroes — the ability to feel the emotions, thoughts, hopes, and desires of others — or the extraordinary sensory powers bestowed on the streetwise teens in the British comedy-drama Misfits. But these abilities often require careful managing. Mary, for example, finds it impossible to see violence depicted on screen. ‘I hate it when my husband watches violent movies,’ she told us. ‘I cannot watch them, because I feel overloaded. This is obviously not a pleasant experience and it’s a downside to my synaesthesia.’ But the sensations are not always overwhelming. ‘The upside,’ said Mary, ‘is that I also experience the nice touches, the caresses and the hugs. None of the experiences last for long, and for that I am grateful.’ The mirrored feeling is experienced in exactly the same part of the body — a finger for a finger, an arm for an arm, an eye for an eye. Ironically, just as we might imagine what a sensory-enhanced life might be like for a mirror-touch synaesthete, they, too, often try to imagine what a life — seemingly benumbed — must be like for the rest of us. For Alan, ‘Living with mirror-touch is at its most interesting when I stop and observe it, and think how fascinating it is that other people don’t experience it.’ But when his condition fails to fascinate him, it can be burdensome: ‘It becomes a bit overwhelming at times, especially in crowded places.’ Terms such as ‘overwhelming’ or ‘fascinating’ crop up a good deal when we talk to mirror-touch synaesthetes about their everyday experiences. One man I interviewed reported feeling cold in his fingertips whenever I touched a glass filled with ice. While mirrored thermal sensations are rare, they do share with mirrored touch sensations the quality of anatomical specificity. 
The mirrored feeling is experienced in exactly the same part of the body as the person actually experiencing the cold, heat or pain feels it — a finger for a finger, an arm for an arm, an eye for an eye. For most, the mirrored-touch sensation directly mirrors what they see — observing someone touch the left side of the face evokes in them a sensation on the right side of the face. But for a few, the mirrored sensation is anatomically mapped — if they see someone touch the left side of their face they’ll feel it on the left side of their own face. So while some synaesthetes treat observed touch as though looking directly in the mirror, others rotate their perspective to that of the observed person. With the help of functional brain imaging, we have begun to understand why some individuals possess this particular ability. We asked a group of mirror-touch synaesthetes to watch videos of other people being touched, and gave the same task to a group of people without mirror-touch synaesthesia. When we compared the brain scans of the two groups, we learnt that anyone, synaesthete or not, recruits parts of the brain involved in experiencing touch themselves (the mirror-touch system). Our brains mirror observed experiences. In people with mirror-touch synaesthesia, this empathetic system is over-excitable, and can activate rapidly to reach a threshold that allows them to experience tactile sensations literally. But we still don’t understand the precise mechanisms leading to this pattern of brain activity. Experimental findings seem to suggest that we all show a greater tendency to mirror observed touch when the person experiencing the event is more similar to ourselves. And this raises the possibility that the networks involved in distinguishing representations of oneself from others act as a gate to levels of excitability in those brain regions involved in mirroring. It is possible that, in people who experience mirror-touch sensations, the levels of excitability of the neural networks governing the ability to distinguish oneself from others leads to a change in normal mirroring mechanisms. Simply put, the brain of an individual who experiences mirror-touch sensations effectively treats the body of another person as though it were her own. Mirror-touch synaesthetes might be viewed as society’s natural empathisers — people wired to excel at putting themselves in another person’s shoes. This can be a delight, or a burden. Or a peculiarly human, if amplified, mix of the two. In studies I’ve undertaken with Jamie Ward, professor of psychology at the University of Sussex, we’ve found that people who experience mirror-touch show heightened levels of emotional reactive empathy — that is, the ability to understand and share the affective states or feelings of others. Another study I’ve been involved in, published in the Journal of Neuroscience (2011), indicates that individuals with mirror-touch are significantly better than the rest of us at recognising the facial emotions of others, though not necessarily better at recognising who those people are. Mirror-touch synaesthetes outperform control subjects when tasked with naming the facial emotions of people photographed smiling, fretting, frowning, puzzling, gurning and so forth. 
We were able to rule out any suggestion that their better scores were the result of greater effort, or that they were better with faces generally, because when tested on their ability to name the people in the photographs, those with mirror-touch performed no better than those without. One of the ways we understand other people’s emotions is by putting ourselves in their place. To understand if someone is angry, we simulate what it is like to experience anger ourselves. If someone is sad, we simulate sadness. When these simulation mechanisms are over-excitable, as in mirror-touch synaesthesia, they can spill over and facilitate other abilities, such as emotion-recognition, which also use mirroring processes. In this sense, people with mirror-touch can tell us how the degree to which we simulate the experiences of others can contribute to broader social-perception abilities, such as emotion-recognition and empathy. It is not just synaesthetes who possess apparent superpowers. ‘Supertasters’, for example, perceive stronger taste sensations from a variety of everyday substances, including alcohol, coffee and green tea. To supertasters, sugar tastes sweeter, the bitterness of, say, Brussels sprouts, is exaggerated, carbon dioxide bubbles in fizzy drinks are more pronounced, and there is more burn from oral irritants such as alcohol. On the whole, supertasting might be more of an ‘irritating power’ than a ‘superpower’. Indeed, some supertasters experience less enjoyment from food and drink and are therefore less likely to indulge, which might explain why female supertasters at least are thinner than non-tasters (people at the other end of the tasting spectrum). At root, supertasters have a greater number of fungiform papillae (the mushroom-shaped dots on the front of your tongue) and taste buds. There are no known complex neural pathways involved in this particular ability. But it’s a different matter with super-recognisers. These are a rare group of individuals who excel in the ability to remember faces. First reported in 2009 by researchers at Harvard University and Dartmouth College, these are people who really never forget a face. They can recognise people whom they might have seen only a few times in their lives or, as Brad Duchaine, one of the Dartmouth College research team, puts it, ‘an extra they saw in a movie years before’. Such people can identify casual staff that served them years earlier, a waitress at a motorway inn they passed through, a car-park attendant they once glimpsed, or a fellow department store shopper with whom they never interacted. The difficulties that this super-ability might cause in social settings are easy enough to imagine, and many super-recognisers will hide their memory of long-ago encounters to avoid discomfiting people who never even registered them. Work is ongoing to determine just how common super-recognisers are, but there is some evidence to suggest that they can put their skills to good use. For example, the Metropolitan Police Service in London used super-recognisers in their ranks to help identify individual rioters during the 2011 riots across the capital. So, some people can feel the sensations of others, some can pick up on the faintest emotions, and some can excel in their memory. What about the rest of us?
Are these abilities simply out of our reach or are there ways in which we might enhance these faculties in ourselves? With supertasting, it would seem that biological factors stand in our way, but what about developing a superior memory, or the ability to excel in emotion sensitivity? This is an avenue that many labs are now starting to pursue — testing the extent to which we can improve perception and memory by using training and techniques that help us to modulate brain activity in order to aid performance. By studying people with superior psychological skills we can begin to unpack key processes that aid their abilities, processes which, in turn, could be used to help the rest of us become a bit more ‘superhuman’. | Michael Banissy | https://aeon.co//essays/neuro-quirks-and-super-human-perceptions | |
Ecology and environmental sciences | In places once thick with farms and cities, human dispossession and war has cleared the ground for nature to return | I stepped out into the sunlight, scarcely able to believe what I had seen or, rather, what I had not. I stared at the hills around me, contrasting them with the old photos of those same hills I had seen. Where dense forests now grew, forming a high, closed canopy — in the valleys, over the hills and up the mountain walls until they shrank, many thousands of feet above sea level, into a low scrub of pines, which diminished further to a natural treeline — there had been almost nothing. In the photos, taken on the western side of Slovenia during the First World War, the land was almost treeless. So tall and impressive are the trees now and so thickly do they now cover the hills that when you see the old photos — taken, in ecological terms, such a short time ago — it is almost impossible to believe that you are looking at the same place. I have become so used to seeing the progress of destruction that scanning those images felt like watching a film played backwards. Tomaž Hartmann had driven for almost an hour along a forest track through Kočevski Rog to bring us here. The woods of beech and silver fir towered over us, in places almost touching across the road. Their roots sprawled over mossy boulders. They rolled down into limestone sinkholes: karstic craters. Karst topography — weathered limestone landscapes of chasms and caves, sinkholes, shafts and pavements — is named after this region of Slovenia, which is sometimes called the Kras or Karst plateau. The word means barren land. When Karst landscapes are grazed, they are rapidly denuded, but it was hard to connect the term with what I now saw. Where the road clung to the edge of a hill, I could see for many miles across the Dinaric Mountains. The mountains rambled across the former Yugoslavia, fading into ever fainter susurrations of blue. The entire range was furred with forest. Where the road sank into a pass, the darkness closed around us. Through the trunks I could see the air thicken, shade upon shade of green. A few yards from the road, a fox sat watching us. Its copper fur glowed like a cinder in the shadows, which cooled to charcoal in the tips of its ears. It raised its black stockings and loped away into the depths. Woodpeckers swung along the track ahead of us. The leaves of the beeches glittered in the silver light above our heads. The great firs grazed the sun, straight as lances. They looked as if they had been there forever. ‘All this,’ Tomaž told us, ‘has grown since the 1930s.’ He parked the car and we set off up a forest trail. Mushrooms nosed through the leaf litter beside the path. Saffron milk caps, orange and sickly green, curled up at the edges like Japanese ceramics. Dryad’s saddle, sulphur tuft and cauliflower fungus accreted around rotting stumps. Russulas — scarlet, mauve and gold — brightened the forest floor. Tomaž led us up a limestone slope towards a stand of virgin forest, the ancient core of the great woods that had regenerated over the past century. As we climbed, we stepped into a ragged fringe of cloud. Sounds were muffled. The trees loomed darkly out of the fog. As we walked, Tomaž spoke about the dynamism of the forest system: how it never reached a point of stasis, but tumbled through a constant cycle of change. He had noticed some major shifts, and knew that, as the climate warmed, there would be plenty more. 
Though he described himself as both a forester and a conservationist, he had no wish to interrupt this cycle, or to seek to select and freeze a particular phase in the succession from one state to another. He sought only to protect the forests, as far as his job permitted, from destruction. Ahead of us something dark and compact shot across the path in a blur and disappeared into the undergrowth: probably a young wild boar, Tomaž said. Then, though it was not clear where the transition had occurred, we found ourselves in the primeval core of the forest. The trees we had walked past until then were impressive, but these were built on a different scale. The beeches grew, unbranched — smooth pillars wrapped in elephant skin — for 100 feet until they blossomed, like giant gardenias, into a leafy plateau in the forest canopy. Silver firs pushed past them, the biggest topping out at almost 150 feet high. Only where they had fallen could you appreciate the scale of their trunks. The forest had entered a cycle Tomaž had not seen before, in which many of the giants had perished. Some had died where they stood, and remained upright, reamed with beetle and woodpecker holes, sprouting hoof fungus and razor strop. They looked as if a whisper of wind could blow them down. Others now stretched across the rocks and craters, sometimes blocking our path, sometimes suspended above our heads. Among the trunks lying on the ground, some were so thick that I could scarcely see over them. Where they had fallen, thickets of saplings crowded into the light. Seeing the profusion of fungus and insect life the dead wood harboured, I was reminded of the old ecologists’ aphorism: there is more life in dead trees than there is in living trees. The tidy-minded forestry so many nations practise deprives many species of their habitats. On a large rotten log that had lost its bark and was now furry with green algae, Tomaž showed us two sets of four white marks: deep parallel scratches where a bear had sharpened its claws. He told us that he had seen plenty of bears in the forest, but never a wolf or a lynx — though they are also abundant here. Just knowing that they were there enriched and electrified every moment he spent in the forest. I felt it too, like a third beat of the heart. The forest seemed to bristle with possibility. Here, to mangle W H Auden, nature’s jungle growths were unabated, her exorbitant monsters unabashed. But this great rewilding, Tomaž explained, had come at a price. It was the accidental result of a series of human tragedies. Some 150 years ago, just 30 per cent of the Kočevje region was covered by trees; now, 95 per cent of it is forested. Much of the forest was preserved by the princes of Auersperg as hunting estates. So obsessed by hunting were they, as princes often seem to be, that they and the other great lords of the Habsburg monarchy in Slovenia and Croatia drew up an official declaration of friendship with the bear, signed and stamped with their great seals, in which they agreed to sustain its numbers so that they could continue to pursue it. The role the bears played in this negotiation is unrecorded. The revolutions of 1848 brought feudalism to an end in central Europe. Local farmers lost their rights to graze common land, but acquired their own private plots. At around the same time, imports of cheap wool from New Zealand began undermining the European industry. 
By the end of the 19th century, many peasant farmers had sold their land and either moved to the cities or emigrated to America. The Depression of the 1930s further extended the woods — to around 50 per cent of Kočevje — as more people departed. But the greatest expansion of the forest took place as a result of what happened in the following decade. Most of the population of south-western Slovenia — around 33,000 people — was ethnic German. They kept sheep and goats in the hills and ran much of the trade in the towns. Under King Aleksandr’s autocracy in the ten years before the Second World War, the Germans of Yugoslavia, around half a million in total, suffered discrimination and exclusion. In response, many of them joined German nationalist movements, some of which allied themselves to the Nazis. By 1941, when Hitler’s army invaded Yugoslavia, more than 60 per cent of its ethnic Germans had joined an organisation called the Kulturbund, which became absorbed into Himmler’s euphemistically titled Volksdeutsche Mittelstelle, or Ethnic Germans’ Welfare Office. Hitler ceded south-western Slovenia to Italy, and the Nazis forcibly relocated many of the Yugoslav Germans to the Third Reich, to preserve their ‘ethnic purity’ and protect them from attacks by partisans. Some of the Germans of Kočevje were transferred to eastern Slovenia, some removed to other lands under German rule. Almost a million people died in the Yugoslavian civil strife triggered by the Nazi invasion. Some of these great crimes were committed by the Prinz Eugen Division of the SS, among whose members were Yugoslavian ethnic Germans. They massacred Jews, partisans and Communists, as well as people believed to sympathise with them. After the Axis forces were routed, Tito’s communist government found it convenient to blame ethnic Germans for many of the horrors committed by other people. This was, it seems, easier than facing the truth: that atrocities were committed by Croats, Serbs, Bosnians, Albanians, Hungarians, Nazis, communists, monarchists, Orthodox Christians, Catholics and Muslims. Almost all the Yugoslavian Germans who did not flee the country with the Axis armies were either expelled by Tito’s government or interned, often in forced labour camps. Some were taken by the Soviet Union’s Red Army to camps in Ukraine. Within a few years of the end of the war in Yugoslavia, Slovenia’s ethnic German population had dropped by some 98 per cent. Many other collaborators were also killed. The six battalions of the Slovenian Home Guard fled with the retreating German troops to Austria in May 1945. They were forcibly repatriated by the British. Driving with Tomaž through the forests of Kočevski Rog, we had seen beside the road great trunks carved, like totem poles, into the tortured figures of Christian martyrs. They marked the sinkholes beside which some thousands of the collaborators were lined up and machine-gunned. The partisans then used explosives to make the craters collapse, burying the corpses. The barren lands of Kočevje, whose population had been relocated and dispersed first by the Nazis, then by the Red Army and the communist government, were never recolonised. When the farms were abandoned and the pastures no longer grazed by sheep and goats, the seeds that rained into them from the neighbouring woods were allowed to sprout once more. 
Thus the land has been repopulated by trees. In the Americas — North, Meso and South — the first Europeans to arrive in the 15th and 16th centuries reported dense settlement and large-scale farming. Some of them were simply not believed. Spaniards such as the explorer Francisco de Orellana and the missionary Brother Gaspar de Carvajal, who travelled the length of the Amazon river in 1542, claimed that they had seen walled cities in which many thousands of people lived, raised highways and extensive farming along its banks. When later expeditions visited the river, they found no trace of them, just dense forest to the water’s edge and small scattered bands of hunter-gatherers. Orellana and Carvajal’s reports were dismissed as the ravings of fantasists, seeking to boost commercial interest in the lands they had explored. It was not until the late 20th century that investigations by archaeologists such as Anna Roosevelt at the University of Illinois at Chicago and Michael Heckenberger at the University of Florida suggested that Orellana and Carvajal’s accounts were probably accurate. In parts of the Americas previously believed to have been scarcely inhabited, Heckenberger and his colleagues found evidence of garden cities surrounded by major earthworks and wooden palisades, built on grids and transected by broad avenues. In some places they unearthed causeways, bridges and canals. The towns were connected to their satellite villages by road networks that were planned and extensive. These were advanced agricultural civilisations, maintaining fish farms as well as arable fields and orchards. As in Slovenia, what appeared to be primordial forest had grown over the traces of a vanished population. It appears that European diseases such as smallpox, measles, diphtheria and the common cold were brought to the Caribbean coast of South America by explorers and early colonists and then passed down indigenous trade routes into the heart of the continent, where they raged through densely peopled settlements before any Europeans reached them. So feracious is the vegetation of the Amazon that it would have obliterated all visible traces of the civilisations built by its people within a few years of their dissolution. The great várzea (floodplain) forests, whose monstrous trees inspired such wonder among 18th and 19th century expeditions, were probably not the primordial ecosystems the explorers imagined them to be. Gruesome events — some accidental, others deliberately genocidal — wiped out the great majority of the hemisphere’s people and the rich and remarkable societies that they’d created. In many parts of the Americas, the only humans who remained were — like the survivors in a post-holocaust novel — hunter-gatherers. Some belonged to tribes that had long practised that art, others were forced to re-acquire lost skills as a result of civilisational collapse. Imported disease made cities lethal: only dispersed populations had a chance of avoiding epidemics. Dispersal into small bands of hunter-gatherers made economic complexity impossible. The forests blotted out memories of what had gone before. Humanity’s loss was nature’s gain. The impacts of the American genocides might have been felt throughout the northern hemisphere. 
Dennis Bird and Richard Nevle, earth scientists at Stanford University, have speculated that the recovering forests drew so much carbon dioxide out of the atmosphere — about 10 parts per million — that they could have helped to trigger the cooling between the 16th and 19th centuries known as the Little Ice Age. The short summers and long cold winters, the ice fairs on the Thames and the deep cold depicted by Pieter Bruegel might have been caused partly by the extermination of the Native Americans. In the Soča valley, in north-western Slovenia, Jernej Stritih, a clever, laconic head of department in the Slovenian government, with a thick beard and a splendid moustache, whom we had befriended in Ljubljana, took us to a restaurant run by a friend in the front room of his farmhouse. The proprietor owned a small flock of sheep, which were kept for show and to make cheese to sell to tourists. We had seen them on display that morning in the Trenta Fair, massive beasts weighed down by trailing yellow coats. They had won first prize, and now a large gilt cup stood on a table, glimmering in the low brown light, while our host, in a leather waistcoat and bushy side-whiskers, drank and talked with his friends. From time to time he would stop talking and, almost as if he were unaware that he was doing so, bend down to play the dulcimer on the table before him, while the other men continued their conversation. As we ate, Jernej explained that our host was one of the last shepherds in the region. Because there was no longer any arable production in the valley, the few remaining sheep could stay in the lowlands and were never led into the mountains. Here, by contrast to Kočevje, there had been no mass dispossession of local people. A different social tragedy had been engineered. In the 1950s, he told us, Marshal Tito had banned the goat. The ostensible purpose was to protect the environment, but doubtless he also sought to drag the peasantry out of what Karl Marx and Friedrich Engels called its ‘rural idiocy’ and press it into the urban proletariat. (The peasants of eastern Europe had perversely failed to fulfil the Communist Manifesto’s prediction that they would ‘decay and finally disappear in the face of modern industry’). Without goats, which browsed back the scrub, the pastures became unsuitable for sheep. The rewilding of the western side of Slovenia, the rapid regrowth of forests there and the recovery of its populations of bears, wolves, lynx, wild boar, ibex, martens, giant owls and other remarkable creatures, took place at the expense of its human population. This is not to suggest that it continues to generate social tragedy. On the contrary, this region has become a lucrative destination for high-end tourism, which supports what was, when we visited, a buoyant local economy. The forests and their wildlife, the mountains, repopulated by ibex and chamois, the caves with their endemic species of blind salamander, known to locals as the human fish on account of its smooth pink skin, the rivers with their steady flow and excellent whitewater rafting, the extraordinary beauty of this regenerated land, draw people from the rest of Slovenia and from all over Europe and beyond. Talking to many Slovenians, it became clear that the integrity of the natural environment was now a source of national pride. None of this, however, is to deny a disquieting truth. 
Slovenia is just one example of a global phenomenon: most of the rewilding that has taken place on Earth so far has happened as a result of humanitarian disasters. This is an adapted and reprinted extract from Feral: Searching for enchantment on the frontiers of rewilding (Penguin), by George Monbiot. Copyright © George Monbiot, 2013. | George Monbiot | https://aeon.co//essays/why-humanitarian-disasters-are-good-for-nature | |
History of ideas | Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality | Theoretical physics is beset by a paradox that remains as mysterious today as it was a century ago: at the subatomic level things are simultaneously particles and waves. Like the duck-rabbit illusion first described in 1899 by the Polish-born American psychologist Joseph Jastrow, subatomic reality appears to us as two different categories of being. But there is another paradox in play. Physics itself is riven by the competing frameworks of quantum theory and general relativity, whose differing descriptions of our world eerily mirror the wave-particle tension. When it comes to the very big and the extremely small, physical reality appears to be not one thing, but two. Where quantum theory describes the subatomic realm as a domain of individual quanta, all jitterbug and jumps, general relativity depicts happenings on the cosmological scale as a stately waltz of smooth flowing space-time. General relativity is like Strauss — deep, dignified and graceful. Quantum theory, like jazz, is disconnected, syncopated, and dazzlingly modern. Physicists are deeply aware of the schizophrenic nature of their science and long to find a synthesis, or unification. Such is the goal of a so-called ‘theory of everything’. However, to non-physicists, these competing lines of thought, and the paradoxes they entrain, can seem not just bewildering but absurd. In my experience as a science writer, no other scientific discipline elicits such contradictory responses. This schism was brought home to me starkly some months ago when, in the course of a fortnight, I happened to participate in two public discussion panels, one with a cosmologist at Caltech, Pasadena, the other with a leading literary studies scholar from the University of Southern California. On the panel with the cosmologist, a researcher whose work I admire, the discussion turned to time, about which he had written a recent, and splendid, book. Like philosophers, physicists have struggled with the concept of time for centuries, but now, he told us, they had locked it down mathematically and were on the verge of a final state of understanding. In my Caltech friend’s view, physics is a progression towards an ever more accurate and encompassing Truth. My literary theory panellist was having none of this. A Lewis Carroll scholar, he had joined me for a discussion about mathematics in relation to literature, art and science. For him, maths was a delightful form of play, a ludic formalism to be admired and enjoyed; but any claims physicists might make about truth in their work were, in his view, ‘nonsense’. This mathematically based science, he said, was just ‘another kind of storytelling’. On the one hand, then, physics is taken to be a march toward an ultimate understanding of reality; on the other, it is seen as no different in status to the understandings handed down to us by myth, religion and, no less, literary studies. Because I spend my time about equally in the realms of the sciences and arts, I encounter a lot of this dualism. Depending on whom I am with, I find myself engaging in two entirely different kinds of conversation. Can we all be talking about the same subject? Many physicists are Platonists, at least when they talk to outsiders about their field. 
They believe that the mathematical relationships they discover in the world about us represent some kind of transcendent truth existing independently from, and perhaps a priori to, the physical world. In this way of seeing, the universe came into being according to a mathematical plan, what the British physicist Paul Davies has called ‘a cosmic blueprint’. Discovering this ‘plan’ is a goal for many theoretical physicists and the schism in the foundation of their framework is thus intensely frustrating. It’s as if the cosmic architect has designed a fiendish puzzle in which two apparently incompatible parts must be fitted together. Both are necessary, for both theories make predictions that have been verified to a dozen or so decimal places, and it is on the basis of these theories that we have built such marvels as microchips, lasers, and GPS satellites. Quite apart from the physical tensions that exist between them, relativity and quantum theory each pose philosophical problems. Are space and time fundamental qualities of the universe, as general relativity suggests, or are they byproducts of something even more basic, something that might arise from a quantum process? Looking at quantum mechanics, huge debates swirl around the simplest situations. Does the universe split into multiple copies of itself every time an electron changes orbit in an atom, or every time a photon of light passes through a slit? Some say yes, others say absolutely not. Theoretical physicists can’t even agree on what the celebrated waves of quantum theory mean. What is doing the ‘waving’? Are the waves physically real, or are they just mathematical representations of probability distributions? Are the ‘particles’ guided by the ‘waves’? And, if so, how? The dilemma posed by wave-particle duality is the tip of an epistemological iceberg on which many ships have been broken and wrecked. Undeterred, some theoretical physicists are resorting to increasingly bold measures in their attempts to resolve these dilemmas. Take the ‘many-worlds’ interpretation of quantum theory, which proposes that every time a subatomic action takes place the universe splits into multiple, slightly different, copies of itself, with each new ‘world’ representing one of the possible outcomes. When this idea was first proposed in 1957 by the American physicist Hugh Everett, it was considered an almost lunatic-fringe position. Even 20 years later, when I was a physics student, many of my professors thought it was a kind of madness to go down this path. Yet in recent years the many-worlds position has become mainstream. The idea of a quasi-infinite, ever-proliferating array of universes has been given further credence as a result of being taken up by string theorists, who argue that every mathematically possible version of the string theory equations corresponds to an actually existing universe, and estimate that there are 10 to the power of 500 different possibilities. To put this in perspective: physicists believe that in our universe there are approximately 10 to the power of 80 subatomic particles. In string cosmology, the totality of existing universes exceeds the number of particles in our universe by more than 400 orders of magnitude. Nothing in our experience compares to this unimaginably vast number. 
Every universe that can be mathematically imagined within the string parameters — including ones in which you exist with a prehensile tail, to use an example given by the American string theorist Brian Greene — is said to be manifest somewhere in a vast supra-spatial array ‘beyond’ the space-time bubble of our own universe. What is so epistemologically daring here is that the equations are taken to be the fundamental reality. The fact that the mathematics allows for gazillions of variations is seen to be evidence for gazillions of actual worlds. This kind of reification of equations is precisely what strikes some humanities scholars as childishly naive. At the very least, it raises serious questions about the relationship between our mathematical models of reality, and reality itself. While it is true that in the history of physics many important discoveries have emerged from revelations within equations — Paul Dirac’s formulation for antimatter being perhaps the most famous example — one does not need to be a cultural relativist to feel sceptical about the idea that the only way forward now is to accept an infinite cosmic ‘landscape’ of universes that embraces every conceivable version of world history, including those in which the Middle Ages never ended or Hitler won. In the 30 years since I was a student, physicists’ interpretations of their field have increasingly tended toward literalism, while the humanities have tilted towards postmodernism. Thus a kind of stalemate has ensued. Neither side seems inclined to contemplate more nuanced views. It is hard to see ways out of this tunnel, but in the work of the late British anthropologist Mary Douglas I believe we can find a tool for thinking about some of these questions. On the surface, Douglas’s great book Purity and Danger (1966) would seem to have nothing to do with physics; it is an inquiry into the nature of dirt and cleanliness in cultures across the globe. Douglas studied taboo rituals that deal with the unclean, but her book ends with a far-reaching thesis about human language and the limits of all language systems. Given that physics is couched in the language-system of mathematics, her argument is worth considering here. In a nutshell, Douglas notes that all languages parse the world into categories; in English, for instance, we call some things ‘mammals’ and other things ‘lizards’ and have no trouble recognising the two separate groups. Yet there are some things that do not fit neatly into either category: the pangolin, or scaly anteater, for example. Though pangolins are warm-blooded like mammals and birth their young, they have armoured bodies like some kind of bizarre lizard. Such definitional monstrosities are not just a feature of English. Douglas notes that all category systems contain liminal confusions, and she proposes that such ambiguity is the essence of what is seen to be impure or unclean. Whatever doesn’t parse neatly in a given linguistic system can become a source of anxiety to the culture that speaks this language, calling forth special ritual acts whose function, Douglas argues, is actually to acknowledge the limits of language itself. 
In the Lele culture of the Congo, for example, this epistemological confrontation takes place around a special cult of the pangolin, whose initiates ritualistically eat the abominable animal, thereby sacralising it and processing its ‘dirt’ for the entire society. ‘Powers are attributed to any structure of ideas,’ Douglas writes. We all tend to think that our categories of understanding are necessarily real. ‘The yearning for rigidity is in us all,’ she continues. ‘It is part of our human condition to long for hard lines and clear concepts’. Yet when we have them, she says, ‘we have to either face the fact that some realities elude them, or else blind ourselves to the inadequacy of the concepts’. It is not just the Lele who cannot parse the pangolin: biologists are still arguing about where it belongs on the genetic tree of life. As Douglas sees it, cultures themselves can be categorised in terms of how well they deal with linguistic ambiguity. Some cultures accept the limits of their own language, and of language itself, by understanding that there will always be things that cannot be cleanly parsed. Others become obsessed with ever-finer levels of categorisation as they try to rid their system of every pangolin-like ‘duck-rabbit’ anomaly. For such societies, Douglas argues, a kind of neurosis ensues, as the project of categorisation takes ever more energy and mental effort. If we take this analysis seriously, then, in Douglas’s terms, might it be that particle-waves are our pangolins? Perhaps what we are encountering here is not so much the edge of reality, but the limits of the physicists’ category system. In its modern incarnation, physics is grounded in the language of mathematics. It is a so-called ‘hard’ science, a term meant to imply that physics is unfuzzy — unlike, say, biology, whose classification systems have always been disputed. Based in mathematics, the classifications of physicists are supposed to have a rigour that other sciences lack, and a good deal of the near-mystical discourse that surrounds the subject hinges on ideas about where the mathematics ‘comes from’. According to Galileo Galilei and other instigators of what came to be known as the Scientific Revolution, nature was ‘a book’ that had been written by God, who had used the language of mathematics because it was seen to be Platonically transcendent and timeless. While modern physics is no longer formally tied to Christian faith, its long association with religion lingers in the many references that physicists continue to make about ‘the mind of God’, and many contemporary proponents of a ‘theory of everything’ remain Platonists at heart. In order to articulate a more nuanced conception of what physics is, we need to offer an alternative to Platonism. We need to explain how the mathematics ‘arises’ in the world, in ways other than assuming that it was put there by some kind of transcendent being or process. To approach this question dispassionately, it is necessary to abandon the beautiful but loaded metaphor of the cosmic book — and all its authorial resonances — and focus, not on the creation of the world, but on the creation of physics as a science. When we say that ‘mathematics is the language of physics’, we mean that physicists consciously comb the world for patterns that are mathematically describable; these patterns are our ‘laws of nature’. 
Since mathematical patterns proceed from numbers, much of the physicist’s task involves finding ways to extract numbers from physical phenomena. In the 16th and 17th centuries, philosophical discussion referred to this as the process of ‘quantification’; today we call it measurement. One way of thinking about modern physics is as an ever more sophisticated process of quantification that multiplies and diversifies the ways we extract numbers from the world, thus giving us the raw material for our quest for patterns or ‘laws’. This is no trivial task. Indeed, the history of physics has turned on the question of what can be measured and how. Stop for a moment and take a look around you. What do you think can be quantified? What colours and forms present themselves to your eye? Is the room bright or dark? Does the air feel hot or cold? Are birds singing? What other sounds do you hear? What textures do you feel? What odours do you smell? Which, if any, of these qualities of experience might be measured? In the early 14th century, a group of scholarly monks known as the calculatores at the University of Oxford began to think about this problem. One of their interests was motion, and they were the first to recognise the qualities we now refer to as ‘velocity’ and ‘acceleration’ — the former being the rate at which a body changes position, the latter, the rate at which the velocity itself changes. It’s a startling thought, in an age when we can read the speed of our cars from our digitised dashboards, that somebody had to discover ‘velocity’. Yet despite the calculatores’ advances, the science of kinematics made barely any progress until Galileo and his contemporaries took up the baton in the late-16th century. In the intervening time, the process of quantification had to be extracted from a burden of dreams in which it became, frankly, bogged down. For along with motion, the calculatores were also interested in qualities such as sin and grace, and they tried to find ways to quantify these as well. Between the calculatores and Galileo, students of quantification had to work out what they were going to exclude from the project. To put it bluntly, in order for the science of physics to get underway, the vision had to be narrowed. How, exactly, this narrowing was to be achieved was articulated by the 17th-century French mathematician and philosopher René Descartes. What could a mathematically based science describe? Descartes’s answer was that the new natural philosophers must restrict themselves to studying matter in motion through space and time. Maths, he said, could describe the extended realm — or res extensa. Thoughts, feelings, emotions and moral consequences, he located in the ‘realm of thought’, or res cogitans, declaring them inaccessible to quantification, and thus beyond the purview of science. In making this distinction, Descartes did not divide mind from body (that had been done by the Greeks); he merely clarified the subject matter for a new physical science. So what else apart from motion could be quantified? To a large degree, progress in physics has been made by slowly extending the range of answers. Take colour. At first blush, redness would seem to be an ineffable and irreducible quale. In the late 19th century, however, physicists discovered that each colour in the rainbow, when refracted through a prism, corresponds to a different wavelength of light. Red light has a wavelength of around 700 nanometres, violet light around 400 nanometres. 
Colour can be correlated with numbers — both the wavelength and frequency of an electromagnetic wave. Here we have one half of our duality: the wave. The discovery of electromagnetic waves was in fact one of the great triumphs of the quantification project. In the 1820s, Michael Faraday noticed that, if he sprinkled iron filings around a magnet, the fragments would spontaneously assemble into a pattern of lines that, he conjectured, were caused by a ‘magnetic field’. Physicists today accept fields as a primary aspect of nature, but at the start of the Industrial Revolution, when philosophical mechanism was at its peak, Faraday’s peers scoffed. Invisible fields smacked of magic. Yet, later in the 19th century, James Clerk Maxwell showed that magnetic and electric fields were linked by a precise set of equations — today known as Maxwell’s Laws — that enabled him to predict the existence of radio waves. The quantification of these hitherto unsuspected aspects of our world — these hidden invisible ‘fields’ — has led to the whole gamut of modern telecommunications on which so much of modern life is now staged. Turning to the other side of our duality — the particle — with a burgeoning array of electrical and magnetic equipment, physicists in the late 19th and early 20th centuries began to probe matter. They discovered that atoms were composed of parts holding positive and negative charge. The negative electrons were found to revolve around a positive nucleus in pairs, with each member of the pair in a slightly different state, or ‘spin’. Spin turns out to be a fundamental quality of the subatomic realm. Matter particles, such as electrons, have a spin value of one half. Particles of light, or photons, have a spin value of one. In short, one of the qualities that distinguishes ‘matter’ from ‘energy’ is the spin value of its particles. We have seen how light acts like a wave, yet experiments over the past century have shown that under many conditions it behaves instead like a stream of particles. In the photoelectric effect (the explanation of which won Albert Einstein his Nobel Prize in 1921), individual photons knock electrons out of their atomic orbits. In Thomas Young’s famous double-slit experiment of 1805, light behaves simultaneously like waves and particles. Here, a stream of detectably separate photons is mysteriously guided by a wave whose effect becomes manifest over a long period of time. What is the source of this wave and how does it influence billions of isolated photons separated by great stretches of time and space? The late Nobel laureate Richard Feynman — a pioneer of quantum field theory — stated in 1965 that the double-slit experiment lay at ‘the heart of quantum mechanics’. Indeed, physicists have been debating how to interpret its proof of light’s duality for the past 200 years. Just as waves of light sometimes behave like particles of matter, particles of matter can sometimes behave like waves. In many situations, electrons are clearly particles: we fire them from electron guns inside the cathode-ray tubes of old-fashioned TV sets and each electron that hits the screen causes a tiny phosphor to glow. Yet, in orbiting around atoms, electrons behave like three-dimensional waves. Electron microscopes put the wave-quality of these particles to work; here, in effect, they act like short wavelengths of light. 
Wave-particle duality is a core feature of our world. Or rather, we should say, it is a core feature of our mathematical descriptions of our world. The duck-rabbits are everywhere, colonising the imagery of physicists like, well, rabbits. But what is critical to note here is that however ambiguous our images, the universe itself remains whole and is manifestly not fracturing into schizophrenic shards. It is this tantalising wholeness in the thing itself that drives physicists onward, like an eternally beckoning light that seems so teasingly near yet is always out of reach. Instrumentally speaking, the project of quantification has led physicists to powerful insights and practical gain: the computer on which you are reading this article would not exist if physicists hadn’t discovered the equations that describe the band-gaps in semiconducting materials. Microchips, plasma screens and cellphones are all byproducts of quantification and, every decade, physicists identify new qualities of our world that are amenable to measurement, leading to new technological possibilities. In this sense, physics is not just another story about the world: it is a qualitatively different kind of story to those told in the humanities, in myths and religions. No language other than maths is capable of expressing interactions between particle spin and electromagnetic field strength. The physicists, with their equations, have shown us new dimensions of our world. That said, we should be wary of claims about ultimate truth. While quantification, as a project, is far from complete, it is an open question as to what it might ultimately embrace. Let us look again at the colour red. Red is not just an electromagnetic phenomenon, it is also a perceptual and contextual phenomenon. Stare for a minute at a green square then look away: you will see an afterimage of a red square. No red light has been presented to your eyes, yet your brain will perceive a vivid red shape. As Goethe argued in the late-18th century, and Edwin Land (who invented Polaroid film in 1932) echoed, colour cannot be reduced to purely prismatic effects. It exists as much in our minds as in the external world. To put this into a personal context, no understanding of the electromagnetic spectrum will help me to understand why certain shades of yellow make me nauseous, while electric orange fills me with joy. Descartes was no fool; by parsing reality into the res extensa and res cogitans he captured something critical about human experience. You do not need to be a hard-core dualist to imagine that subjective experience might not be amenable to mathematical law. For Douglas, ‘the attempt to force experience into logical categories of non-contradiction’ is the ‘final paradox’ of an obsessive search for purity. ‘But experience is not amenable [to this narrowing],’ she insists, and ‘those who make the attempt find themselves led into contradictions.’ Quintessentially, the qualities that are amenable to quantification are those that are shared. All electrons are essentially the same: given a set of physical circumstances, every electron will behave like any other. But humans are not like this. It is our individuality that makes us so infuriatingly human, and when science attempts to reduce us to the status of electrons it is no wonder that professors of literature scoff. 
Douglas’s point about attempting to corral experience into logical categories of non-contradiction has obvious application to physics, particularly to recent work on the interface between quantum theory and relativity. One of the most mysterious findings of quantum science is that two or more subatomic particles can be ‘entangled’. Once particles are entangled, what we do to one immediately affects the other, even if the particles are hundreds of kilometres apart. Yet this contradicts a basic premise of special relativity, which states that no signal can travel faster than the speed of light. Entanglement suggests that either quantum theory or special relativity, or both, will have to be rethought. More challenging still, consider what might happen if we tried to send two entangled photons to two separate satellites orbiting in space, as a team of Chinese physicists, working with the entanglement theorist Anton Zeilinger, is currently hoping to do. Here the situation is compounded by the fact that what happens in near-Earth orbit is affected by both special and general relativity. The details are complex, but suffice it to say that special relativity suggests that the motion of the satellites will cause time to appear to slow down, while the effect of the weaker gravitational field in space should cause time to speed up. Given this, it is impossible to say which of the photons would be received first at which satellite. To an observer on the ground, both photons should appear to arrive at the same time. Yet to an observer on satellite one, the photon at satellite two should appear to arrive first, while to an observer on satellite two the photon at satellite one should appear to arrive first. We are in a mire of contradiction and no one knows what would in fact happen here. If the Chinese experiment goes ahead, we might find that some radical new physics is required. You will notice that the ambiguity in these examples focuses on the issue of time — as do many paradoxes relating to relativity and quantum theory. Time indeed is a huge conundrum throughout physics, and paradoxes surround it at many levels of being. In Time Reborn: From the Crisis in Physics to the Future of the Universe (2013) the American physicist Lee Smolin argues that for 400 years physicists have been thinking about time in ways that are fundamentally at odds with human experience and therefore wrong. In order to extricate ourselves from some of the deepest paradoxes in physics, he says, its very foundations must be reconceived. In an op-ed in New Scientist in April this year, Smolin wrote: ‘The idea that nature consists fundamentally of atoms with immutable properties moving through unchanging space, guided by timeless laws, underlies a metaphysical view in which time is absent or diminished. This view has been the basis for centuries of progress in science, but its usefulness for fundamental physics and cosmology has come to an end.’ In order to resolve contradictions between how physicists describe time and how we experience time, Smolin says physicists must abandon the notion of time as an unchanging ideal and embrace an evolutionary concept of natural laws. This is radical stuff, and Smolin is well-known for his contrarian views — he has been an outspoken critic of string theory, for example. But at the heart of his book is a worthy idea: Smolin is against the reflexive reification of equations. 
As our mathematical descriptions of time are so starkly in conflict with our lived experience of time, it is our descriptions that will have to change, he says. To put this into Douglas’s terms, the powers that have been attributed to physicists’ structure of ideas have been overreaching. ‘Attempts to force experience into logical categories of non-contradiction’ have, she would say, inevitably failed. From the contemplation of wave-particle pangolins we have been led to the limits of the linguistic system of physicists. Like Smolin, I have long believed that the ‘block’ conception of time that physics proposes is inadequate, and I applaud this thrilling, if also at times highly speculative, book. Yet, if we can fix the current system by reinventing its axioms, then (assuming that Douglas is correct) even the new system will contain its own pangolins. In the early days of quantum mechanics, Niels Bohr liked to say that we might never know what ‘reality’ is. Bohr used John Wheeler’s coinage, calling the universe ‘a great smoky dragon’, and claiming that all we could do with our science was to create ever more predictive models. Bohr’s positivism has gone out of fashion among theoretical physicists, replaced by an increasingly hard-core Platonism. To say, as some string theorists do, that every possible version of their equations must be materially manifest strikes me as a kind of berserk literalism, reminiscent of the old Ptolemaics who used to think that every mathematical epicycle in their descriptive apparatus must represent a physically manifest cosmic gear. We are veering here towards Douglas’s view of neurosis. Will we accept, at some point, that there are limits to the quantification project, just as there are to all taxonomic schemes? Or will we be drawn into ever more complex and expensive quests — CERN mark two, Hubble, the sequel — as we try to root out every lingering paradox? In Douglas’s view, ambiguity is an inherent feature of language that we must face up to, at some point, or drive ourselves into distraction. | Margaret Wertheim | https://aeon.co//essays/the-paradoxes-that-sit-at-the-very-core-of-physics | |
Mental health | I had a phobia of anyone hearing me play the piano. With practice, could my memory help me overcome it? | Most days, I play the piano for at least an hour, usually longer. But my dedication springs from fear, not love. Of course, many soloists suffer from perfectly rational performance nerves, but my fear is less rational: it’s the fear of anybody, anybody at all, actually hearing me play and it’s so powerful it stops me from having piano lessons. Just thinking about somebody listening makes my brain and fingers freeze, the way your legs freeze in frightening dreams. Since I live in a city terrace, I can only play at all because I have a middle-sized Yamaha grand piano with a silencing facility. With the silencer on, nobody but me can hear anything but the clacking of the keys. It’s hard to know from where such a specific fear stems: I have no fear, for example, of speaking in public. I don’t even think the fear would extend to a different instrument. But as soon as I sit at the piano, I feel my father’s contempt for pianists who fancy themselves better than they are. If I could shout ‘I know my limitations, really I do!’ perhaps my fear would diminish. Instead, I’ve been trying to kick the fear into touch by memorising Bach’s Goldberg Variations. If I could memorise them, then with the score as backup, I would have double anti-fear ammunition. Myth has it that Bach composed his Variations to cure insomnia, though in scholarly circles this is now largely disbelieved. And he certainly didn’t compose them to cure a phobia. However, for me, the decision to memorise them made sense. First, the 30 variations are short, none more than two pages of musical score. Second, the girls in my novel-in-progress learn the variations, so I would learn with them — ‘method writing’, like ‘method acting’. Yet, at the same time, my decision made no sense. Short does not mean easy. Composed in separate and distinct voice-lines rather than tune and accompaniment, the Variations are difficult enough on the double manual harpsichord for which they were originally designed. On a piano, the technical demands of some of them are certainly beyond me. But those demands were part of the attraction. Hours of repetitive practice would be needed, and if I practised hard enough to memorise them, even knowing that music is not just a parade of memorised notes, perhaps my playing would become more automatic. In other words, could my memory beat my fear? Memory can, of course, trigger fear. Traumatic memories release adrenaline which, in turn, generates a response in the amygdala, the part of the brain that processes emotions. Just as Pavlov’s dogs learned to salivate when they heard a bell ring, part of my fear is certainly memorised habit: I expect to feel fear when I play the piano aloud, so I do. But this kind of memory is different in type to that used to learn a musical score. Techniques for memorising musical scores involve not the amygdala but the hippocampus — the part of the brain where memories of our experiences are ‘supported’, to use the scientific expression. Such techniques include photographic memory — seeing the score in the mind — and analytic memory, through which study of the composer’s methods and preferred musical patterns offer clues as to what comes next. 
Sadly, however, the photographic memory of my youth, such a useful exam tool, has long since collapsed, and I’m not musically trained enough to find analytic memory helpful. Still, all was not lost. I would employ ‘chunking’ (segments learned individually, then strung together), and I would embed these chunks into my muscle memory, or, to use its scientific name, procedural memory, since muscles themselves obviously have no memory. Procedural memory works by encoding and storing repeated actions in different sections of the brain: timing and coordination in the cerebellum, riding a bike in the putamen. Sometimes it’s positively unhelpful, for example when it generates ‘phantom limb’ sensations in amputees. Yet procedural memory offers one great hope to the fearful: since it’s not connected to emotional or experiential memory, and as long as you can reduce your automatic fear response, then even if fear paralyses you for a few seconds, your procedural memory should keep you going until you get the fear under control. Better still, procedural memory, being hard-wired into our brain circuitry, can survive mental catastrophe. The British musicologist Clive Wearing suffered a viral attack on his central nervous system, which left him with a memory capacity of between seven and thirty seconds. Nonetheless, he can still play the piano from his procedural memory — though he has no recollection of actually memorising the music. All this made me consider whether, were I to be afflicted with dementia, the Goldbergs would remain with me. And so I began, carefully marking up fingering and repeating phrases until they were embedded, with the silencer firmly on, lest my endlessly repeated phrases drive the neighbours to murder. Some days, I forced myself to turn the silencer off because I needed to feel the fear to overcome it. Progress was extremely time-consuming, and this set me wondering. Why do concert pianists who do not suffer from paralysing fear bother to memorise, since the time drain must severely curtail their repertoire? In the year since I began, I’ve barely got to Variation 10. I know that young musicians carry dozens of works in their heads, in thrall perhaps to Clara Schumann’s declaration that playing from memory ‘gave her wings power to soar’. But, for older musicians, learning new work by heart presents an ever-worsening problem. Still, fashions change. Some 60 years ago, consensus among musicians was that playing from memory was a form of arrogance, elevating the player above the score. Beethoven and Chopin deplored memorisation, and when Mendelssohn, though a brilliant memoriser, arrived at one chamber concert to find that his part was missing, he asked the organiser to put up any old book and turn the pages — notwithstanding that the work he played was his own. As the difficulties of memorisation became real to me — some days even the variation order escaped me — I began to wonder whether our modern obsession with memorisation actually undermines confidence. Certainly, it alters the professional’s playing experience, adding a dimension of tension that audiences, bored with perfect recordings, might appreciate, but performers might dread. For me, however, memory was therapy. So I kept going. About seven months in, I made a discovery. 
Repetition wasn’t just teaching me the notes of the Goldberg, it was improving my technique; and as my technique improved (though at what was still a maddeningly slow rate) occasionally, from under my fingers, there emerged something that sounded like the actual music Bach wrote. On impulse, I returned to other music — Brahms, Schubert, Debussy, even Rachmaninov. My playing was not by any professional standard good, but it was no longer a permanent technical struggle. Up to a certain level, I could actually play. What’s more, rather like a London cabbie doing ‘The Knowledge’, the constant exercising of my memory had made it fitter: procedural memory, in other words was working. What about my fear-filled adrenalin levels? Well, oddly enough, my use of the silencer actually increased. A failure then? Not quite: almost without my noticing, an entirely different struggle began. My inability to play aloud still mattered, but now I wondered if memory might serve, not as a crutch to conquer fear, but as a springboard to transform my whole experience of the piano. This question has generated a whole new set of anxieties. My technical improvements are undeniable but how far can they take me? And will they last? There’s something else, too. Instead of becoming more confident, I’m becoming more searching: I notice more in the music. Hours of memorising have not mastered my fear: rather, they have taught me that music cannot be learned by rote, unless you have cloth-ears. Nothing is automatic in the way that I’d imagined. In fact, my original quest was misconceived: physical familiarity with a phrase did not mean I didn’t think — far from it. Physical familiarity made me think more. So, here I sit, silencer on, my fear of playing aloud as strong as ever. But that fear is no longer all-consuming. It would be nice to play aloud. It would be wonderful to have piano lessons. However, these days when I practise Bach, I’m less conscious of my fear and more conscious of being in touch with something greater than myself. If I play with the silencer on the world will not lose out. If I play with it off, I might, in the paralysis of my fear, lose all I have gained. I think it’s what you call a no-brainer. | Katie Grant | https://aeon.co//essays/one-year-30-variations-could-i-master-my-fears | |
Mental health | Governments are now providing free psychotherapy to their citizens. Is there a limit to state-sponsored happiness? | ‘Who wants to go first?’ It’s 2001. There are eight of us sitting in a circle. I’m in a support group for sufferers of social anxiety, which meets every Thursday evening in a corner of the Royal Festival Hall. It’s the one place where it’s OK for us to discuss the panic attacks, loneliness, inhibition and low self-esteem that people with social anxiety feel. The group is a raft, and we are slowly learning to paddle. There’s no therapist on board. One of the group downloaded an audio course in Cognitive Behavioural Therapy (CBT). For 10 weeks I have been listening to the voice of Dr Thomas Richards, a cognitive therapist from Phoenix, Arizona, and following his advice. ‘Acceptance is an active experience,’ Dr Richards tells me. ‘Acceptance is an active experience,’ I repeat to myself on the bus. ‘I refuse to let my negative thoughts bully me,’ says Dr Richards. ‘Refuse to let negative thoughts bully you,’ I write in my journal. I incant his handouts for 30 minutes each day, hoping that my flatmates don’t overhear me. But it seems to work. And then, on Thursday evenings, the group gathers to share our progress, and to practise exercises such as the ‘circle of death’. Some of us are getting better; for others it is slower. We keep paddling. That support group was a lifesaver for me. I’d suffered from social anxiety and depression through my late teens and early 20s. After a few weeks of the group, I stopped having panic attacks, and over the next few years, the depression slowly departed. Life got better. And the experience triggered a fascination with the origins of CBT. In March 2007, I travelled to New York to interview the pioneer of cognitive therapy, Albert Ellis, not long before he died. He’d trained as a psychoanalyst in the 1940s, but became frustrated with how little progress his clients made. He looked around for alternative ways to approach the emotions, and went back to his first love: ancient philosophy. One line particularly resonated with him, from the Stoic philosopher Epictetus: ‘Men are disturbed not by events, but by their opinion about events.’ Those 12 words launched the cognitive revolution in psychotherapy. That revolution is now being embraced by national governments seeking to make the lives of their citizens happier, or at least freer from depression and anxiety. Five years ago, amid intense opposition from some parts of the psychotherapeutic profession, the Labour government in the UK launched a programme called Improving Access to Psychological Therapies (IAPT). It aimed to train 6,000 new therapists in talking therapies — mainly CBT — by 2014, and to treat around one million people a year for depression and anxiety. As Nick McNulty, a therapist in the IAPT centres for Southwark and Lambeth in south London, told me: ‘It is the biggest expansion of mental health services anywhere in the world, ever.’ A similar programme is also running in Sweden. The stakes are high: for the workers of IAPT, this is a bold new experiment in bringing free therapy — and potentially a happier life — to millions of people who might otherwise never have got help. But some private therapists worry that the new service has been overhyped, and might give talking therapies a bad reputation with individuals and governments. 
Others say that the intimate, one-on-one relationship of psychotherapy is unsuited to the cold bureaucracy and number-crunching of a state-based system such as Britain’s National Health Service (NHS). And why does IAPT mainly offer CBT, while ignoring other therapeutic approaches? ‘A lot of therapists are hoping IAPT will fail,’ McNulty says. What if governments started to take happiness data as seriously as they took unemployment or inflation? One element that grates on many in the profession is the quantitative measurement of outcomes. Cognitive therapies have been interested in measurable results from the start, and governments have embraced the apparent ease with which changes in individual mood and happiness can be measured and monitored. The organisers of IAPT see themselves as introducing a new level of transparency and outcome-measurement into psychotherapy. Their opponents see this as a crude, even Orwellian, approach to the subtle nuances of an individual’s personal story. At each session of therapy, an IAPT ‘service user’ is asked to fill in a questionnaire assessing how they feel and what progress they think they’ve made since the previous session. The forms help the therapist and service user keep track of the therapy, and the results are also fed, anonymously, into a vast national data pool that is accessible by anyone through the NHS Information Centre. This ‘emotions database’ makes it possible for anyone to search online and see outcome results by disorder, by treatment, by gender, by ethnicity, and by IAPT centre. It’s psychotherapy for the age of big data. And it all started because of a chance meeting at a tea party. Lord Richard Layard made his reputation as a labour economist in the 1990s at the London School of Economics. Layard specialised in unemployment and inequality, although he’d always had an interest in depression and happiness. Perhaps this was due to the influence of his father, the anthropologist John Layard, who suffered from severe depression, attempted suicide, was analysed by Carl Jung, and eventually retrained as a Jungian psychologist. Lord Layard himself has always been more focused on hard data than on the collective unconscious: in the 1990s he became interested in a new field in economics that tried to measure individuals’ happiness and use the data to guide public policy. Layard wondered: what if governments started to take happiness data as seriously as they took unemployment or inflation? Any government that could provide a highly effective mental health service, improving the resilience and happiness of individuals, would raise the levels of well-being in the nation as much as other, more conventional, economic policies. At a tea party for new Fellows at the British Academy in 2003, Layard struck up a conversation with the man standing next to him, David Clark. ‘It was a fortuitous meeting,’ Layard told me. Synchronicity, his father might have said. Clark was, in fact, a leading British practitioner of CBT: a professor of psychology at King’s College London and the director of the Centre for Anxiety Disorders and Trauma at the Maudsley Hospital, who had helped to set up a CBT trauma centre in Omagh in Northern Ireland, after the Real IRA bombing of that town in 1998. Clark explained to Layard that clinical and field trials of CBT showed recovery rates of around 50 per cent for depression and anxiety disorders. 
He also explained that there was very little CBT (or any other talking therapy) available via the NHS for common problems such as depression. Layard, nothing if not a doer, decided he wanted to ‘get something done about mental health’. So, at the age of 70, that is what he did. With Clark’s help, he assembled a powerful argument for the British government to increase its spending on CBT. Depression and anxiety affect one in six of the UK population: mental health problems as a whole cause not only human suffering, but also an estimated £105 billion each year in lost productivity and incapacity benefits. Yet less than one per cent of the NHS budget was being spent on talking therapies, despite recommendations from the government’s own National Institute for Health and Care Excellence (NICE) to increase it. Layard and Clark recommended tripling this budget, and training 6,000 new therapists. Ellis revived the Greeks’ idea that philosophy can teach us to, in Socrates’ phrase, ‘take care of our souls’ Layard and Clark’s recommendations were accepted by the re-elected Labour government in 2005 and put into action under Clark’s direction. In a radical move for the NHS, it allowed people to come to the service without first going through a GP. For mild cases of depression and anxiety, people would be treated by ‘psychological well-being practitioners’, who had a year’s training in CBT, and who would provide ‘psycho-education’ and guided self-help, often over the phone or online. For more severe cases, people were encouraged to ‘step up’ to more intensive face-to-face therapy for a longer period of time, with a fully trained therapist. Finally, IAPT centres would offer only government-endorsed, evidence-based therapies, which meant mainly CBT, and would measure outcomes at every therapy session, making this data available online, so that both patients and politicians could see the results. The programme had a lot of enemies from birth, particularly among private therapists, for whom the new free service was a profound threat to their livelihood (just as many doctors had opposed the creation of the NHS in the early post-war period). Some were understandably indignant that NICE recommended only CBT for depression and anxiety, while ignoring many other therapeutic approaches, such as psychoanalysis. The British psychoanalyst and writer Darian Leader was vociferous in his criticism of the programme. CBT, he wrote in The Guardian in September 2008, was a shallow quick fix. It represented the triumph of the free market, although Leader also suggested that IAPT was reminiscent of Mao’s cultural revolution, in which the masses were forcibly indoctrinated in ‘right thinking’. Leader particularly disliked CBT’s emphasis on quantifiable evidence: ‘In today’s outcome-obsessed society,’ he wrote, ‘people must become countable, quantifiable, transparent’. Only the intimate, long-term therapeutic relationship envisioned by Sigmund Freud and his followers could offer genuine help to the neurotic. When I interviewed Leader for Psychologies magazine in September 2011, he warned that, as soon as therapists tried to meet political well-being targets, they would be simply ‘imparting the values of the state in the counselling room’. In fact, the philosophical origins of CBT lie neither in Maoism nor in neoliberal capitalism but, as we have seen, in the ancient Greek philosophy of Stoicism. The Stoics believed that our emotions are inextricably linked to our beliefs, values and interpretations of the world. 
We can heal ourselves of emotional disturbances by learning to examine our unconscious beliefs and interpretations and to ask if they are accurate or wise. If not, we can choose to think differently, and our feelings will eventually follow. However, we are forgetful creatures, so we have to repeat our new, wise attitudes until they become habits of thinking. We also need to practise our beliefs, in real life, so that they become habits of behaviour. When Albert Ellis became frustrated with psychoanalysis in the 1950s, he adapted many of the ancients’ techniques for changing the self: repeating maxims, keeping track of your progress in a journal, the emphasis on fieldwork or behavioural therapy. He called his approach Rational Emotive Behaviour Therapy, and launched it in 1955, founding the Institute for Rational Living in New York in 1959. Single-handedly, he revived the Greeks’ idea that philosophy can teach us to, in Socrates’ phrase, ‘take care of our souls’ — the literal meaning of the word ‘psychotherapy’, to heal the soul or spirit. But Socrates and the Stoics believed that the ultimate purpose of philosophy was a religious one: to become completely free of attachments or aversions to external things, and at one with the divine. Ellis, a fervent atheist and advocate of free love, was more Epicurean. The goal of his therapy was to stop people beating themselves up so they could, as he famously put it, ‘have a fucking ball’. His iconoclastic approach made Ellis a famous and influential figure in modern psychology, but it was another psychologist, Aaron Beck, a professor at the University of Pennsylvania, who turned CBT into the all-conquering juggernaut it is today. Beck developed CBT in the early 1960s. When I interviewed him in 2007, he told me that he was ‘also influenced by the Stoics, who stated that it was the meaning of events rather than the events themselves that affected people. When this was articulated by Ellis, everything clicked into place.’ While Ellis was content to be a freewheeling rebel, Beck was more of an institution-builder. He wanted to transform clinical psychotherapy from within, by building up a base of empirical evidence for cognitive therapy. In a liberal, multicultural society, the state has no business telling us what the meaning of life is Before Beck, evidence for psychotherapy mainly consisted of therapists’ case studies. The reputation of psychoanalysis, for example, was built on a handful of canonical case studies written up by Freud, such as ‘the Wolfman’, ‘Dora’, and ‘Anna O’. The problem with that approach was that the evidence was anecdotal, non-replicable, and relied strongly on the therapist’s own account of a patient’s progress. A therapist might exaggerate the success of a treatment, as Freud arguably did in the foundational case of Anna O in 1895. Beck’s radical innovation was to develop a questionnaire that asked patients how they felt on a four-point scale. In 1961, he created the Beck Depression Inventory, a 21-question survey that measured a person’s beliefs and emotional state through questions such as: 0 I do not feel like a failure. 1 I feel I have failed more than the average person. 2 As I look back on my life, all I can see is a lot of failures. 3 I feel I am a complete failure as a person. By measuring the intensity of a person’s negative beliefs and feelings, Beck discovered a way to quantify emotions and turn them into data. 
Using the Inventory, or BDI, he could quantify how a person felt before a course of CBT, and how he felt after it. According to the BDI, after 10 to 20 weeks of CBT, around half of people with depression no longer met the diagnostic criteria for major depressive disorder. And, crucially, this result was replicable in randomised controlled trials by other therapists. CBT showed similar recovery rates for anxiety disorders such as social anxiety and post-traumatic stress disorder. Beck launched the era of ‘evidence-based therapy’. However, in doing so, he made some drastic alterations to the ancient philosophy that inspired both him and Ellis. He pruned out anything that was not scientifically measurable — including any mention of the Divine or the Logos, virtue or vice, the good society, or our ethical obligations to other people. I once asked Beck if he agreed with Plato that certain forms of society encouraged particular emotional disorders. He replied: ‘I am loath to toss out an opinion that is not based on empirical evidence.’ There is much about which CBT is silent. It teaches you how to steer the self, but does not tell you where you should steer it to, nor what form of society might encourage us to flourish. Mass government sponsorship of CBT, such as the IAPT programme in the UK, embodies an interesting moment not just in the history of psychotherapy, but in the history of philosophy too. This is an attempt to teach Stoic — or ‘Stoic-lite’ — self-governance techniques to millions of people, an exercise in adult education as much as health care. The scale of it is beyond the dreams of the ancient Stoics, teaching on the street corners of Athens. Although the early Stoics wrote political works, these were all lost in antiquity, and later Roman Stoics viewed Stoicism more as a kind of self-help for the elite. Marcus Aurelius, the Stoic emperor of Rome, was in a position to spread Stoicism to the entire empire if he so wished, but he had a pessimistic sense of the limit of politics. ‘Do not expect Plato’s ideal commonwealth,’ he wrote in Meditations. ‘[For] who can hope to alter men’s convictions; and without change of conviction what can there be but grudging subjection and feigned assent?’ Few could have foreseen that Stoicism’s therapy of the emotions could be taught by the state to the masses. The Scottish philosopher David Hume wrote in his essay ‘The Sceptic’ (1742) that the majority of humanity is ‘effectually excluded from all pretensions to philosophy, and the medicine of the mind, so much boasted… The empire of philosophy extends over a few; and with regard to these, too, her authority is very weak and limited.’ The early results of the IAPT programme in the UK have been better than Hume might have predicted, with recovery rates of 42 per cent, based on at least two sessions of treatment. IAPT is now being rolled out into other areas of the welfare system: child services; the treatment of chronic physical conditions that have an emotional toll; and the treatment of unexplained conditions such as Chronic Fatigue Syndrome, or ME. An IAPT-style programme is also being piloted in Norway. Practical problems have been reported: staff are sometimes overstretched and can find themselves treating serious disorders for which they haven’t been trained. And measuring success has proven complex: should it be judged solely by ‘recovery rates’, which might encourage centres to take on only the easy cases, to keep those rates high and their funding secure? 
A deeper criticism is that CBT focuses too much on a person’s ‘thinking errors’, and not on the genuine environmental adversity she might face, such as poverty or unemployment. Stoicism, too, often ignored politics and instead emphasised the individual’s capacity to find happiness in the ‘inner citadel’ of his mind: both CBT and Stoicism can tend to be politically quietist. Stoicism might have failed to become a ‘mass philosophy’ in ancient Rome because, unlike Christianity, it never understood the importance of close, loving communities. CBT certainly can be atomistic — particularly ‘computerised CBT’, where the therapist disappears altogether and your relationship is with a computer program (though interestingly, it still seems to work for many people). Seligman believes that governments should fund the dissemination of this science of flourishing to their citizens, just as the Medici used their fortune to disseminate Platonic philosophy into Florentine culture in the 15th century Professor Clark insists that IAPT always had a social dimension: it doesn’t just focus on changing thoughts and feelings, but also includes social care such as debt counselling and employment advice. He told me ‘It’s not an either/or. We try and help as much as we can with the social adversity, and also to equip people with the mental skills to manage that adversity.’ For me, back in 2001, it was the group experience of CBT that was particularly powerful. IAPT did not initially have much emphasis on group therapy, but some centres have shifted to incorporate more group work. Kate Hannay, therapist at the South London and Maudsley IAPT, tells me: ‘Our therapy group for depression is very popular. People attend it religiously and love the socialising as much as the therapy.’ IAPT is also learning to link up with community support groups, which help to fill in the things that CBT overlooks: music, dance, sport, activism, faith. Healing the wounded psyche is one thing, but how far should a national push for ‘happiness’ go? Ancient Greek philosophers such as Plato and Aristotle believed that the state should actively guide citizens towards eudaimonia, or flourishing. They believed that eudaimonia was discoverable by experts (them), and could be taught to the citizenry or, at least, to the rich, male citizenry. CBT does not go that far. It is not an ideology or religion. It does contain implicit moral virtues (including the Socratic virtues of self-knowledge and self-discipline) but no explicit dogma about the good life or the good society. You could say that it therefore suffers from a ‘virtue gap’ or a ‘meaning deficit’, but I would argue that, in a liberal, multicultural society, the state has no business telling us what the meaning of life is. CBT might be cautious about prescribing how people should live, but Positive Psychology — which draws on the findings of CBT, combining it with the humanistic approach of psychologists such as Abraham Maslow — is less coy. It was created by Martin Seligman, a younger colleague of Aaron Beck’s at the University of Pennsylvania. Seligman wrote in 1998 that psychology should be able to help ‘to build thriving individuals, families, and communities’. Positive psychology uses CBT-style techniques not just to help people overcome emotional disorders, but to help all of us achieve greater flourishing. Seligman identifies five elements to flourishing: positive emotion; engagement or flow; relationships; meaningfulness; and achievement. 
All these, Seligman argues, can be scientifically measured using questionnaires, similar to the Beck Depression Inventory. Seligman believes that governments should fund the dissemination of this science of flourishing to their citizens, just as the Medici used their fortune to disseminate Platonic philosophy into Florentine culture in the 15th century. He has had some remarkable successes. In 2010, the US Army spent $125 million on a new ‘resilience training’ programme designed by Seligman and his team. Every American soldier must now take a battery of psychometric tests each year, to measure their emotional and spiritual fitness. Meanwhile in Europe in December 2011, the president of the European Council, Herman Van Rompuy, sent a copy of The World Book of Happiness by Leo Bormans to 200 world leaders, urging them to make Positive Psychology the cornerstone of public policy. ‘It is time,’ he declared, ‘to make this knowledge available to the man and woman in the street.’ It’s at this point that I get a little nervous. I welcome CBT’s focus on measuring outcomes: the absence of such evidence in therapy has allowed some very bad ideas to cause real harm to people’s lives. But there’s a limit to what can be measured empirically. It’s disingenuous for Positive Psychology to suggest that you can scientifically measure a person’s level of flourishing: it depends on how you define flourishing. Not all definitions of flourishing are measurable with questionnaires — how can a questionnaire measure a person’s virtue or achievements, the meaningfulness of their life, or their closeness to God? As a liberal, I support the state’s provision of CBT for two reasons: first, it leaves open the question of the meaning of life, and second, it’s effective and people ask for it. (As an aside, I do think there should be a more diverse range of evidence-based therapy available.) It’s quite another matter for the state to impose a comprehensive theory of the good life on its citizens without their consent. I am wary of governments using pseudoscience to smuggle moral paternalism in through the back door. A society in which we are all quantified according to our adherence to Positive Psychology would be a regression into the Middle Ages. This is why I think that grassroots philosophy has a crucial role to play in the nascent ‘politics of well-being’. If governments want to help us to flourish, then we should have places and courses where we can decide for ourselves what flourishing means. While psychology can measure the efficacy of a self-help technique, philosophy teaches us to consider what is the proper end or goal of those techniques. It teaches us something not easily measured by questionnaires: the capacity to think for ourselves. It’s May 2013. There are eight of us sitting in a circle. We’re at a ‘recovery college’ in south London, where people come to learn new skills and broaden their horizons. These people are ‘recovering’ in the broadest sense, from addiction, from mental illness, from homelessness, from the hard knocks of life. I’m running a workshop, explaining how CBT took ideas and techniques from ancient philosophy. We talk about Socrates, about Epictetus, about Diogenes the Cynic. We talk about how the Greek philosophers had various competing definitions of the good life, and I ask the group what their own definitions are. One girl in her early 20s, who used to be homeless, says: ‘I agree with Diogenes the Cynic: I can find inner harmony even in situations like living on the streets. 
To many people that would be a crisis, but to me, it’s a temporary setback.’ Another lady, recovering from OCD, says quietly: ‘The good life is a roof over my head, and peace within.’ The last to reply is Ronnie, an exuberant young black man in Kanye West sunglasses, who has been by far the most vocal member of the group. Ronnie earlier appeared enthusiastic about Stoic philosophy and the idea of resisting the toxic influence of conventional culture. ‘So, Ronnie,’ I ask him. ‘What’s your philosophy of the good life?’ Ronnie thinks for a bit. ‘Mad money, mad labels, and loads of women,’ he decides finally. ‘I know I’m brainwashed, I know that’s all from MTV Cribs. But I still like it. I choose to be brainwashed.’ And that is Ronnie’s right, whatever any expert says. | Jules Evans | https://aeon.co//essays/should-the-state-legislate-for-individual-happiness | |
Art | Repairing things is about more than thrift. It is about creating something bold and original | The 16th-century Japanese tea master Sen no Rikyū is said to have ignored his host’s fine Song Dynasty Chinese tea jar until the owner smashed it in despair at his indifference. After the shards had been painstakingly reassembled by the man’s friends, Rikyū declared: ‘Now, the piece is magnificent.’ So it went in old Japan: when a treasured bowl fell to the floor, one didn’t just sigh and reach for the glue. The old item was gone, but its fracture created the opportunity to make a new one. Smashed ceramics would be stuck back together with a strong adhesive made from lacquer and rice glue, the web of cracks emphasised with coloured lacquer. Sometimes the coating was mixed or sprinkled with powdered silver or gold and polished with silk so that the joins gleamed; a bowl or container repaired in this way would typically be valued more highly than the original. According to Christy Bartlett, a contemporary tea master based in San Francisco, it is this ‘gap between the vanity of pristine appearance and the fractured manifestation of mortal fate which deepens its appeal’. The mended object is special precisely because it was worth mending. The repair, like that of an old teddy bear, is a testament to the affection in which the object is held. A similar principle was at work in the boro garments of the Japanese peasant and artisan classes, stitched together from scraps of cloth at a time when nothing went to waste. In boro clothing, the mends become the object. Some garments, like the fabled ship of Theseus, might eventually be overwhelmed by patches; others were assembled from scraps at the outset. In today’s trendy Tokyo markets, the technique risks becoming a mere ethnic pose. But boro was always an aesthetic idea as much as an imposition of hardship. Although quite different in their social status, boro and the aesthetic of repaired ceramics alike draw on the Japanese tradition of wabi-sabi, a world view that acknowledges transience and imperfection. To mend a pot, one must accept whatever its fracture brings: one must aspire to mushin — literally ‘no mind’ — a state of detachment sought by both artists and warriors. As Bartlett explains in her essay ‘A Tearoom View of Mended Ceramics’ (2008): ‘Accidental fractures set in motion acts of repair that accept given circumstances and work within them to lead to an ultimately more profound appearance.’ Mended ceramics displayed their history — the pattern of fracture disclosing the specific forces and events that caused it. Indeed, earlier this year, a team of French physicists from the Aix-Marseille University demonstrated that the starlike cracks in broken glass plates capture a forensic record of the mechanics of the impact. By reassembling the pieces, that moment is preserved. The stories of how mended Japanese ceramics had been broken in the first place — like that of the jar initially spurned by Rikyū — would be perpetuated by constant retelling. In the tea ceremony these histories of the utensils provide raw materials for the stylised conversational puzzles that the host sets his guests. For years, I have been patching clothes into a kind of makeshift, barely competent boro. Trousers in particular get colonised by patches that start at the knees and at the holes poked by keys around my pockets, spreading steadily across thighs with increasing disregard for colour matching. Only when patches need patches does the recycling bin beckon. 
At first I did this as a hangover from student privation. Later it became a token of ecological sensibility. Those changing motives carried implications for my appearance: the more defiantly visible the mend, the less it risks looking like mere penny-pinching. That’s a foolishly self-conscious consideration, of course, which is why the Japanese aesthetic of repair is potentially so liberating: there is nothing defensive about it. This feels like rather a new idea in the pragmatic West. But things might be changing. Take, for example, the all-purpose mending putty called Sugru, an adhesive silicone polymer that you can hand-mould to shape and then leave overnight to set into a tough, flexible seal. As its website demonstrates, you can use Sugru for all those domestic repairs that are otherwise all but impossible, from cracked toilet seats to split shoes or the abraded insulation on your MacBook mains lead. (Doesn’t it always split where it enters the power brick? And isn’t it exorbitantly costly to replace?) Sugru was devised by Jane Ní Dhulchaointigh, an Irish design graduate at the Royal College of Art in London, working with a group of retired industrial chemists. Time magazine pronounced it a top invention of 2010, and it has since acquired an avid following of ‘hackers’ who relish its potential not just to repair off-the-shelf products, but also to modify them. It wasn’t so much that things stopped working and then got repaired, but that repair was the means by which they worked at all Sugru doesn’t do its job subtly, which is the point. You can get it in modest white, but fans tend to prefer the bright primary colours, giving their repairs maximal visibility. They present mending not as an unfortunate necessity to be carried out as quietly as possible but as an act worth celebrating. A similar attitude is found in the burgeoning world of ‘radical knitting’. Take the textiles artist Celia Pym, who darns people’s clothes as a way of ‘briefly making contact with strangers’. There are no ‘invisible mends’ here: Pym introduces bold new colours and patterns, transforming rather than merely repairing the garments. What Pym and the Sugru crew are asserting is that mending has an aesthetic as well as a practical function. They say that if you’re going to mend, you might as well do it openly and beautifully. Their approaches also reflect another of the aesthetic considerations of Japanese ceramic repairs: the notion of asobi, a kind of playful creativity introduced by the 16th-century tea master Furuta Oribe. Repairs that embody this principle tended to be more extrovert, even crude in their lively energy. When larger areas of damage had to be patched using pieces from a different broken object, one might plug the gap using fragments that have a totally different appearance, just as clothes today might be patched with exuberant contrasting colours or patterns. Of course, one can now buy new clothes patched this way — a mannered gesture, perhaps, but one anticipated in the way that Oribe would sometimes deliberately damage utensils so that they were not ‘too perfect’. This was less a Zen-like expression of impermanence than an exuberant relish of variety. Such modern fashion statements aside, repair in the West has tended to be more a matter of grumbling and making do. But occasionally the aesthetic questions have been impossible to avoid. When the painting of an Old Master starts cracking and flaking off, what is the best way to make it good? 
Should we reverently pick up the flakes of paint and surreptitiously glue them back on again? Is it honest to display a Raphael held together with PVA glue? When Renaissance paint fades or discolours, should we touch it up to retain at least a semblance of what the artist intended, or surrender to wabi-sabi? It’s safe to assume that no conservator would ever have countenanced the ‘repair’ last year of the crumbling 19th-century fresco of Jesus in Zaragoza — Ecce Homo by Elías García Martínez — by an elderly churchgoer with the artistic skills of Mr Bean. But does even a skilled ‘retouching’ risk much the same hubris? These questions are difficult because aesthetic considerations pull against concerns about authenticity. Who wants to look at a fresco if only half of it is still on the wall? Victorian conservators were rather cavalier in their solutions, often deciding it was better to have a retouched Old Master than none at all. In an age that would happily render Titian’s tones more ‘acceptable’ with muddy brown varnish, that was hardly surprising. But today’s conservators mostly recoil at the idea of painting over damage in old works, although they will permit some delicate ‘inpainting’ that fills cracks without covering any of the original paint. Cosimo Tura’s Allegorical Figure (c. 1455) in the National Gallery in London was repaired this way in the 1980s. Where damage is extensive, it is now common to apply treatments that prevent further decay but leave the existing damage visible. Such rarefied instances aside, the prejudice against repair as an embarrassing sign of poverty or thrift is surely a product of the age of consumerism. Mending clothes was once routine for every stratum of society. British aristocrats were unabashed at their elbow patches — in truth more prevention than cure, since they protected shooting jackets from wear caused by the shotgun butt. Everything got mended, and mending was a trade. What sort of trade? Highly skilled, perhaps, but manual, consigning it to a low status in a culture that has always been shaped by the ancient Greek preference for thinking over doing (this is one way in which the West differs from the East). Over the course of the 19th century, the ‘pure’ theorist gained ascendancy over the ‘applied’ scientist (or worse still, the engineer); likewise, the professional engineer could at least pull rank on the maintenance man: he was a creator and innovator, not a chap with oily rag and tools. ‘Although central to our relationship with things,’ writes the historian of technology David Edgerton, ‘maintenance and repair are matters we would rather not think about.’ Indeed, they are increasingly matters we’d rather not even do. Edgerton explains that, until the mid-20th century, repair was a permanent state of affairs, especially for expensive items such as vehicles, which ‘lived in constant interaction with a workshop’. It wasn’t so much that things stopped working and then got repaired, but that repair was the means by which they worked at all. Repair might even spawn primary manufacturing industries: many early Japanese bicycles were assembled from the spare parts manufactured to fix foreign (mostly British) models. It’s not hard to understand a certain wariness about repair: what broke once might break again, after all. But its neglect in recent times surely owes something to an underdeveloped repair aesthetic. 
Our insistence on perfect appearances, on the constant illusion of newness, applies even to our own bodies: surgical repairs are supposed to make our own wear and tear invisible, though they rarely do. Equally detrimental to a culture of mending is the ever more hermetic nature of technology. DIY fixes become impossible either physically (the unit, like your MacBook lead, is sealed) or technically (you wouldn’t know where to start). Either way, the warranty is void the moment you start tinkering. Add that to a climate in which you pay for the service or accessories rather than for the item — inks are pricier than printers, mobile phones are free when you subscribe to a network — and repair lacks feasibility, infrastructure or economic motivation. Breakers’ yards, which used to seem like places of wonder, have all but vanished; car repair has become both unfashionable and impractical. I gave up repairing computer peripherals years ago when the only person I could find to fix a printer was a crook who lacked the skills for the job but charged me the price of a new one anyway. Some feel this is going to change — whether because of austerity or increasing ecological concerns about waste and consumption. Martin Conreen, a design lecturer at Goldsmiths College in London, believes that TV cookery programmes will soon be replaced by ‘how to’ DIY shows, in which repair would surely feature heavily. The hacker culture is nurturing an underground movement of making and modifying that is merging with the crowdsourcing of fixes and bodges — for example, on websites such as ifixit.com, which offers free service manuals and advice for technical devices such as computers, cameras, vehicles and domestic appliances. Alternatively there is fixperts.org, set up by the design lecturer Daniel Charny and Sugru’s co-founder, James Carrigan, which documents fixes on film. The mending mindset has taken to the streets in the international Repair Café movement, where you can get free tools, materials, advice and assistance for mending anything from phones to jumpers. As 3D printers — which can produce one-off objects from cured resin, built up from granular ‘inks’, layer by layer — become more accessible, it might become possible to make your own spare parts rather than having to source them, often at some cost, from suppliers (only to discover your model is obsolete). And as fixing becomes cool, there’s good reason to hope it will acquire an aesthetic that owes less to a ‘make do and mend’ mentality of soldiering on, and more to mushin and asobi. | Philip Ball | https://aeon.co//essays/its-time-to-rediscover-the-art-and-beauty-of-repair | |
Architecture | Heady with beauty, in cherry tree season Japan celebrates environmental values that Western greens have lost | It is peak sakura — the short, spring season of cherry-tree flowering that so besots Japan. In Ueno Park in Tokyo, falling blossoms settle over the sleeping salarymen, recumbent on tarpaulins with traffic masks yanked down around their necks. Curtains of petals draw open and closed in the wind around huddled teenagers. The flowers land on bitumen and bare soil, sometimes drifting into the open food containers of gathered observers. A distracted child places a piece of yellow eel, festooned with sakura, into her mouth. For all their abundance, these branches are more likely to produce the candied maraschino cherries used to trim cocktails than the grocer’s fruit with which we are familiar. Japan’s urban cherries are ornamental, neutered cousins of orchard varieties. Planted to mark out places or events of note, some of the trees are thought to be more than 1,000 years old. Despite their lack of edible fruit, for two to three weeks in late March or early April, the city’s cherries become the most important trees in Japan. The nation’s climatic range triggers a staggered cherry flowering — a ‘blossom front’ that is monitored by the Japanese tourism agency as it sweeps up from Fukuoka, through Tokyo and north towards Sapporo. The cherries, and late, slow-moving plum flowers, jostle as they race around the Japanese Alps (cold snaps advantage the plum buds). When the sprays of blossom finally break open through the capital, their momentum is as forceful as floodwaters returning. The cherries’ high, white foam pours through avenues that lead to shrines, into graveyards, over public lands, and then to the brink of rivers and lakes where great canopies of petals spread above koi fish the size of corncobs. During sakura, families and other groups, from workplaces or social clubs, assemble to celebrate a tradition known as hanami: flower-viewing picnics. These picnics first flourished in the Heian period, and are featured in the 11th-century courtly novel The Tale of Genji. When the hanami are in full swing, it can seem as if Ueno Park — one of Tokyo’s most popular locations for the celebration — has become the staging ground for a hundred small re‑enactments of scenes from A Midsummer Night’s Dream. Young women in short crinolines, chalk tights and dark Rococo-era dresses dart between the trees like insistent fairies. Some carry lace umbrellas (‘Goth Lolita Wear’ occupies a whole floor in a nearby department store). Junior wage-earners, sent to stake a patch for their superiors, set to dreaming in intimately vulnerable postures; their arms and legs flung out to indicate an intention to occupy more space. Paired shoes in a row belong to no one nearby. Nihonshu (saké) turns cheeks ruddy and friends garrulous. Why talk of these common and decorative trees, when icebergs, polar bears, a dozen small types of amphibian and old-growth forests face the real possibility of extinction? As the sun sets, the mood of enchantment reveals other themes: metamorphosis and attraction. Flash cameras twig at the edge of perception all through the night. The shots later come to colonise social media — sakura, stark and fibrous against the black sky. The flowers are extended electronically, long after they have withered, dropped, and ceased to be. Gazing into the throats of flowers is surely one of the most trite, and universal, acts of environmental appreciation. 
From hand-picked posies displayed on a mantelpiece to the questing of the German Romantics for the impossible blue flower — a symbol of inspiration for the 18th-century poet Novalis — flowers induce an apparently effortless contemplation of aesthetic beauty in nature. Yet, for all the stock wonder of cherries crowned in blossom, contemporary Western environmentalism has an uneasy relationship with notions of the beautiful. Political environmentalism has learnt to take a functional view of nature, turning a blind eye to cultural values such as beauty and to aesthetic practices such as hanami. In striving to establish an impartial, globally consistent means of gauging nature’s value, local forms of environmental imagination have been relegated to the work of poets. Nature is viewed as systemic and quantifiable, neither mysterious nor resplendent. In an overburdened world, this is how we have come to debate the comparative significance of habitats and organisms: as ecosystem services. Perhaps, for environmental thought to be accepted in the political mainstream, it was always necessary to discard the drippy spiritualism of a former age and embrace the numbers game. Yet, something important has been lost in the exchange. Sidelining the environmental imagination — particularly its manifold local variations in different cultures — has narrowed the green movement. Better science, accountancy and leadership might well be essential to confronting the realities of our current environmental crises, but without developing a way to talk about the unreal aspects of our environmental relationships and our imagined attachments to natural phenomena, progress will only ever be tenuous. Ancient as it is, the Japanese tradition of sakura offers germane insight into this very contemporary problem. Detail from Kumoi Cherry Trees (Kumoi sakura) by Yoshida Hiroshi, 1926. Courtesy Wikimedia Commons. Today, many environmentalists feel squeamish about conservation campaigns that generate sympathy by using sublime imagery of ancient forests or charismatic animals such as pandas. There are good reasons for this turn. Mere aesthetics can no longer form the basis of a wide moral responsibility to the environment: beauty is too reliant on the cultural and historical values of human societies. In the past, environmentalists have failed to appreciate the ecological significance of subjectively ugly, or simply less visible, life forms, processes and biospheres. Contemplation was once the foundation of conservation, which consisted largely of the perpetual, and often expensive, stewardship of wild and beautiful places. Now we recognise that the inhabitable future is much more likely to turn on (invisible) atmospheric climate change, intimately woven into human economic behaviour and the atmospheric chemistry of the globe. Meanwhile, the ecological importance of places once deemed ugly, such as ‘foul’ swamplands, is well-established. We have come to consider unbeautiful nature deserving of attention (and potentially, of preservation) and the process has dialled up scepticism about ‘natural’ beauty more generally. The bouquet of cut flowers gives away nothing of the stooped labour required to cultivate it, nor the agrochemicals that prolong unblemished blooms, the emissions generated by transportation and refrigeration, or the species supplanted by commercial hothouses. The sublime landscape, emptied of people, does not tell of the evictions required to create a nature reserve. 
To equate the beautiful with the good is to disregard how beautiful things come to be. According to its critics, this is beauty’s insidious underside: how beauty is normalised as apolitical and even trivial, when in fact it is neither. What grounds most modern, mainstream green movements is not an investment in environmental mythos, natural majesty or animistic re-enchantment with beautiful sites and life forms. Instead, appeals are made to the ecological mindset — to the way that we think of ourselves as integrated in material systems of natural objects and habitats. The ethics of caring for a tree, then, does not rest on an idea of that tree as beautiful, singular, or symbolically charged with local meaning. Instead, a tree’s value is pegged to its functionality in a biome: as the ‘lungs’ of the world, or a dwelling place for endangered species; as a means of erosion control, a limitation on urban sprawl, carbon storage, or a watershed service. Which is to say, the popular criteria for care are scientific and quantifiable. Known as natural capital, such criteria have little to do with any loose consensus about the social history of trees (and human/tree relationships) in the places that they grow. Petitions to beauty are still made, of course, but now they invoke economics. Trees are valuable because they generate tourism or trade; connoisseurs of natural beauty are recast as financial concerns. Such people include hikers and sightseers, but also those who buy property (‘tree-changers’), and anyone who has read The Lorax by Dr Seuss to a child. Trees matter, in this context, not because they are beautiful but because certain people think they are — and those people will directly, or indirectly, contribute to the value of those trees, so long as they are not replaced by, say, car parks. Under this model, obligations to trees exist because of how other things depend upon them — the air, for instance, certain animals, the soil, or people’s economic livelihoods. Ethical regard is dependent on apprehending connectedness. So the failure to act decisively on climate change is often attributed to a failure to perceive connectedness — the connectedness of world CO2 levels to the burning of forests, for example. These material relationships are a powerful place to begin developing green consciousness. And yet: over the coming decades it will become increasingly important to link together not just physical phenomena, but to include imaginary ones as well. Trees in the present — which exist as tangible things, photosynthesising, dropping flowers, casting shade — are coupled to the visions we have of the future, its climate, and the people who will live their lives integrated into that climate. A tree’s significance is not just environmental; it is a device of the speculative imagination. En masse, the value of trees relates directly to the premium we place on our capacity to predict the future with accuracy. The fewer trees there are, the more wildly our versions of the future are likely to veer from the reality, because a world with fewer trees is likely to experience more severe climate change. Climate scientists corroborate: the hotter world will increase in volatility, until only novelists can confidently venture their versions of the time ahead. 
This kind of thinking, which starts in environmental imagination, paradoxically holds that our ethical obligations aren’t just to trees, or to an abstract standard of nature in the future — we are also obliged to imagine the societies of descendants who will populate that future. It is this potential to expand the ambit of connectedness beyond physical relationships, to what might be considered metaphysical relationships, that many Western environmentalists have relinquished. To the credulous outsider, sakura is primarily an aesthetic experience. The blossoms are utterly captivating — as is observing the glee of other people, who pose and gaze amid the trees. The stereotype is as seductive as it is condescending: here is an innocence, we might believe, now lost to Western environmentalists, delighting in the seasons and seasonal change. The enjoyment of sakura is that of being reacquainted with the thrill of meticulous observation. How often does the opportunity to dwell on simply looking at trees present itself to the ordinary urbanite? No one thinks it peculiar in Ueno Park to bend a switch of blossom and inhale its faint perfume. For hard-headed environmentalists, sakura might be seen as nothing more than the mawkish appreciation of natural beauty. Why talk of these common and decorative trees, when icebergs, polar bears, a dozen small types of amphibian and old-growth forests face the real possibility of extinction, in our lifetime? One might sooner look to ikebana, the Japanese custom of flower-arranging, or even bonsai — trees made miniature with tourniquets and topiary — to garner knowledge about how Japanese people engage with nature. There, at least, we see evidence of an active engagement with natural objects, and an attempt to use botany as a study of environmental relationships. Sakura, on the other hand, seems at once too passive and too sentimental to sustain fruitful analysis. But to interpret sakura and the conventions of hanami as mere acts of aesthetic indulgence is to miss the full significance of the season. Sakura flowering is brief. To the Japanese people, the cherry-blossom season is properly understood as a contemplation of transience in human life. To celebrate sakura is to mark the fleet-footed passage of time: from the traditions of flower-viewing begun in Japan’s classical history to the embodied time of personal histories and, inevitably, individual mortality. As Motojirō Kajii exclaims in his story ‘Under the Cherry Trees’ (1928): Dead bodies are buried under the sakura! You have to believe it. Otherwise, you couldn’t possibly explain the beauty of the sakura blossoms. I was restless, lately, because I couldn’t believe in this beauty. But I have now finally understood: dead bodies are buried under the cherry trees. Beauty, as always, is in step with the death drive. Ultimately, it is this aspect of the sakura tradition that yokes environmental imagination to concepts of deep time, past and future. For, of course, sakura is an acknowledgement that we are each other’s environment; that our communal relationships with each other, with our social pasts and futures, determine how we value the world of trees. Sakura reinforces a set of environmental values in which people — and people’s ethical regard both for one another and for other natural objects — are central. After all, human beings were always ‘natural objects’, despite environmentalism’s focus on wild spaces, creatures and trees. 
In an age of systemic climate change, the grandeur and mystery of all that we call ‘nature’ is indelibly tinged by human presence. Sakura is founded on an analogous commingling of environmental and cultural information — yet it resists hubris, melancholia or sentimentality. Leaping from branch to branch in Ueno Park are jungle crows — a heavy, large-billed Asian species of corvid — whose burgeoning population caused Tokyo’s governor in 2009 to call for crow-meat pies to become the city’s special dish. The birds shake down showers of blossom and leave the boughs bare. The season is over. Sakura’s meditations on human transience bond the Japanese people tightly to those future generations whose claim on the experience of cherry trees will be not just seasonal, but perpetual. Behind the beauteous enjoyment of these flowers lies an appreciation of environmental imagination we would do well to recoup. | Rebecca Giggs | https://aeon.co//essays/whats-wrong-with-japans-cherry-blossom-picnics | |
Information and communication | More human beings can write and type their every thought than ever before. Something to celebrate or deplore? | At some point in the past two million years, give or take half a million, the genus of great apes that would become modern humans crossed a unique threshold. Across unknowable reaches of time, they developed a communication system able to describe not only the world, but the inner lives of its speakers. They ascended — or fell, depending on your preferred metaphor — into language. The vast bulk of that story is silence. Indeed, darkness and silence are the defining norms of human history. The earliest known writing probably emerged in southern Mesopotamia around 5,000 years ago but, for most of recorded history, reading and writing remained among the most elite human activities: the province of monarchs, priests and nobles who reserved for themselves the privilege of lasting words. Mass literacy is a phenomenon of the past few centuries, and one that has reached the majority of the world’s adult population only within the past 75 years. In 1950, UNESCO estimated that 44 per cent of the people in the world aged 15 and over were illiterate; by 2012, that proportion had reduced to just 16 per cent, despite the trebling of the global population between those dates. However, while the full effects of this revolution continue to unfold, we find ourselves in the throes of another whose statistics are still more accelerated. In the past few decades, more than six billion mobile phones and two billion internet-connected computers have come into the world. As a result of this, for the first time ever we live not only in an era of mass literacy, but also — thanks to the act of typing onto screens — in one of mass participation in written culture. As a medium, electronic screens possess infinite capacities and instant interconnections, turning words into a new kind of active agent in the world. The 21st century is a truly hypertextual arena (hyper from ancient Greek meaning ‘over, beyond, overmuch, above measure’). Digital words are interconnected by active links, as they never have and never could be on the physical page. They are, however, also above measure in their supply, their distribution, and in the stories that they tell. Just look at the ways in which most of us, every day, use computers, mobile phones, websites, email and social networks. Vast volumes of mixed media surround us, from music to games and videos. Yet almost all of our online actions still begin and end with writing: text messages, status updates, typed search queries, comments and responses, screens packed with verbal exchanges and, underpinning it all, countless billions of words. This sheer quantity is in itself something new. All future histories of modern language will be written from a position of explicit and overwhelming information — a story not of darkness and silence but of data, and of the verbal outpourings of billions of lives. Where once words were written by the literate few on behalf of the many, now every phone and computer user is an author of some kind. And — separated from human voices — the tasks to which typed language, or visual language, is being put are steadily multiplying. Consider the story of one of the information age’s minor icons, the emoticon. In 1982, at Carnegie Mellon University, a group of researchers were using an online bulletin board to discuss the hypothetical fate of a drop of mercury left on the floor of an elevator if its cable snapped. 
The scenario prompted a humorous response from one participant — ‘WARNING! Because of a recent physics experiment, the leftmost elevator has been contaminated with mercury. There is also some slight fire damage’ — followed by a note from someone else that, to a casual reader who hadn’t been following the thread, this comment might seem alarming (‘yelling fire in a crowded theatre is bad news… so are jokes on day-old comments’). Participants thus began to suggest symbols that could be added to a post intended as a joke, ranging from per cent signs to ampersands and hashtags. The clear winner came from the computer scientist Scott Fahlman, who proposed a smiley face drawn with three punctuation marks to denote a joke :-). Fahlman also typed a matching sad face :-( to suggest seriousness, accompanied by the prophetic note that ‘it is probably more economical to mark things that are NOT jokes, given current trends’. Within months, dozens of smiley variants were creeping across the early internet: a kind of proto-virality that has led some to label emoticons the ‘first online meme’. What Fahlman and his colleagues had also enshrined was a central fact of online communication: in an interactive medium, consequences rebound and multiply in unforeseen ways, while miscommunication will often become the rule rather than the exception. Three decades later, we’re faced with the logical conclusion of this trend: an appeal at the High Court in London last year against the conviction of a man for a ‘message of menacing character’ on Twitter. In January 2010, Paul Chambers, 28, had tweeted his frustration at the closure of an airport near Doncaster due to snow: ‘Crap! Robin Hood Airport is closed. You’ve got a week and a bit to get your shit together, otherwise I’m blowing the airport sky high!!’ Chambers had said he never thought anyone would take his ‘silly joke’ seriously. And in his judgment on the ‘Twitter joke trial’, the Lord Chief Justice said that — despite the omission of a smiley emoticon — the tweet in question did not constitute a credible threat: ‘although it purports to address “you”, meaning those responsible for the airport, it was not sent to anyone at the airport or anyone responsible for airport security… the language and punctuation are inconsistent with the writer intending it to be or to be taken as a serious warning’. The phrase a ‘victory for common sense’ was widely used by supporters of the charged man, such as the comedians Stephen Fry and Al Murray. As the judge also noted, Twitter itself represents ‘no more and no less than conversation without speech’: an interaction as spontaneous and layered with contingent meanings as face-to-face communication, but possessing the permanence of writing and the reach of broadcasting. It’s an observation that speaks to a central contemporary fact. Our screens are in increasingly direct competition with spoken words themselves — and with traditional conceptions of our relationship with language. Who would have thought, 30 years ago, that a text message of 160 characters or fewer, sent between mobile phones, would become one of the defining communications technologies of the early 21st century; or that one of its natural successors would be a tweet some 20 characters shorter? Yet this bare textual minimum has proved to be the perfect match to an age of information suffusion: a manageable space that conceals as much as it reveals. 
Small wonder that the average American teenager now sends and receives around 3,000 text messages a month — or that, as the MIT professor Sherry Turkle reports in her book Alone Together (2011), crafting the perfect kind of flirtatious message is so serious a skill that some teens will outsource it to the most eloquent of their peers. Almost without our noticing, we weave worlds from these snapshots, until an illusion of unbroken narrative emerges It’s not just texting, of course. In Asia, so-called ‘chat apps’ are re-enacting many millions of times each day the kind of exchanges that began on bulletin boards in the 1980s, complete not only with animated emoticons but with integrated access to games, online marketplaces, and even video calls. Phone calls, though, are a degree of self-exposure too much for most everyday communications. According to the article ‘On the Death of the Phone Call’ by Clive Thompson, published in Wired magazine in 2010, ‘the average number of mobile phone calls we make is dropping every year… And our calls are getting shorter: in 2005 they averaged three minutes in length; now they’re almost half that.’ Safe behind our screens, we let type do our talking for us — and leave others to conjure our lives by reading between the lines. Yet written communication doesn’t necessarily mean safer communication. All interactions, be they spoken or written, are to some degree performative: a negotiation of roles and references. Onscreen words are a special species of self-presentation — a form of storytelling in which the very idea of ‘us’ is a fiction crafted letter by letter. Such are our linguistic gifts that a few sentences can conjure the story of a life: a status update, an email, a few text messages. Almost without our noticing, we weave worlds from these snapshots, until an illusion of unbroken narrative emerges from a handful of paragraphs. Behind this illusion lurks another layer of belief: that we can control these second selves. Yet, ironically, control is one of the first things our eloquence sacrifices. As authors and politicians have long known, the afterlife of our words belongs to the world — and what it chooses to make of them has little to do with our own assumptions. In many ways, mass articulacy is a crisis of originality. Something always implicit has become ever more starkly explicit: that words and ideas do not belong only to us, but play out within larger currents of human feeling. There is no such thing as a private language. We speak in order to be heard, we write in order to be read. But words also speak through us and, sometimes, are as much a dissolution as an assertion of our identity. In his essay ‘Writing: or, the Pattern Between People’ (1932), W H Auden touched on the paradoxical relationship between the flow of written words and their ability to satisfy those using them: ‘Since the underlying reason for writing is to bridge the gulf between one person and another, as the sense of loneliness increases, more and more books are written by more and more people, most of them with little or no talent. Forests are cut down, rivers of ink absorbed, but the lust to write is still unsatisfied.’ Onscreen, today’s torrents of pixels exceed anything Auden could have imagined. Yet the hyper-verbal loneliness he evoked feels peculiarly contemporary. Increasingly, we interweave our actions and our rolling digital accounts of ourselves: curators and narrators of our life stories, with a matching move from internal to external monologue. 
It’s a realm of elaborate shows in which status is hugely significant — and one in which articulacy itself risks turning into a game, with attention and impact (retweets, likes) held up as the supreme virtues of self-expression. Consider the particular phenomenon known as binary or ‘reversible language’ that now proliferates online. It might sound obscure, but the pairings it entails are central to most modern metrics of measured attention, influence and interconnection: to ‘like’ and to ‘unlike’, to ‘favourite’ and to ‘unfavourite’; to ‘follow’ and ‘unfollow’; to ‘friend’ and ‘unfriend’; or simply to ‘click’ or ‘unclick’ the onscreen boxes enabling all of the above. Ours is the first epoch of the articulate crowd, the smart mob: of words and deeds fused into ceaseless feedback Like the systems of organisation underpinning it, such language promises a clean and quantifiable recasting of self-expression and relationships. At every stage, both you and your audience have precise access to a measure of reception: the number of likes a link has received, the number of followers endorsing a tweeter, the items ticked or unticked to populate your profile with a galaxy of preferences. What’s on offer is a kind of perpetual present, in which everything can always be exactly the way you want it to be (provided you feel one of two ways). Everything can be undone instantly and effortlessly, then done again at will, while the machinery itself can be shut down, logged off or ignored. Like the author oscillating between Ctrl-Y (redo) and Ctrl-Z (undo) on a keyboard, a hundred indecisions, visions and revisions are permitted — if desired — and all will remain unseen. There is no need, ever, for any conversation to end. Even the most ephemeral online act leaves its mark. Data only accumulates. Little that is online is ever forgotten or erased, while the business of search and social recommendation funnels our words into a perpetual popularity contest. Every act of selection and interconnection is another reinforcement. If you can’t find something online, it’s often because you lack the right words. And there’s a deliciously circular logic to all this, whereby what’s ‘right’ means only what displays the best search results — just as what you yourself are ‘like’ is defined by the boxes you’ve ticked. It’s a grand game with the most glittering prizes of all at stake: connection, recognition, self-expression, discovery. The internet’s countless servers and services are the perfect riposte to history: an eternally unfinished collaboration, pooling the words of many millions; a final refuge from darkness. There’s much to celebrate in this profligate democracy, and its overthrow of articulate monopolies. The self-dramatising ingenuity behind even three letters such as ‘LOL’ is a testament to our capacity for making the most constricted verbal arenas our own, while to watch events unfold through the fractal lens of social media is a unique contemporary privilege. Ours is the first epoch of the articulate crowd, the smart mob: of words and deeds fused into ceaseless feedback. Yet language is a bewitchment that can overturn itself — and can, like all our creations, convince us there is nothing beyond it. In an era when the gulf between words and world has never been easier to overlook, it’s essential to keep alive a sense of ourselves as distinct from the cascade of self-expression; to push back against the torrents of articulacy flowing past and through us. 
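The ‘reversible language’ described above is, in effect, a small reversible data structure: every action has an inverse, every reversal can itself be reversed, and yet each step is counted. The following is only an illustrative sketch in Python — not any platform’s actual code, and every name in it (ReversibleProfile, like, undo and so on) is invented for the example:

```python
# Purely illustrative toy model of 'reversible language' online.
# Every action can be undone and redone, yet each one still leaves a
# countable trace in the history - nothing is ever really erased.

class ReversibleProfile:
    def __init__(self):
        self.likes = set()      # current state: what is 'liked' right now
        self.history = []       # every action ever taken, kept forever
        self.undo_stack = []    # actions that would undo recent choices (Ctrl-Z)
        self.redo_stack = []    # undone actions that can be redone (Ctrl-Y)

    def _apply(self, action, item):
        if action == 'like':
            self.likes.add(item)
        else:                   # 'unlike'
            self.likes.discard(item)
        self.history.append((action, item))

    def like(self, item):
        self._apply('like', item)
        self.undo_stack.append(('unlike', item))
        self.redo_stack.clear()

    def unlike(self, item):
        self._apply('unlike', item)
        self.undo_stack.append(('like', item))
        self.redo_stack.clear()

    def undo(self):
        if self.undo_stack:
            action, item = self.undo_stack.pop()
            self._apply(action, item)
            self.redo_stack.append(('unlike' if action == 'like' else 'like', item))

    def redo(self):
        if self.redo_stack:
            action, item = self.redo_stack.pop()
            self._apply(action, item)
            self.undo_stack.append(('unlike' if action == 'like' else 'like', item))


profile = ReversibleProfile()
profile.like('essay on attention')
profile.undo()                  # the 'like' vanishes from the profile...
profile.redo()                  # ...and can be restored at will
print(profile.likes)            # {'essay on attention'}
print(len(profile.history))     # 3 - every reversal was itself recorded
```

The point of the toy is the essay’s own: the ‘perpetual present’ in which everything can be undone still accumulates a permanent, countable history.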
For the philosopher John Gray, writing in The Silence of Animals (2013), the struggle with words and meanings is sometimes simply a distraction: ‘Philosophers will say that humans can never be silent because the mind is made of words. For these half-witted logicians, silence is no more than a word. To overcome language by means of language is obviously impossible. Turning within, you will find only words and images that are parts of yourself. But if you turn outside yourself — to the birds and animals and the quickly changing places where they live — you may hear something beyond words.’ Gray’s dismissal of ‘half-witted logicians’ might be a sober tonic, yet it’s something I find extraordinarily hopeful — an exit from the despairing circularity that expects our creations either to damn or to save us. If we cannot speak ourselves into being, we cannot speak ourselves out of being either. We are, in another fine philosophical phrase, condemned to be free. And this freedom is not contingent on eloquence, no matter how desperately we might wish that words alone could negotiate the world on our behalf. | Tom Chatfield | https://aeon.co//essays/the-world-is-awash-with-more-text-than-ever-before | |
Consciousness and altered states | The peculiar vividness of the world becomes clear when we slow down and attend, learning to see all things anew | It starts with the slightly awkward heave — leg up and over the seat, feet locating the stirrups — and the indrawn breath that says ‘Let’s go.’ This is a new discipline for me, this stationary bike, and I make sure to pace myself. I tip from side to side, easily and rhythmically, with a hint of a pulse, my movements mechanical at first, each slight shift of the vista in front of me tied to the downstroke of my foot on the pedal. After a while it becomes mildly hypnotic, not that I recognise this, though at some point I do register that time has blurred, that two or more minutes have clicked off on the digital counter without my noticing — I’ve been too caught up in whatever is piping through the wire in my ear, or gotten completely fixated on something I’m looking at through one or the other of the two windows. And what do I see out there? Not much. Everything. Looking is oddly different on the stationary bike. Before I sat on this machine, before the business with the hip, I walked. All the time, miles every day, and it was like I had my looking with me on a leash. That was why I walked, a big part of it anyway. I loved the feeling of the moving eye. The neighbourhood streets were mostly always the same, so I used to pretend my gaze was a lens fixed on a rolling cart, a camera dolly. I would try to walk as evenly as I could so that I could film everything I was passing. And this, for some reason, allowed me to see it differently, put things into a new perspective. It’s similar to that other game I like to play. Make a box shape with both hands using thumb and index fingers. Look through, click. There in the little box — or the walking Steadicam — is what you normally see, along with the idea of seeing what you normally see. Which makes it completely different. And this, I’m finding, is what happens when I get myself up on the seat and start to pedal. How to think about this? It has to do with a certain boredom, a basic sameness endured twice a day for 20 minutes. I have the two upstairs windows, one peering down on the street below, a few spindly trees, a utility pole with wires, the visible parts of other people’s houses. The other window faces our neighbours’ house, into their bedroom window, through which I can see the slightly illuminated rectangle of the far window and, through this, the blurry shape of the next house. A clear line of sight. I get no privileged glimpses of domesticity, though the bedroom is being used by our neighbours’ grown daughter. Sometimes when I ride I can see her shadowy shape cross through the light. What might she be doing there, I wonder? There is so much time to work up hypotheses when you are spinning pedals round and round, waiting for the time to be up. The sameness, yes. The sameness of the outer view, and then the sameness of what’s right here in front of me. I’ve put the bike in my son’s bedroom, in front of his desk. He’s away now for college and the room is just as he left it. That’s why I’ve put the bike here — to break the spell of that. I plant myself right in the midst. Tick-tock and wobble. The noise of the pedals makes it seem like I am an engine that’s running itself, an engine driving these jogs of thinking, these stretches of looking, all this thinking about looking. Open your eyes, I tell myself. 
Bear down so hard that you forget you are looking, and then let the thing, whatever it is, come at you. I tilt this way and that. I am thinking of nothing, aware only of what feels like a rim of faint blurring all around the edges of my seeing. I don’t know how long I go like this, pedalling, listening as in a dream to the whirring of the spokes, the scratchy hiss of my jeans. But at some point I catch myself studying the tree with its bare branches reaching in toward the window, and the hedge down beside it, crusted with old snow turning purple in the evening light, and then I see how the pavement cracks and buckles just beyond. Things could not be more beautiful. How could they? What would I add or change? What could improve this desk right here in front of me, with its small pile of books, the folded-over sheet of newspaper, and that most curious oblong, that thing that looks for all the world like a dragonfly that has fixed itself there. Each point, I think, is a centre around which a world can be drawn. It’s all about attention, I decide. Attention. On the street, in the spot where the pavement dips, a puddle filled with sky. Gray, blue, perfect. How have I been sitting here all this time, looking this way and that, and not seen that glowing patch of changing light? No end to looking, I think, as the room tips lightly from side to side. The world might be, as Ludwig Wittgenstein said, everything that is the case — but the case is bigger than it was: the number of things available for our regard has increased beyond belief To pay attention, to attend. To be present, not merely in body — it is an action of the spirit. ‘Attend my words’ means incline your spirit to my words. Heed them. A sentence is a track along which heeding is drawn. A painting is a visual path that looking follows. A musical composition does the same for listening. Art is a summoning of attention. To create it requires the highest directed focus, as does experiencing it. The French philosopher Simone Weil said: ‘Absolutely unmixed attention is prayer.’ To attend, etymologically, is to ‘stretch toward’, to seek with one’s mind and senses. Paying attention is striving toward, thus presupposing a prior wanting, an expectation. We look at a work of art and hope to meet it with our looking; we already have a notion of something to be had, gotten. Reading, at those times when reading matters, we let the words condition an expectation and move toward it. Side to side, the room lightly, steadily rocks. The aperture narrows down. What catches me here sometimes, provokes me, is the smallest thing, the most neglected thing, one that would escape anyone’s general regard — mine, too, except that for some strange reason it becomes my mission to consider it, to make it the centre of my looking. Zeroing in on that unlikely shape, that zipper-pull, that dragonfly, I feel the speed and imprecision of most of my looking. Right now there is nothing else. I fix it in the centre of my vision and I direct myself at it. And having identified it, I see it. The gun-metal-coloured tab that widens out from its hinge, with its curved bordering, this — what I can still almost persuade myself is the wing of that insect — is designed to be taken between thumb and forefinger, and then the hinge, connected to the grooved attachment that accepts the two elongated zipper ends, that hinge slides either up or down, pulling the teeth from both sides together so that they mesh. 
A feat of engineering, but overlooked because so small, so common, another of the innumerable things in the world that are as nothing until, for whatever reason, the need arises. How that changes things! I can imagine looking everywhere, turning the house upside down, because it is the essential thing, the coat must be worn, and Where did I see that thing, I saw it somewhere? For that one moment it is the answer to the question; it is wanted. After which, of course, it falls back out of awareness, into its former near-oblivion. As it has to. What would our lives be if we were forever paying out such regard? We can only distribute attention as we need to, on what we deem to matter most. And what we attend to gives a picture of who we are. One person pays the closest heed to details of dress and domestic furnishing, but gives little thought to animals; another person sees nothing but. And so on. In previous times, there were fewer things to make a claim on our attentiveness. The world might be, as Ludwig Wittgenstein said, everything that is the case — but the case is bigger than it was: the number of things available for our regard has increased beyond belief. No longer are there just the primary material basics, but a whole mad universe of images and signals, figments and streams of information arriving through devices, all of which affect attention itself, altering its reach and intensity. I put myself through the identical rituals every time, getting on with the same movements, adjusting the earbuds, checking the time — what a tiresome creature I am. I even think this every time. But regularity soothes the soul, and didn’t Gustave Flaubert insist that a writer had to be regular and orderly in his life, like a bourgeois, so that he can be violent and original in his work. Yes, I think, pushing into the first rotation — violent, original. Violent. Original. And soon I am spinning along, fine as you please, once again taking up my slow scan of what’s in front of me, the two windows, the desk with its rattan-backed chair, before letting myself focus in again on the things on the desk. But — and here you can envision a non-demonstrative man’s non-demonstrative double-take — the dragonfly zipper-pull is not where it was! It’s there on the desk, but all askew, at a completely different angle. If this were a film, there would be a bowing of bass strings. I fixate: just how did the thing get from point A to point B? If no one else has been here — but then, with a pang of disappointment, I remember. The cleaners, they came yesterday, the desk was obviously dusted — and now I suddenly think of Sherlock Holmes, the stories I read one after another, what it was that so intrigued me. It was precisely this: that the solution of a case, any case, without exception, would turn on the most trivial-seeming bit of business, the merest detail. As if Arthur Conan Doyle were testing to see how much could depend on how little. One boot-heel, Holmes discovers, is slightly more worn than its counterpart; a nearly microscopic shred of a certain kind of tobacco is found on the stairway; a document — or a zipper-pull, say — has been moved from one part of the desk to another, indicating, of course, the precise irrefutable sequence of events, the exact trail, and the malefactor. But indicating also — and this is the deeper thing — that nothing, nothing, can be discounted. The action of the world maps itself exactly on its surfaces. 
If a thing doesn’t necessarily matter in itself, it might matter because of what it shows about something else. And the dragonfly zip-pull? What is it showing me, there — here — day after day? Why am I staring at this scrap of metal instead of any one of the dozen other things in my field of vision — from the little Buddha statuettes on the dresser to my left, to the books on the desk to the rattan of the chair and its particular pattern? I can’t say for sure. Might it be the shape of the thing, the fact that it looks so much like something it’s not? I am getting off-track, even as I sit, immobilised. This detail — incidental, trivial — is just a stepping stone. What really compels, of course, is consciousness, the mind’s movement through the world. I consider what the American philosopher William James called the ‘blooming, buzzing confusion’, the swirl of undisciplined awareness — from the morning’s early action of thumb and forefinger squeezing toothpaste onto the brush; to the automatic, incremental movements of measuring water for coffee; to the search through pockets for the keys to the car, and on and on. We are many things, some of them quite noble, but we are also, so very often, mired in the moment’s particulars, and so are our perceptions. And if consciousness is to be presented credibly, it must to some good extent comprise awareness of minutiae. If you want a more exalted term for this, call it ‘phenomenology’. James Joyce, Virginia Woolf, Vladimir Nabokov all planted the flag of their aesthetic here. But whatever the art, whatever the genre, the moves must be strategised. For it happens that attention paid to large subjects is usually taken right up into their thematics. We adjust our focus. Contemplating a canvas of a magnificent panorama, or an arresting portrait, is about engaging the subject — the artist is presenting it to us as important for itself. Staring at a canvas of an apple and a curl of lemon peel inevitably becomes a consideration of perception itself. And so, with the zipper. Cycling back to my earlier thought, there is a state that precedes attention, a desire or need that makes it possible. Thinking of the ways that I look at art or listen to music, I easily distinguish between the dutiful and the avid. In front of the battle scene, the mythological set-piece, I make myself pay a certain kind of attention. I take in the shapes and colours, obey the visual indicators that guide my eye from one point to another; I know to make myself mindful of the narrative, its thematic intention. I can even experience certain satisfactions, noting and feeling the balance of elements, the accuracy of execution, the expressiveness of certain gestures and features. All of this betokens one kind of attention. But I am not at attention. I do not engage out of my own inclinations so much as obey a series of basic directives, much as when I read a novel that is solidly characterised and plotted but that, for whatever reason, does not have me in its thrall. Other works — certain paintings, novels, pieces of music — activate a completely different set of responses. When I move into the vicinity of a canvas by the 17th-century Dutch artist Jacob van Ruisdael, for example, even before I have looked, when I have seen only enough in my peripheral vision to suggest that it is one of his, I experience what feels like an inclining toward; I ready myself to attend. I feel myself heightened in a Ruisdael way — which is different than a Vermeer way or a Giacometti way. 
It’s as if I dilate my pupils to absorb the particular colour tones, the marks that are his way of drawing trees, the strategies he uses to create distance in his landscapes. I am looking, moving my eye from point to point, sweeping along the width and breadth of the surface, but what I am attending to is more general, deeper, and hardly requires the verification of intensive looking. The paintings I love induce reverie. With Ruisdael, it’s easy: I draw the landscape fully around me. I suck it into myself, so that I might absent myself from whatever daylight spot I occupy in whatever gallery or museum. I am tantalised by its tones, the strokes of execution, but also by its profound pastness. Not its particular century or period, simply that it is a version of a bygone world. There is a big difference between our attempting to pay attention to something and having our attention captured — arrested — by something. That capture is what interests me Here attention meets distraction or, better yet, daydreaming. They are not the same thing. One is the special curse of our age — the self diluted and thinned to a blur by all the vying signals — while the other hearkens back to childhood, seems the very emblem of the soul’s freedom. Distraction is a shearing away from focus, a lowering of intensity, whereas daydreaming — the word itself conveys immersed intensity. Associational, intransitive: the attending mind is bathed in duration. We have no sense of the clock-face; we are fully absorbed by our thoughts, images and scenarios. Daydreaming is closer to our experience of art. ‘Absolutely unmixed attention is prayer.’ I keep coming back to this — it chafes. The more so as I don’t think of myself as a believer, even as I grant that being is a mystery beyond all reason. The word ‘prayer’ — I had to look it up — has a Proto-Indo-European origin. It is a fervent plea to God; it is an expression of helplessness, a putting of oneself before a superior force; it is an expression of thanks, of gratitude, to God or an object of worship. However the action is defined, it involves a wanting or needing. Modifying Weil, I would say that attention is not a neutral focus of awareness on some object or event, but is rather an absence looking to be addressed — it is, in most basic terms, a question looking for an answer. There is a big difference between our attempting to pay attention to something and having our attention captured — arrested — by something. That capture is what interests me. Side to side, I am making my motionless way through space, listening to music, putting myself into a rhythmic trance of a sort, and I am taking in whatever is in front of me, registering the house opposite, the trees, the street, my son’s desk with its pile of books, looking yet again at the one-time dragonfly, the zipper pull. Even with the mystery solved, I’ve stayed attuned. I’ve been given a metaphysical nudge: I have by way of my dissociation become aware of the thingness of the thing I am looking at. When it was stripped of familiar context — a dragonfly that couldn’t be — it was, how to put it, nakedly present to me. Something of that estrangement still persists. And it infects me, for as I look up from the desk, yet again taking in the windows, trees, books, they all seem different, hanging in a clearer air — not held together by me as parts of some picture or story, but separate existing things that I am next to. And I feel then — before they fall back into the familiar — that I could just keep looking and looking. 
Marcel Proust wrote somewhere that love begins with looking, and the idea is suggestive. But if that’s the case, the reverse might also be true: that true looking begins with love. There’s the quote that I used to repeat like a mantra to writing students, from Flaubert: ‘Anything becomes interesting if you look at it long enough.’ Again, the distinctions, the questions of priority. Is it that the looked-at thing becomes interesting, or that its intrinsic interest gradually emerges? Is the power in the negotiable thing or in the act of looking? If the latter, then the things of the world are already layered with significance, and looking is merely the action that discloses. The digital counter, marking time, marking distance, clicks off imperturbably, the one number going up as the other drops. I focus in, make some imprecise and speculative calculations, but soon enough I turn away, and — again — confront the room and windows and street and trees, everything fitted back into the old frame, the picture swinging lightly from side to side, the push of my breathing, the numbers just a minuscule eddy in the corner of my vision, and that soon displaced by something else, a new perturbation there — as if the sheerest wisp of a cloud had just blocked the sun, but coming from the window opposite. A shape, for an instant cutting off the light from the room’s back window. Eleanor, of course. Moving from one side of her room to the other, I’ve seen it a hundred times, but this once, who knows why, I suddenly get the view reversed. She looks up and notices me here. She pauses. I consider the optics, the relative positions of our separate windows vis-à-vis the day’s light, guessing whether she can see him quite clearly: her hulking neighbour, the man in his black T-shirt and jeans, sitting with his hands laced behind his back, tipping slightly from side to side. To be seen, to know or imagine ourselves the object of another’s attention — how that feels depends on so many things. In part on the nature of that attention — whether it is neutral, the waitress coming over to the table and smiling pleasantly with her pad in her hand; or irritated, as when we are blocking the intersection with our car and drivers on all sides start hitting their horns. But really the perspective, the vantage point of another, is so unnatural, so hard to hold. How readily it flips back, becomes again the I looking at the other, who might or might not know she is visible. And of course I’m always checking. Every time I get on my bike, usually right as I’m getting settled, fixing my earbuds, finding my pace, I take a glance into the window opposite, to see. This is not about voyeurism — though I won’t pretend that I’m above staring at some person who is unaware of being stared at. There is nothing more interesting than beholding the other — pretty much any other — in his or her native habitat of assumed privacy. But this is not like that. I’m only here in daylight hours, and though I am sometimes aware of the blurry shape that I know to be Eleanor, or maybe sometimes her mother, I never see anything distinct. But the awareness does make a difference. Even if the person is facing away, is known to me only as a smudge moving through the faint light — I still feel different than I do if there is no one in the room. 
I was lying in bed just before dawn, awake, as so often happens now — suddenly alert with the sensation of ‘This is it — this is my life!’ which usually arrives and then just vanishes, but I lay there, eyes closed, and held it I get an image in my mind. I remember being very young and being in a big European city — old streets, old buildings — with my parents, and thinking, with a child’s special pang, that if I lived in this place, here, on this street, in that great brown building, I would never again feel alone. Here I would always know myself safe, always just a few feet from other human beings. And I can still get the same feeling in certain cities, or in certain parts of cities I know. I think: ‘How could anyone living here on Commonwealth Avenue ever feel truly alone?’ I ask it even though I know that there can sometimes be no feeling lonelier than being in a room in a crowded hotel, hearing the muffled sounds of others on all sides. Eleanor’s blurry outline — there is nothing joining us, she is very likely unaware of me on the other side of two sets of windows, but the wisp of her outline affects me. I sometimes make it my focus, first just idly wondering what it is she is doing there — if she is in her window seat, what she could be reading with such absorption, assuming that she is reading, but then considering the situation more broadly: why she is living at home now, how does she fill her days (no sign of a day job) — but then, more abstractly, more existentially, who is she, what kinds of thing preoccupy this young woman whom I have watched from the time she was a baby just home from the hospital? I realise I know nothing at all about her. Nothing. I was not on my stationary bike when it came to me what I’ve really been wanting to say, though ‘came to me’ makes it sound like I’ve arrived at this — this thought or recognition — for the first time, which would not be at all true. Rather, it might be the fundamental live-with-every-day understanding of my middle age. But I know that there are insights so fundamental, so close to our core, that we walk in their vicinity seeing everything but. Not that we don’t at some level know — of course we do — but we still get a feeling of real surprise when we catch them again, and affirm to ourselves again: ‘This time I won’t forget.’ I mean attention in the larger — I want to say ultimate — sense. Attention paid to the life, to the fact of the life, to events and people, their enormous mattering — all the things that could not be more obvious when we’re brought awake, but that really do get slurred away by distraction, sometimes for long periods, so that when the feeling does come again, it seems like something that needs to be marked, sewn à la Blaise Pascal right into the lining of your coat — where you will always see it and remember. I was not on the bike when the recognition came this most recent time, though recognitions often come during these trances, when the mind is so susceptible. I was lying in bed just before dawn, awake, as so often happens now — suddenly alert with the sensation of ‘This is it — this is my life!’ which usually arrives and then just vanishes, but I lay there, eyes closed, and held it. And I knew right then that I could turn my mind to any part of my life and bring it alive. 
Anything: the water fountain at my first school, the feeling of walking with my friend in the pine woods near my house, bouncing up and down at the end of the diving board at Walnut Lake, waking in a tent on hard ground in a dew-soaked sleeping bag, knowing the weight of my newborn son when I held him up over my head. I could point my mind to anything in my life and have it — savour it there in the dark, even as I was telling myself that this must not be forgotten, that it absolutely has to be attended to, that my life will make sense only when every one of these things is known for what it was, or is. I think back on it now, holding myself straight, in purposeful motion, but not moving at all, staring in front of me as the world tips lightly from side to side. | Sven Birkerts | https://aeon.co//essays/paying-close-attention-is-an-art-and-an-act-of-the-spirit | |
Food and drink | We might deplore the practice, but posting pictures of our meals online is a way to bring everyone to the table | In February, I attended the wedding of a former colleague in Washington, DC. The reverend’s homily touched on the role that food had played in the bride and groom’s relationship. Their love, cultivated while sharing meals, reflects the role of food in the human experience. The reverend described a picture that the groom had taken of the scene where he had proposed: a spread of delicious meats, cheeses, and wine; the rolling hills of the Virginia countryside in the background. ‘It was a small feast on a hill to mark a rich moment shared together,’ said the reverend with a laugh. ‘I’m sure there’s a photo somewhere on Instagram.’ The congregation laughed and exchanged knowing nods. As far as I can tell, the practice of photographing one’s food — whether in restaurants or at family gatherings — is generally deplored. The New York Times Style section, in its dual role as avatar and caricature of urban mores, reported in January that restaurants in Manhattan were banning it. (‘It’s a disaster in terms of momentum, settling into the meal,’ said one chef. ‘It’s hard to build a memorable evening when flashes are flying every six minutes.’) Your friends tend to be annoyed if you cram your Facebook and Twitter feeds with snapshots of your latest delicacy, too. ‘You posted an Instagram-ed picture of a handful of blueberries the other day,’ wrote Katherine Markovich in McSweeney’s, sneering at the iPhone-toting hordes of amateur photographers. ‘What would your day have been without those blueberries? Would you have felt a little less connected to the earth and, ultimately, yourself?’ We laugh at the thought of a beautiful moment ruined by Instagram, but meals continue to fill our online lives. The internet is brimming with steak and fried eggs, kale and rice, ice cream and coffee. Food, of course, can be a sign of status, and documenting our every dinner might be a vehicle for self-expression: ‘Tell me what you eat,’ said Jean Anthelme Brillat-Savarin, the 19th-century French lawyer, politician and gastronome, ‘and I will tell you what you are’. But the exotic cuisines, fine wines and clever plating that we recognise today are all built on the simple act of dining together. Food is inherently social, best consumed with friends or family; even eating with strangers is better than eating alone. It is essential to our social life that we invite people to eat with us, even when we’re separated by space and time. While blood and religion are often regarded as the fundamental bonds of communal living, food has long been the engine for human society. As The New Yorker writer Adam Gopnik jokes in his book The Table Comes First (2011), civilisation is ‘mostly the story of how seeds, meats, and ways to cook them travel from place to place’. But mealtime, in its home at the table, is more than a moment for collective consumption of seeds and meats. It is the space in which our nature as social animals is fully disclosed. Food has accompanied virtually every communal ceremony since the dawn of civilisation, from the Sabbath to the solstice, the communion to the wake. Before the Industrial Revolution, subsistence drove and defined the evolution of social relations: the most basic distinction between preindustrial hunter-gatherer, pastoral, agrarian, and feudal societies is how each group collectively provided nourishment for itself. Eating together is a cultural universal. 
Eventually, the cafés, restaurants and salons of the Enlightenment helped develop a ‘republic of eating’ where strong conversation and drink became the cornerstones of modernity. So why did food colonise the internet? The dinner table was (and still is) the primary site for family, a place where parents and children, despite their disparate schedules, reaffirm their familial bonds on a daily basis. But what is the internet if not an expansion of that table? And what more natural state for a table than to be filled with food? Can a Flickr album or series of images on Vine help to package those moments of familial renewal for absent cousins and distant children? Instagram, in fact, is only the latest online home for our food fixation. The earliest internet communities for gourmands took root years before the American blogger Jorn Barger coined the term ‘web log’ in 1999, or online communities such as Open Diary, LiveJournal and Blogger opened for business. In July 1997, Jim Leff and Bob Okumura founded Chowhound, an online discussion forum about food. One of their earliest entries, according to Saveur magazine, was a call for recommendations as the two drove down I-78: ‘I am travelling to Gettysburg from NYC (the GW Bridge) on July 4th, early AM. Is there any outstanding roadside breakfast on that route that’s open 4th of July. Or any great greasy spoon in or around Harrisburg or Gettysburg?’ As social networks expanded, they became a cornucopia of food talk. Platforms such as WordPress allowed people who weren’t trained in a programming language to publish on the wider internet; in effect, they created the participatory web, where people came not only to consume but to create and share. And food filled all of them from the beginning. The common refrain about later social media conquerors such as Facebook and Twitter was always that they were ‘where people go to share what they ate for lunch’. The latest explosion of online food sharing is driven by the particularly social nature of the modern internet. Early forums such as Chowhound were blank canvases waiting to be filled with whatever their users decided. Now, social networks such as Facebook, Twitter and Instagram specifically demand details about our lives. This is their business model, of course: Facebook makes its money by selling personal data to advertisers. But it is also deeper than that: our desire to connect and share memories is what keeps these networks growing. ‘The photograph itself, even an artily manipulated one, has become so cheap and ubiquitous that it’s no longer of much value. But the experience of sharing it is, and that’s what Facebook is in the business of encouraging us to do,’ wrote The New York Times art critic Karen Rosenberg in 2012. ‘It’s no coincidence that still lifes of food are among the most-shared photos on Instagram, along with babies, puppies and sunsets.’ There are limits, as Rosenberg hints: is a photograph of food the same as sharing a meal? Is it as authentic as a physical dinner enjoyed with friends? You can’t eat an Instagram picture. But the patterns of sharing and consumption and the values they convey are no less authentic for taking place in a new digital realm. The lives we live online and off are not separate things: they influence and inform one another. The experiences we enjoy with our friends and family can be captured and relived collectively. Filtered photos of food probably won’t replace the experience of the meal itself. 
The aromas and sensations of preparation and consumption, the conversations that take place at the table — these are the yeast with which we leaven our internet bread. But in modern society, where office workers often lunch at their desks, and dining alone at a public restaurant is common enough yet regarded as unsettlingly abnormal, the internet’s foodie impulses can help to preserve the social aspect of mealtime. We laugh at our Instagrammed plates and tweets about lunch, but for family and friends separated by distance and obligations, the pixelated dishes on Skype or Google+ might be a viable alternative to the kitchen table. American traditions are already moving in that direction. Instagram reported that users uploaded around 200 photos a second from 10am to 2pm during Thanksgiving Day last year, with around 10 million images bearing some kind of food tag. That was the service’s biggest day on record. The Thanksgiving meal, an experience shared by every American family regardless of creed or colour, became just as much a focus for familial relationships online as it always had been offline. As Lee Rainie, director of the Pew Research Center’s Internet and American Life project, told The New York Times, all of the strengths and weaknesses of the American family were on full display: ‘This year, more than ever before, we will see how we get along as a national family.’ Can a Flickr album or series of images on Vine help to package those moments of familial renewal for absent cousins and distant children? Adam Gopnik was inspired to call his book The Table Comes First by the British chef Fergus Henderson, who said to him: ‘I don’t understand how a young couple can begin life by buying a sofa or a television… Don’t they know the table comes first?’ On a deep level, the type of food and manner of its preparation are secondary to the context of its consumption and the company with whom it’s shared. The table might change, but it will always be the space in which our relationships are made. | Jared Keller | https://aeon.co//essays/why-do-we-choose-to-serve-our-dinner-on-social-media | |
History of ideas | Verbal gaffes can profoundly challenge our sense of self, offering insight into our idiosyncrasies and desires | I have been interested in Freudian slips for as long as I can remember. Where I grew up, etiquette was everything. My mother spent considerable time doing ‘meals on wheels’ for the elderly, helping local disabled youngsters, and was much admired for these virtues. She never had a cross word for anyone, and always dressed immaculately. One Christmas, she took us to a party thrown by a neighbour who, local gossip had it, was envious of my mum. As the party drew to a close, my mother went up to the hostess and thanked her ‘for her hostility’. Despite my mother’s mortification, this small bungle meant something. Knowledge had leaked through the slip and we could all stop holding our breath for a second, and laugh. A similar reaction of unchecked laughter was the response when the presenter James Naughtie somewhat unfortunately renamed the politician Jeremy Hunt on BBC Radio 4 in December 2010. Naughtie spent much of the next 10 minutes in giggles, poorly masked as a cough. As is often the case, such camouflage only served to underline what was actually going on. Freudian slips often have something of the prohibited in them — a reference to a rude word or contempt. Sigmund Freud called them Fehlleistungen (literally, ‘faulty actions’) in The Psychopathology of Everyday Life (1901), though his editor favoured the term parapraxes (a minor error). For Freud, slips were almost invariably a result of an unconscious thought, wish or desire. What we want most is forbidden and therefore provokes anxiety. We make slips because a suppressed element ‘always strives to assert itself elsewhere’. Slips, like dreams, are royal roads to the unconscious: they both hide and reveal that which drives us. Despite cultural recognition, today Freud’s theories are seen as outdated and irrelevant The technique of ‘free association’ was introduced to explore these ‘errors’ of speech, memory or action. If we listen closely enough, Freud argued, ‘the accidental utterances and fancies of the patient… though striving for concealment, nevertheless intentionally betrays’ that which is suppressed. By examining the chain of associations we emphasise the extra word, the wrong word, the missing word, and ask ‘Why?’ What is being kept out of the conscious mind? Such a way of understanding the human experience has saturated the cultural world. Think of all the films — from Cruel Intentions (1999) to The Twilight Saga series — in which a geeky teenager’s clumsy awkwardness suddenly disappears after their first kiss. Scriptwriters seem to suggest that there is no longer any need to stumble, drop or fall, once repressed sexuality has been expressed. In psychoanalysis, we welcome these parapraxes: in them lies a clue to the inner world of our unconscious. Through the careful work of unpacking condensed and disguised references within slips, we can find a nexus of forgotten material and distress that can then be untangled. Despite this cultural recognition of Freudian slips, today Freud’s theories are seen as outdated and irrelevant by proponents of cognitive psychology and many in psychoanalytic circles. Cognitive psychologists argue that how we produce speech is so complicated that there are bound to be gaffes. Consider how speech occurs. We must generate the intention to relate a particular idea with a word. 
We formulate a pre-verbal message, part of which involves a serious competition between a number of words, before we select the most relevant ones. Then we consider the form. There needs to be grammar. We need to encode how words are uttered. Naturally, our brains use shortcuts, going for the quickest, most efficient solution, tending to pick words we have used before. All of this happens through super-quick, preconscious processes, or we’d go quite mad. Given the complexity of this process, things can go wrong. We might mix up parts of words: for example, ‘the self-destruct instruction’ can become ‘the self-instruct destruction’. Or we might anticipate part of a later word too early in a sentence — ‘the reading list’ becomes ‘a leading list’. Similarly, words gain meaning only within the organisation of a sentence. In this way, for cognitive psychologists, these gaffes are simply a misfiring of the shortcuts that brain-processing relies on. But popular culture suggests otherwise. Consider an episode of the American sitcom Friends (1998). At the altar, Ross is due to marry a woman who is not the woman, Rachel, who has haunted him for years. Though the woman in front of him is Emily, the name that leaves his lips is Rachel. The TV congregation, both women, and the entire watching audience know what this means instantly: that his true desire is elsewhere. His slip has the same dignity as Portia’s Freudian slip to Bassanio in The Merchant of Venice: ‘One half of me is yours, the other half yours.’ Desire leaks and insists through language. Much has been made of a recent study by Howard Shevrin, professor of psychology at the University of Michigan, which appeared to prove that the words relevant to an unconscious conflict are actively inhibited, or repressed, in anxious patients (cue headlines such as this in the Daily Mail last June: ‘Couldn’t come quick enough: Theory behind the Freudian slip is finally proven after 111 years, new research claims’). However, Freud had already predicted many of the critiques that would be offered by cognitive psychologists. He stressed how ‘favourable circumstances’ such as ‘exhaustion, circulatory disturbances and intoxication’ can make slips more likely. To identify these favourable circumstances as the cause of a slip would be like going to a police station and blaming the theft of one’s purse on the isolated part of the city one found oneself in, Freud argued. There must also be a thief. And the thief is a desire that tries to burst through. Psychology professionals do their patients a disservice if they focus on broad brushstrokes rather than the singular tapestry of a person’s life In certain psychoanalytic circles, a focus on the slipperiness of language has been eclipsed by a focus on the relational — a shift from the purely psychoanalytical to the psychodynamic. The focus now is on what type of relationship is repeated by the patient within the therapeutic relationship. A classic example from The Psychopathology of Everyday Life demonstrates this shift. Freud describes meeting a young man who bemoaned how useless his generation was. He tried to muster a famous Latin proverb to clinch his argument, but missed out the key word aliquis (meaning someone or something) and couldn’t recall it. Having accused Freud of gloating, he then requested an analysis of his slip. 
Freud instructed him to associate to the missing word, which led to the sequence: a liquid, liquefying, fluidity, fluid, relics… saint’s relics, St Simon, St Benedict, St Augustine and St Januarius. The man then identified St Januarius as both a calendar saint and one who performed the ‘miracle of the blood’. He then half-started a sentence, before cutting himself off. Freud commented on the pause, and the youngster revealed an anxiety that a certain young lady — perhaps not from the best family — might very soon have news that she had missed her period. The slip allowed the young man to make conscious a fear he had tried to repress — that he might have made this girl pregnant, and that this would bring shame upon his family. The young man began to articulate something of what bothered him for the first time. Had the young man been in a modern consulting room, the process of association might have been cut off in favour of a discussion of the young man’s transference (unconscious relationship) to Freud as an authority figure. The focus would be on a pattern of relating, as opposed to delving into his unconscious associations. A similar limitation is found in cognitive behavioural therapy (CBT), which is often employed in situations where there is a pressure to achieve the same outcomes for each patient as quickly as possible. In CBT, if a patient’s surface symptoms seem stuck, one searches for the patient’s ‘core beliefs’ about the world by getting them to reveal how they would end the sentences ‘I am…’, ‘People are…’, or ‘The world is…’ Most patients will complete the propositions with the words ‘worthless’, ‘untrustworthy’ and ‘unfair’. The problem with this formula is that the individual’s internal world risks being reduced to one not dissimilar from the bloke in the consulting room next door. Psychology professionals do their patients a disservice if they focus on broad brushstrokes rather than the detailed and singular tapestry of a person’s life — which slips help to reveal. By contrast, communications technology means that Freudian slips are increasingly unforgettable within our culture. If you Google ‘Freudian slip’, you’ll find multiple compilations of slips from politicians and celebrities. If we film celebrities for long enough, something other than the performance managed by media training, publicity agents and the celebrities’ own ideas of themselves emerges. We relish these eruptions, especially when they come from ‘the great and the good’. George H W Bush’s famous slip is a good example: ‘For seven and a half years, I’ve worked alongside President Reagan, and I’m proud to have been his partner. We’ve had triumphs. We’ve made some mistakes. We’ve had some sex — setbacks.’ Many sites include instructions to watch Bush’s chest during the replay as he ‘looks like he’s having a minor heart attack after the slip’ — an example of the glee we often find when discussing celebrity slips. Slips become great power equalisers. ‘You are not what you would have us believe you are,’ we say, laughing. Curiously, when slips are shared in cyberspace, there is nearly always a swift framing of their meaning, often with some libidinal thrill — a rush to pin down the slip to the known and certain. But this is often a way to foreclose something more enigmatic and anxiety-provoking in us. It’s why we still need a Freudian theory of the unconscious to understand the hide and seek of language. 
By putting a quick exclamation mark on an explanation of what a slip might mean, we negate the fact that slips open up questions rather than closing them. When patients come to therapy they often fear that once their story has been told — the ‘big events’ of their life — nothing will be left to say. Yet by exploring the ruptures in our language, there is always more to say, always more that is unknown. My mother’s slip signalled to us the underlying violent emotions that were foreclosed in our Stepford neighbourhood. It was a relief to hear the unconscious speak. Language, rather than being merely descriptive, is ultimately constitutive of our sense of self. If we allow them to be, our day-to-day verbal slips, mishearings and bungled actions can be a welcome clue to the mysterious, flawed, contradictory, crazed idiosyncrasies of our own character and history. They can challenge and change us. In locating a ‘something more’ inside us, we keep desire alive, rather than mortified in the illusion that we could ever be masters of ourselves and our image. | Jay Watts | https://aeon.co//essays/is-the-freudian-slip-still-a-road-to-the-unconscious | |
Earth science and climate | The Equator once marked the edge of the civilised world. If we put it at the centre, we might see our place in the heavens | Though he never actually crossed it, the Greek mathematician Pythagoras is sometimes credited with having first conceived of the Equator, calculating its location on the Earth’s sphere more than four centuries before the birth of Christ. Aristotle, who never stepped over it either and knew nothing about the landscape surrounding it, pictured the equatorial region as a land so hot that no one could survive there: the ‘Torrid Zone’. For the Greeks, the inhabited world to the north — what they called the oikumene — existed opposite an uncharted region called the antipodes. The two areas were cut off from one another by the Equator, an imaginary line often depicted as a ring of fire populated by mythical creatures. First created in the 7th century, the Christian orbis terrarum (circle of the Earth) maps, known for visual reasons as ‘T-and-O’ maps, included only the northern hemisphere. The T represented the Mediterranean ocean, which divided the Earth’s three continents — Asia, Africa, and Europe — each of which was populated by the descendants of one of Noah’s three sons. Jerusalem usually appeared at the centre, on the Earth’s navel (ombilicum mundi), while Paradise (the Garden of Eden) was drawn to the east in Asia and situated at the top portion of the map. The O was the Ocean surrounding the three continents; beyond that was another ring of fire. For the Catholic Church, the Equator marked the border of civilisation, beyond which no humans (at least, no followers of Christ) could exist. In The Divine Institutes (written between 303 and 311CE), the theologian Lactantius ridiculed the notion that there could be inhabitants in the antipodes ‘whose footsteps are higher than their heads’. Other authors scoffed at the idea of a place where the rain must fall up. In 748, Pope Zachary declared the idea that people could exist in the antipodes, on the ‘other side’ of the Christian world, heretical. This medieval argument was still rumbling on when Columbus first sailed southwest from Spain to the ‘Indies’ in 1492. Columbus, who had seen sub-Saharans in Portuguese ports in west Africa, disagreed with the Church: he claimed that the Torrid Zone was ‘not uninhabitable’. Although he never actually crossed the Equator, he did go beyond the borders of European maps when he inadvertently sailed to the Americas. To navigate, Columbus used, among others, the Imago Mundi (1410), a work of cosmography written by the 15th-century French theologian Pierre d’Ailly, which included one of the few T-and-O maps with north situated at the top. Columbus’s eventual ‘discovery’ of America stretched the horizons of the European mind. The Equator was gradually reimagined: no longer the extreme limit of humanity, a geographical hell on Earth, it became simply the middle of the Earth. An orbis terrarum (circle of the Earth) map, also known as a ‘T-and-O’ mapThe Equator cuts across the Molucca and Halmahera seas, the Karimata and Makassar straits, Lake Victoria and the Gulf of Tomini, and 14 countries in Africa, southeast Asia and the Americas. Of all these nations, only one named itself after the line: Ecuador. Not surprisingly, Ecuador’s tourist industry makes a big deal of the association. 
The Galapagos Islands might be the country’s number-one attraction, but few visitors leave without first walking the line in Ciudad Mitad del Mundo, situated high in the Andes mountains about 20 miles north of the capital, Quito. The Intiñan Museum in Ciudad Mitad del Mundo is designed to educate tourists about the wonders of Ecuador. It boasts a scale model of the Galapagos Islands, installed in a fountain, and several displays about the indigenous peoples of the Amazon, including a series of paintings that illustrate how to shrink a head. However, none of these explain why busloads of tourists flock to the place. In fact, they come to witness firsthand the ‘unique forces at play’ on the Equator, which is indicated by a red line that cuts through the middle of the museum. Tourists jump gleefully from one side to another, attempt to stand an egg on the head of a nail (they receive a certificate if they succeed), and watch amazed as water swirls down through a plughole in different directions depending on which side of the Equator the sink is placed. Although it is usually defined as an ‘imaginary’ line, the Equator is indeed marked by the occurrence of unusual phenomena — though not very precisely marked, and not by the sorts of phenomena advertised at the Intiñan. The painted stripe that wends through the museum misses the actual Equator by several metres. What ‘unique forces’ are at play, then? The velocity of the Earth’s rotation varies depending on where you stand: 1,000 mph at the Equator versus almost zero at the poles. That means that the fastest sunrises and sunsets on the planet occur on the Equator, and centrifugal and inertial forces are also much greater there. Together, they produce what is known as the Coriolis effect, which largely determines the direction of weather systems, ocean currents, the east-west path of hurricanes, and the fact that tornados spin in opposite directions on each side of the Equator (it is not enough, however, to alter the equilibrium of eggs on a nail or the spiral of a gallon of water in a sink). Centrifugal and inertial forces affect the relative motion of all objects that lift off sufficiently far from the Earth, from cannonballs to missiles. Spacecraft are launched from sites close to the Equator, such as the Space Centre in French Guiana: they are already moving faster than objects elsewhere on the Earth, and the extra velocity reduces the amount of fuel needed to enter space. Because of these same centrifugal forces, the Earth’s diameter at the Equator is approximately 27 miles (43 km) greater than from pole to pole. Instead of a sphere, our planet is shaped like an M&M (or, as New Scientist claimed in 2011, like a lumpy potato). The extra distance from the Earth’s core means that gravity is weaker at the Equator: about 0.6 per cent weaker than at the poles. And the equatorial bulge means that the Earth’s highest point, when measured by the distance from its core (rather than sea level), is not the peak of Mount Everest but that of Mount Chimborazo in Ecuador. The countries along the Equator are dotted with monuments that mark its location, including a large rock placed near a river in the Democratic Republic of Congo by Henry Morton Stanley, the Welsh-American explorer famous for supposedly uttering the insipid phrase ‘Dr Livingstone, I presume’.
But the world’s largest structure commemorating this imaginary line, a 100ft-high monolith with a five-ton globe resting on top, is in Ecuador, a mere 500ft from the Intiñan Museum. Built in 1979, the monument gave rise to a tiny, imitation Spanish colonial town. No one actually lives there: it closes at sundown and is full of gift shops, mostly selling miniature replicas of the monument. Although it is marked by a long yellow line and has the latitude 0° 0′ 0″ embossed on its side, Ecuador’s Equator monument is dedicated less to the Equator itself than to a team of French scientists who came here in the 18th century to carry out geodesic observations. Two scientific teams set out from France in the 1730s, one heading to Lapland near the North Pole and the other to the Viceroyalty of Peru, to measure the length of one degree of latitude. Being able to compare the length of a single degree of latitude at the Equator to one at the poles would help to determine the exact size of the Earth, create more accurate maps and, more importantly, would finally settle an ongoing debate. French scientists of the time were convinced that the Earth swelled at the poles, while English scientists, including Sir Isaac Newton, believed that it was the Equator that bulged. The French Geodesic Mission was the first major scientific incursion into South America. The Viceroyalty of Peru was selected because there the Equator was close to a city (Quito), and was situated between two major mountain ranges running north to south, thus offering a perfect panorama in which to carry out geodesic calculations. The team consisted of ten renowned French scientists, led by an astronomer, a mathematician and a geographer. In order to gain permission to conduct their experiments in a Spanish colony, the French scientists had to bring two Spanish naval officers, notionally with specialisations in geography, though in fact they were spies. In 1736, the team arrived in Quito, where a local scientist and mapmaker made up the numbers. The party set up camp about 10 miles north of the city, in the Andes mountains. Professional relations broke down almost from the start. The French scientists gave the Spaniards the cold shoulder. The team was poorly equipped for the altitude and weather conditions, and sickness spread quickly. Indians living in the mountains, afraid that the Europeans were dividing up the land among themselves, pulled up the stakes that the scientists were using as markers to calculate distances. Local officials accused the team of trying to steal Incan artifacts, almost running them out of town. The team’s surgeon was killed by an enraged mob in Cuenca during a running-of-the-bulls celebration, and the draftsman got sick and died. Five years before the Equatorial team could complete its work, the Polar expedition finished measuring one degree of latitude, conclusively proving that the Earth bulged at the Equator and receiving applause from the European scientific community. The French Geodesic Mission on the Equator might not have contributed much to the advancement of science, but its presence, and the scientific methods and ideas that it brought to the Americas, were to have a profound influence.
Along with the latest gadgets, the French brought the spirit of Enlightenment, a scientific worldview that would eventually lead to revolution in France and independence in the New World. For centuries, the land in which the scientists found themselves had been known as the República de Quito. After declaring independence from Spain in 1830, this newly created country chose a new name in large part inspired by its enlightened visitors, who had always referred to it as Tierra del Ecuador, the land of the Equator. Before leaving the country, the French erected a pyramid-like monument to the mission on the site where they had first drawn the line of the Equator. They neglected to include the names of their two Spanish officers, prompting the Spanish Crown to order the destruction of the French fleur-de-lis on its peak. And then the monument was left to crumble over the years. In 1936, to mark the 200th anniversary of the Geodesic Mission’s arrival, a stone monument topped by a brass globe was erected in San Antonio de Pichincha. Almost 50 years later, this monument was moved to a nearby town, replaced by the 100ft stone version that stands today, flanked by busts of all the members (Spaniards included) of the French Geodesic Mission. Both the giant monument and the museum dedicated to the Equator missed the actual line by a wide margin. Yet many pre-Hispanic constructions stand directly on it. European astronomers living far from the Equator might have calculated its location scientifically (though always with a significant margin of error), but long before that, those living in the region seem to have pinpointed the exact location without the need for scientific devices. The Incas, based far to the south in Cuzco, came north to invade the equatorial region in the late 15th century and, once they had dominated the local indigenous cultures, they set to work in the mountains building a series of pucaráes, huge circular structures made from stones. Ecuadorian and international anthropologists believe that these constructions, the most important Incan remains within the equatorial region, were military fortifications. Not everyone, however, is convinced. Cobo on the line of the Equator at the giant Quitsato sundial. Photo by Kurt Hollander. Cristóbal Cobo, a deep-voiced outdoorsman in his late 40s, used to make frequent visits from his native Quito to the mountain range 10 miles to the north to go hang-gliding. His solo flights gave him a bird’s-eye view of the area, while his use of GPS technology, Google Earth and Stellarium helped him to track the line of the Equator throughout the region. After mapping out the known indigenous constructions in the area, he began to use AutoCAD and other sophisticated 3D computer-imaging programs to project lines from Catequilla, the site of a large circular structure, out into the surrounding hills. This led to the discovery of several more archaeological sites. According to a map that Cobo created, Catequilla is the centre of a series of 13 pre-Hispanic constructions, all aligned along the principal geographical and celestial lines and thus all in perfect geometrical relation to the Equator.
A self-taught Ecuadorian astronomer, anthropologist and geographer, Cobo came to the conclusion that these circular constructions were not forts (they were too far from any urban centres to offer much protection), but evidence of a more celestial purpose. The Incas were aware of the existence of the Equator from the reports of travellers who had seen their shadows disappear during the equinoxes. More than just imperial warmongers, the Incas were also children of the Sun (their principal deity) and avid stargazers. According to Cobo, the several pucaráes located on the Equator, in line with the major celestial bodies, were most likely observatories from which to chart the stars and the movement of the Sun. The Catequilla construction is a stone wall, 1.8 metres high, forming an arc approximately 70 metres in diameter. It stands on the only elevated plain located directly on the Equator, affording it an unobstructed 360-degree view. From this unique vantage point, the southern and northern constellations and all of the most important archaeological sites in the region are visible with the naked eye. Cobo believes that the arc in Catequilla was constructed in line with the path of the Sun above the Equator: one end of the wall receives the sunrise during the winter solstice and the other end catches the sunset during the summer solstice. As an outgrowth of these ideas, Cobo recently created the Quitsato Project, ‘a multidisciplinary study in archeology and astronomy designed to realise the correct interpretation of the meaning and function of the pre-Hispanic cultural contexts that exist in the equatorial Andes’. As part of his project, Cobo has created a giant sundial, the only man-made object on the Equator that can be seen from space. It is located exactly on the line, a short drive from Cobo’s home in the Hacienda de Guachalá (which happens to be where the French Geodesic Mission stayed in 1736). The sundial’s gnomon is a giant vertical tube. Patterns of white rock radiate away from the centre, serving as calendar, clock and compass. For just a few seconds twice a year, the Sun, directly overhead at noon during the equinoxes, illuminates a mirror at the bottom of the giant tube. Although Cobo uses the latest satellite tracking devices and sophisticated computer programs to chart his maps of indigenous astro-architecture, he is wary of the European scientific methods and worldviews that have accompanied colonialism in its spread across the globe. The French Geodesic Mission, armed with what was then the latest gadgets and theories, not only failed to calculate the location of the Equator accurately: its enlightened culture based on science eventually gave rise to ever more efficient systems for exploiting the New World. In particular, Cobo has problems with the direction that mapmaking has taken. In 150 AD, Ptolemy drew the first world map with north placed firmly at the top. This orientation has become the standard one for maps everywhere. The preeminence of north derives from the use of Polaris, also known as the North Star, as the guiding light for sailors. Yet Polaris, or any other star for that matter, is not a fixed point. Because of the Sun and Moon’s gravitational attraction, the Earth actually moves like a wobbling top.
This wobble, known to astronomers as the precession of the Equator, represents a cyclical shift in the Earth’s axis of rotation. It makes the stars seem to migrate across the sky at the rate of about one degree every 72 years. This gradual shift means that Polaris will eventually cease to be viewed as the North Star, and sailors will have to orient themselves by other means. According to Cobo, the best point that we can use to orient ourselves is the Sun rising in the east above the Equator. As he points out, the very word orientation comes from the Latin oriens, which means east, or sunrise, while ‘disorient’ means losing direction, losing one’s way or, literally, losing the east. In Western culture, north is used to determine all other directions, yet the origin of the word itself comes from the Proto-Indo-European prefix ner-, which means down or under, but also left, and was commonly used as ‘left when facing the rising Sun’. Thus, in order to determine north, one needs to know the direction east. In 1569, the Flemish cartographer Gerardus Mercator, the first to mass-produce Earth and star globes, devised a system for projecting the round Earth onto a flat sheet of paper. His ‘new and augmented description of Earth corrected for the use of sailors’ made the Earth the same width at the Equator and the poles, thus distorting the size of the continents. Although Mercator created his projection (still used today in almost all world maps) for navigation purposes, his scheme led to a bloated sense of self for the northern countries, located at the top of the map, while diminishing the southern hemisphere’s sense of size and importance. The positioning of the northern above the southern hemisphere, and the distortion of their true size on most maps, has divided the globe into simplistic binary oppositions: First versus Third World; civilised versus primitive; developed versus underdeveloped countries. In fact, it would make more sense to divide the world into Aristotle’s Temperate, Torrid and Frigid zones, for it is not the southern hemisphere that has the greatest concentration of poverty, but rather the equatorial region. From the beginning, more than being purely representations of the physical world, maps have been projections of man’s sense of self-importance onto the space around him. They have often been influenced by imperial or religious interests, props to the privileged status of certain cultures. Cobo believes that many of the geopolitical, ideological and economic hierarchies that shape our vision of the world would ‘disappear’ if the globe were laid on its side and all maps were rotated 90 degrees counterclockwise, putting the east on top of the world and north with south spread out on either side of the Equator. It is true that in space, directions don’t exist. On Earth, however, east is our most universal orientation. One loses sight of the southern celestial hemisphere when facing north, and it is only by gazing east that one can see both the northern and southern constellations simultaneously as the stars pass by overhead. As our planet hurtles through space, whipping around on its axis, the Sun and the stars, time and the future, approach us from the east. There is nowhere we can better appreciate the movement of the skies, better understand our place in the universe, than when we stand on the line that wraps around the middle of the Earth and watch the heavens streaming towards us. 
Corrections, May 29, 2013: The essay previously stated that Medieval mapmakers did not know that the Earth is round. The sentence has been removed. It was also implied that wind comes from the East. This too has been altered. | Kurt Hollander | https://aeon.co//essays/why-we-should-turn-the-world-map-on-its-side | |
Ethics | You are entitled to believe what you will, but your beliefs must be subject to criticism and scrutiny just like mine | Here is a true story. A young philosophy lecturer — let us call him Shane — is charged with the task of introducing young minds to the wonders of philosophy. His course, a standard Introduction to Philosophy, contains a section on the philosophy of religion: the usual arguments-for-and-against-the-existence-of-God stuff. One of Shane’s students complains to Shane’s Dean that his cherished religious beliefs are being attacked. ‘I have a right to my beliefs,’ the student claims. Shane’s repeated interrogations of those beliefs amount to an attack on this right to believe. Shane’s institution is not a particularly enlightened one. The Dean concurs with the student, and instructs Shane to desist from teaching philosophy of religion. But what exactly does it mean to claim ‘a right to my beliefs’? It often comes up in a religious context, but can arise in others too. Shane could just as easily be teaching Marxist theory to a laissez-faire capitalist student, or imparting evidence for global warming to a global warming sceptic. Whatever the context, the claim of a right to one’s beliefs is a curious one. We might distinguish two different interpretations of this claim. First, there is the evidential one. You have an evidential right to your belief if you can provide appropriate evidence in support of it. I have, in this sense, no right to believe that the moon is made of green cheese because my belief is lacking in any supporting evidence. This sort of right can’t be what Shane’s student is asserting. After all, the arguments Shane was asking his students to explore were, precisely, evidence for and against the existence of God. When the student complained, he did so to preclude this gathering and examination of evidence. He regarded the very examination of this evidence as an attack on his right to believe — and so can hardly be talking about his evidential right to believe. Instead, the student’s assertion seems to be what we might call a moral right to believe. The student is asserting that he has a moral right to believe what he will, even if there is not sufficient evidence to establish that belief — indeed, even if the preponderance of the available evidence suggests the belief is false. This moral right to believe is a truly curious beast. We can have moral rights to different sorts of things — most obviously, to commodities (food, shelter), freedoms (of thought, expression, pursuit of happiness) and treatments (non-discrimination). But what exactly does it mean to have a moral right to any of these things? While many people claim not to understand the notion of a right, the idea is really very simple. The basic idea — courtesy of the late American philosopher Joel Feinberg — is that a moral right is a ‘valid claim’. To have a moral right to a certain commodity, freedom or treatment, is to have a valid claim to it, and against any attempt to block your access to it. If you have a moral right to, say, an education, then you have a valid claim to that education, and a valid claim against others that they do not prevent you receiving it. A claim is valid if it is implied by a true moral theory — or, if you don’t believe in that sort of thing, a moral theory that is better than its competitors. You don’t need to make, or even be able to make, the claim in question: someone else can do that for you.
A child would have a right to an education even though it cannot understand this right and so cannot claim it. Applying this analysis, we can infer that if you have a moral right to a belief then everyone else has a duty not to deprive you of this belief. A good way of depriving a person of a belief is by effectively criticising that belief: showing, for example, that it’s illogical or lacking in evidential support. Some people conclude that, if you have a moral right to a belief, everyone else has a duty not to criticise that belief. I suspect this is an increasingly common way of thinking about the right to believe. It is, however, untenable. Freedom of expression is among its more notable casualties. Suppose you have the moral right to a certain belief. It doesn’t matter what that belief is: suppose it’s the belief that God created the universe. I, similarly, have the moral right to another belief, the belief that the universe has a purely natural origin. My belief — assuming we think of God as supernatural — entails that your belief is false. So, whenever I advance or argue for my belief, and defend it in public, I am simultaneously arguing that your belief is false and should be rejected. To advance my belief is to criticise yours, and vice versa. A moral injunction against criticising the beliefs of others quickly turns into a moral injunction against advancing your own beliefs for the simple reason that beliefs are often incompatible. There can, of course, be circumstances in which expression of belief can be legitimately suppressed (eg, ‘I believe we should lynch him,’ when said to a lynch mob), but adoption of such suppression as a general consequence of the moral right to believe leads to a near-universal ban on freedom of expression. Even worse, many think that a moral right entails a duty of protection. If you have a right to something, then I should not only refrain from blocking your access to it but I also have a duty to help you, should others try to do so. In the case of the moral right to believe, it seems I would have a duty to attack my own belief in order to safeguard your right to your belief. Similarly, you would have a duty to attack your belief in order to safeguard my right to my belief. The idea is clearly incoherent. It is fairly obvious what society we would become if we understand the right to believe as entailing a duty to refrain from criticism: a group of largely uncommunicative individuals, unable to advance their own beliefs for fear of criticising the beliefs of others. It might be that this is a possible future for liberal societies. It is not, however, one to which we should aspire. From Shane’s story, we now switch to that of Wayne. Unlike Shane, Wayne is a fictional character, bearing no resemblance to any one person. Wayne has a problem. He has a tendency to espouse views that are both monumentally stupid and often deeply offensive. Nor is he shy in letting everyone know what these views are. Most people cross the street to avoid him, and so he has distinct difficulties in garnering an audience for his views. Does this mean that Wayne has any cause for complaint? Wayne suspects he does. His right to free speech, he claims, is being undermined by other people’s utter lack of interest in what he has to say.
If Wayne thinks this, he seems to be using a certain interpretation of the right to believe — that he has a moral right to his beliefs in the sense that other people have a duty to listen to, or be interested in, his beliefs. I have met people who think this. But it is highly implausible. ‘Shut up, I’m watching TV!’ might be rude, but I doubt it is a violation of someone’s right to believe. ‘Discrimination’ is a bad word these days, but on the other hand, in sifting through the possible beliefs we might hold, should we not be discriminating? In this imagined scenario, we are not dealing with a case of discrimination against Wayne — the person. It is not as if people say: ‘Oh, there’s Wayne. I’m not going to listen to what he says. He’s one of them’ — whatever ‘them’, in this case, denotes. Rather, the discrimination in question is directed at Wayne’s beliefs: ‘Oh, there’s Wayne. If I have to listen to another of his stupid beliefs again, I might just explode.’ The truth is that we are all creedists. Creedism is discrimination against beliefs. Creedism sounds a little like racism or sexism, and so people might assume it’s a bad thing. But it’s nothing like these things, and not a bad thing at all. According to racism and sexism, the properties that make a person worthy of certain commodities, freedoms or treatments reliably track certain biological properties — possession of a certain skin colour, or possession of a penis, etc. These views should be rejected on the grounds that they are straightforwardly false. Creedism is different. I doubt I would associate with Wayne, if he existed. Similarly, if your next-door neighbour is an enthusiastic supporter of Hitler’s policies vis-à-vis other races you might decide not to invite him to your dinner party. In this, you would be doing nothing wrong. Still less do you have the duty of providing him with a forum for his beliefs. Creedism concerns one right in particular — and the right is yours not theirs: the right to associate and not associate with whom you choose. There are two vital qualifications required, however. First, your discrimination against your neighbour must be directed not at who they are but what they believe. If you cannot legitimately decline to associate with someone because of who or what he is, you can certainly do so because of what he believes. If you have this right, then exercising it cannot be a violation of your neighbour’s rights. Second, there is no question of depriving your neighbour of his moral and political rights in general. His right to vote is not taken away just because he believes stupid things (although it might be if he acts on them). Nor will the right to promote his beliefs be taken from him. The right exercised is your right to free association and nothing more than that. You have the right to be completely uninterested in views that you find stupid or abhorrent. Having the right to a belief cannot be explained in terms of other people having a duty to be interested in your belief. There is no such duty. The idea of a moral right to believe came to prominence in the second half of the 19th century, in the form of a dispute between the American philosopher and psychologist William James and the English mathematician and philosopher W K Clifford. James thought that, under certain conditions, you have a moral right to believe, in the absence of supporting evidence.
If the belief concerns an option that is living (in the sense that it has genuine appeal), forced (in the sense that there are only two possible outcomes, one good the other bad) and momentous (in that the stakes involved are very high), then I have a moral right to believe. For these reasons, I might have a moral right to believe in life after death, for example. Clifford, on the other hand, denied this. Your right to believe extends only as far as the supporting evidence you have for your beliefs: the moral right to believe collapses into an evidential right to believe. It might seem that I’ve been siding with Clifford. In fact, I think we can make sense of the idea of a moral right to believe. However, this sense is unlikely to be of comfort to Wayne or Shane’s Dean. Part of Clifford’s case was based on the inseparability of belief and action. If you have stupid beliefs, then generally you will do stupid things. However, thinking of it in this way blurs our target. For now, it is not clear whether we are dealing with the moral right to believe or the moral right to act on our beliefs. Let’s consider the case of Jayne, who believes that the universe was created by a flying spaghetti monster: ‘Pastafari, praise his noodly appendage.’ Let us suppose that this belief, while strange, is otherwise harmless. Jayne is not, for example, trying to force public schools in Kansas to teach the Pastafarian theory of creation. She has no interest in converting others. She keeps her belief in Pastafari very much to herself. She won’t lie about it, but neither does she broadcast it. Suppose Jayne’s family, who are aware of and increasingly perturbed by her belief, stage an intervention, and have her forcibly lobotomised. This is a violation of Jayne’s right to autonomy. Therefore, one sense in which Jayne has the right to her belief that Pastafari created the world is that she has the right not to be disabused of this belief by way of an unchosen lobotomy. Of course, a lobotomy is a notoriously blunt instrument, and will have dire consequences for Jayne’s general level of cognitive functioning. But we can imagine more subtle options: hypnosis, brainwashing, or highly skilled and keenly targeted brain surgery that leaves her cognitive functioning intact. Nevertheless, when other people manipulate Jayne’s brain in this way — even if they think they are doing it for her benefit — her right to autonomy has been contravened. A way of understanding the moral right to a belief, therefore, is as a special instance of a more general right to autonomy: Jayne has a moral right to believe what she wants and the basis of this right is autonomy. The morality of disabusing people of their beliefs — and, in particular, whether this violates their autonomy — concerns not what you do, but the way that you do it. The late American philosopher Wilfrid Sellars drew a useful distinction between what he called the ‘space of reasons’ and the ‘space of causes’. When we try to convince Jayne to abandon her belief by appealing to things such as logic, argument and evidence, we operate within the space of reasons. We might point out facts pertaining to the fossil record, the Burgess Shale, or the Darwinian account of evolution. Such an approach might not be successful (Jayne might think that the Burgess Shale is Pastafari’s way of testing her). However, there is surely nothing morally objectionable about it — particularly if we abide by the rules of good manners and common decency.
Lobotomies, hypnosis and brainwashing all fall outside the space of reasons — belonging, instead, to the space of causes. When we operate within the space of reasons, we are basically saying to Jayne: ‘These are my reasons for not believing in Pastafari, and this is why I think they should be your reasons too.’ But the changes that happen to Jayne when we invade her brain are precisely things that happen to her rather than things she does herself. Changing Jayne’s belief in this way is a violation of her autonomy. The difference is like being persuaded to go for a run for health reasons and being tied behind a car and forced to run — essentially the difference between rational persuasion and force. Part of the explanation of Jayne’s moral right to believe is that no one has the right to take away her belief using methods that lie outside the space of reasons. She has a moral right to believe in the sense that she has the right not to be stripped of her beliefs by force. This corresponds to one component of Feinberg’s analysis of a moral right. Jayne has a moral right to believe in the sense that she has a valid claim against others not to strip her of her beliefs by force. Feinberg’s analysis also contains the idea of a claim to as well as one against. This can be incorporated into Jayne’s right to believe. She has a valid claim to her belief that Pastafari created the world in the sense that she can defend it, if she so chooses, in the public arena. However, she is restricted to using methods that belong to the space of reasons — persuasion rather than force. Murdering or brainwashing unbelievers is not part of Jayne’s moral right to defend her belief. She is entitled to advance her belief in the public arena using the same methods that her opponents are entitled to use in dissuading her of that belief. This, then, is how to understand the moral right to believe. Other people have a duty not to deprive you of your beliefs using methods that fall outside the space of reasons. Other people can use persuasion but they have a duty not to use force. You have the right to defend your belief, in the public arena, using methods that belong to the space of reasons — you can defend your belief through rational persuasion but not force. The idea of a moral right to believe is the conjunction of these two claims. There will be, of course, unresolved practical issues. Sometimes, the dividing line between rational persuasion and force is not entirely clear. No doubt many of us suffered, between the ages of five and 18 — and perhaps later — at the hands of the ‘That’s the way it is and you’d better accept it if you want to get a job’ brand of education. However, while rational persuasion and force might slide by degrees into each other, the absence of a firm distinction is not the absence of a distinction. If it were, the existence of people of average height would entail that no one is short and no one is tall. Here I am concerned with the point of principle and not practice. What the Dean should have said to Shane’s student is: ‘Yes, you have a moral right to your belief. You have the moral right not to have this belief taken from you by force, and you have the moral right to defend your belief using methods of rational persuasion.
But that is all — philosophy of religion stays on the syllabus.’ To Wayne, we should say: ‘You have the right to your beliefs in the same way that Shane does — but you have no right to expect people to listen to you, associate with you, or be in any way interested in your beliefs.’ And to Jayne we should say: ‘Don’t let anyone mess with your brain.’ | Mark Rowlands | https://aeon.co//essays/everyone-is-entitled-to-their-beliefs-if-not-to-act-on-them | |
Art | Eduard Bersudsky chose to be mute in Soviet days. Ever since, his moving musical sculptures have done the talking | You press the pedal at the base of Eduard Bersudsky’s sculpture Piper (2013). The shadow on the wall moves, the cogs begin to hum, the little bell rings, and the pair of gendered fauns flex their legs to activate the dog typist at the typewriter hammering out memos lost to history. Tip, tap, tippity-tap, its tail sways, and the muscular fauns leer. They have wolves’ heads, just as the humanoid pair animating the giant buggy face is monkey-headed. They make the face smile, his eyes move this way and that, and his pipe — crikey, there’s a bird in his pipe! — goes up and down. And are those tiny feline-ursine centurions armed with shields guarding the Piper, and what are they guarding it from? Unless, of course, this is a Jungian dream of the unconscious where beasts and innocents are deliciously free to copulate, poke fun at authority, snack on bugs, and squeak instead of talk? But these are rhetorical questions, because what matters here is the depth of your feeling. Faced by Bersudsky’s work, you’re stabbed with sorrow at the futility of the human endeavour and yet waves of belly laughter ripple through you. This magical spectacle seems, without words, to be telling you something essential, and you can’t stop pressing the pedal. It helps that you are standing inside a Gothic church deep in the Scottish Highlands, and that the church is Kilmorack Gallery — a secular shrine to the sublime, where art is God. But Bersudsky, the artist who painstakingly carved these figures and, like a magician-alchemist, animated them with a hidden mechanism, doesn’t care for such big words as God, Politics, History, or even Art. In fact, he doesn’t much care for words, full stop. Once, he stopped speaking for two years. Why? He shrugs. Some things need no explaining, and anyway, he was too busy carving fallen trees into lifelike animals for the park department of what was then called Leningrad. In those days in the 1970s, it was the only paid job in the Soviet Union available to a nonconformist artist branded an ‘enemy’ by a regime that was no one’s friend. Bursting with creative energy, the young Bersudsky carved animals for children’s playgrounds in the daytime, and worked through the night in his 12-square-metre bedroom in a communal flat. He spent his mute Soviet years building his first kinemats or kinetic sculptures, from odd bits and pieces: electrical plugs, the wheels of Singer sewing machines, factory motors he’d swap for a bottle of vodka. Blending the refuse of Soviet life with his crafted creatures, he constructed his own poetry of the absurd to straddle fantasy and satire, the lyrical and the grotesque. And because these kinemats were ideologically and aesthetically wrong until perestroika came along, only a close circle of friends and aficionados saw Bersudsky’s Hieronymus Bosch-like creations — works such as Self-Portrait With a Monkey (2002). Self-portrait with monkey. Photo by Tatyana Jakovskaya. Bersudsky’s wife and collaborator Tatyana Jakovskaya switches it on for me at the couple’s Glasgow home and workplace, the Kinetic Theatre Sharmanka. A monkey hangs by the neck from a chain attached to the testicles of a mechanical giant with a horned head. When the light comes on, the giant turns into an organ-grinder, tapping along with a prosthetic army-booted foot to the Russian song ‘Separation’.
‘Farewell, my homeland, my beloved,’ sings the bass voice, recorded by a friend of the couple who was forced into political exile in the 1970s. Just as the ever-morphing monkey-man and monkey-woman are mascots of subversion in his work, so the organ grinder is emblematic of the cyclical Bersudskian universe. It crops up in his disturbing early drawings, as well as in his later works. And it gives Sharmanka its name, which is a wink at the barrel-organ’s arrival in Russia with the popular tune ‘Charmante Catherine’. On and on the organ grinds, friendly yet sinister, childish yet sorrowful, repeating an expired tune of hope to a ghostly audience. This tongueless tune lodges in your psyche like an existential drip of futility and persistence: the musical equivalent of Samuel Beckett’s line ‘I can’t go on, I’ll go on.’ The genius of Bersudsky’s world is to tell nothing yet reveal all, through recurring dreamlike objects like the organ grinder, the little person’s only weapon against a power-mad world. Carried along by this funny-sad shadow play into a state of hypnosis, I’m reminded of the Argentine-born writer Alberto Manguel’s words: ‘True experience and true art… have this in common: they are always greater than our comprehension, even than our capabilities of comprehension.’ But there is one thing I do comprehend. We are all sad children who have forgotten how to play. We are hanging on to the broken merry-go-round of history where failure and injustice stalk us. The circus of tyranny goes round and round, and the cultural currency — be it advertising slogans, political fashions, or the diamond skull of corporate art — belongs to the powerful of the day, blithely unaware that tomorrow they too will be bug-eyed exhibits in the Museum of Broken Things. No wonder Bersudsky is mistrustful of language and speaks through his carvings and cogs. He is already making, from the scraps of today, the kinemat of tomorrow. He can’t stop making, whether snowbound in Leningrad or in spring-blossomed Glasgow. In his back room workshop, the viscera of modernity is filed in drawers and labelled: ‘springs’, ‘bells’, ‘clocks’. His work in progress is a flock of birds that buzz around a…a sort of … It chokes me up. I can’t describe it. What are you working on? I ask Eduard. Oh, just something to keep me amused. He adds, in Russian: ‘A child who plays doesn’t cry.’ And he shuffles off to make tea with sliced lemon. With her decades of experience as theatre director, Tatyana will be the one who breathes light and music into his work, together with the help of her son, Sergey Jakovsky. He was still a boy when they emigrated to Scotland 20 years ago. Eduard Bersudsky building the Millennium Clock, 1999. Photo by Tatyana Jakovskaya. The couple are indifferent to their surroundings, their material needs so simple that they leave their theatre home only when they run out of food or need to travel for a show. They inhabit a parallel dimension, and I would like to live there with them. ‘To create an alternative world — that is the artist’s true happiness,’ Tatyana says as Eduard spoons half a jar of honey into my tea. ‘Not money. We are rich because we do something we love.’ Eduard and Tatyana are the embodiment of a rare thing: the unity of life and art. Their life, like their art, is a form of humanist resistance as well as an act of love, in the spiritual sense of the word.
There is precedent: the two of them remind me of another subversive cult couple with a lasting legacy, the ‘Bonnie and Clyde of art’, Niki de Saint Phalle and Jean Tinguely, makers of fantastical sculptures. For de Saint Phalle, play was art, and so she played ‘furiously’. For Tinguely, ‘le rêve est tout’, the dream was everything. They married in 1971 but had no children together. Instead, their fertile artistic coupling lasted for the best part of the last half-century. After Tinguely’s death in 1991, Bersudsky received the gift of his Parvalux electrical motors and they found their way into the next kinemats. ‘If you want to work on your art, work on your life,’ Anton Chekhov once said. Meeting Eduard and Tatyana forces me to ask whether, for most of us living in this age of sound and fury and Twitter, there isn’t in fact a neurotic chasm between the two. And taking this further, I wonder whether the incessant banal chatter of the world drives a wedge between the sane private self and the strident public self. I think it does, I can hear it in my head. Tip, tap, tippity-tap, all those memos lost to history. When I’m writing, dancing, learning to play the accordion, fooling around with the dog, or making a salad — all forms of play — I’m happily entranced. I can’t lie or be lied to when I’m at the source of creation. When I’m out in the world, virtually or literally, my ego fanned out in desperation like a peacock in mating season, I am distracted by the glitter of worldly promise. This is when the rot of adult delusion sets in. It’s a form of insanity. How to avoid this fracture of the integral self? How to stop ourselves surrendering to the cynicism of a corrupt world, with its self-trumpeting dictators, maniacs disguised as gurus, and con men with fortunes and painted clown’s smiles? Meeting Eduard and Tatyana has shown me one way: if you have an inner sanctuary, you have less need to make spectacular outings in your emperor’s new clothes. The creative genius of Eduard Bersudsky and Tatyana Jakovskaya is incidental to their life philosophy; in other words, there is hope for all of us. In fact, to seek a realm of freedom and truth within is perhaps the only meaningful task in our lives. To see Piper in action is to be reminded of what we are: orphaned children whose only path home is to remember how to play and dream again. Go on, press the pedal again. Nobody’s looking. http://www.youtube.com/watch?v=mQSAuqSPMko | Kapka Kassabova | https://aeon.co//essays/nothing-is-happier-or-sadder-than-life-s-union-with-art | |
Ethics | Filthy and violent it may be, but life is still precious for the world’s street children. Can you look them in the eye? | Years ago, along the cacophonous roads leading from Ramses Train Station in Cairo, I came across a small girl with a bright red headscarf weaving her way in and out of the slow-moving traffic, her bare feet shrouded in a haze of exhaust fumes. I watched as she banged on the windows of vehicles, the red of her headscarf a lurching traffic light forcing buses and trucks to a sudden halt. When drivers opened their windows to shoo her away, the girl pleaded for a chance to say something, running alongside the traffic to keep up. Uncowed by dismissive hand gestures or hastily resealed windows, she visited car after car, like a bee in a field of giant flowers, looking for a chance to speak to someone. She accepted the dregs of a water bottle and the tossed remains of snacks, but it wasn’t until she made one driver laugh, then another, that she finally received two precious coins. Intrigued by the scene, I asked some Egyptian colleagues to help me talk to the girl. We found her sitting in the shade, against the pillar of a flyover, with a tiny boy, of three or four, huddled beside her. As we approached, the children stood up, ready to bolt, but we assured them we weren’t officials; we just wanted to talk. The girl was called Nadra. She was nine. As she spoke, her little brother nudged his head into the grimy folds of her dress and hid. Nadra explained that their widowed and bedridden mother spent her days in a lightless one-room shack on the outskirts of the city waiting for them to return with whatever they could find — some food, a bit of money, an item of clothing, a blanket for their shared bed mat. Each day Nadra walked to central Cairo, dragging her little brother with her, to earn what she could by telling jokes to drivers. The jokes were based on improbable scenarios involving an Italian, a Frenchman and an Egyptian, or they were tales about hapless farmers from Upper Egypt getting lost in the maze of Cairo’s streets. Sometimes they were simply puns, a play on words. Nadra made them up from snippets of conversation she’d overheard or memorised from the television screens she paused to gaze at on her way back to her mother. It seemed miraculous to me that a hungry, unschooled nine-year-old, who cared for a straggling younger sibling and a sick mother, could find it in herself to invent jokes and one-liners. But with her jokes, Nadra had found a unique way of wresting life-saving money from drivers who occasionally relented and dropped a small coin in her filthy palms, no doubt making sure not to touch her. Jokes were a weapon to shatter the indifference around her. Ever since that defining meeting in Cairo, the resilience and ingenuity of street children has interested me, and despite numerous new encounters with street children across the world, Nadra has continued to haunt me; thanks to her, I began to wonder what society might really look like from the point of view of a street child, and to think about what it took to withstand the ordeals of homelessness. I yearned to know how Nadra managed to keep her imagination undefiled by the horrors of her life. Such questions deeply affected my work with excluded children and their education. At the time I met Nadra, I was part of a project studying the conditions of children eking out a living on the streets of the Egyptian capital. 
Along with other colleagues from United Nations agencies, we interviewed a whole range of children. There were garbage-pickers and shoe-polishers, children who hawked their wares in market places, who led drivers towards spare parking places, cleaned windscreens or ran errands, chased tourists for loose change, or simply scrabbled through bins. Like street children in many countries, they described how the street was an unforgiving provider of life; how it could easily, and quickly, become the agent of death. They revealed how their waking hours were one long quest to subsist: scavenging, begging, bartering, scratching around for any means to pull through. Their eyes permanently scoured the pavements and roads for anything that might turn their lives around — a cigarette butt, a dropped banknote, a discarded piece of food, an abandoned scrap of clothing. Every breath they took had to be filled with resourcefulness. The phrase ‘street children’ is a much-used catch-all term for heterogeneous groups of children. Some live solely on the street, sleeping rough, finding shelter where best they can. Some spend their days in public spaces before returning to a family or a similar support structure in the evening. Others still live with their families on the streets. Overall figures don’t necessarily allow for these distinctions. The United Nations Children’s Fund (UNICEF) estimates there are currently more than 100 million street children worldwide — an estimate that is often quoted, with all categories of street child included. But it is those children who live solely on the street, away from a consistent adult presence of any kind, who are perhaps the most emblematic of the phenomenon. Research shows that these children leave, or are forced to flee, their homes for many reasons. Family breakdown and the death or illness of a parent are prime factors but, equally, natural disaster, conflict and abuse play their part. While escaping to the streets is often a child’s only solution, the street provides an ephemeral freedom. It becomes mother, father, school and home. Survival rates are unsurprisingly low. Once on the street, a child can quickly get sucked into a life of violence and sexual exploitation, trafficking and substance abuse. Their existence is overshadowed by the urgent need to find a safe place to sleep and shelter. Those who do survive become forever alienated from mainstream society — and all the more menacing to it as they grow older. Beyond the perpetual consternation of seeing young faces aged before their time, there is usually one detail that remains engraved in the mind after meeting a street child. That detail is often more potent than the generic attributes shared by many homeless children — premature deep wrinkles, raw eczema and psoriasis on the hands, rotting teeth, bodies stunted by malnutrition, patchy hair, lips cut with scabs, eyes dulled by substance abuse. One street boy in Mali, I remember, wore a pair of broken headphones round his neck. The end of the flex hung down to his bloated belly where his navel was buzzing with flies, a mass of wings, simmering and infected. I recall a bleeding boy of only three or four in a doorway in Maputo, the capital of Mozambique. He’d been beaten by a group of older children and was hunched over in the fetal position.
His clenched fist clutched a hunk of bread that he had valiantly refused to surrender to his assailants: dry bread saturated with blood. Then a girl in Morocco, on the edge of Zagora and with the desert behind her, who put down the tray of food she was selling and showed me how she could write her name. A billowy sleeve concealed her hand as she traced into the dirt the letters of the only word she could spell. It was as if she’d conjured it from some hidden compartment. And in Bucharest, Romania, I watched a boy in a sagging, buttonless overcoat upturn the bins outside McDonald’s. With expert skill, he flicked through the rubbish, prising open boxes, rooting out unfinished food. He chucked the remains of burgers over a wall to some waiting friends. Then — like a champion smoker attempting to accommodate 50 cigarettes at once — he rammed as many chips as he could into his mouth and sucked on them. Munkhbat changes the date on his watch. Apart from the clothes he is wearing, this is his only possession in life. Photo by Richard Wainwright. In Mongolia, subterranean societies, free from adults, have been created by street children seeking refuge from hostile strangers, and the biting cold. Many of the buildings in the central part of the capital, Ulaanbaatar, are heated, thanks to a labyrinth of pipes built in the Soviet era to carry scalding water from power stations on the edge of the city. Manholes dotted around the pavements lead to grubby cavities where makeshift platforms can be set up above the water pipes. Vagrant minors live in groups, the oldest of them often taking charge in an unstructured way. Disease is rife, but a semblance of a home is created amid the dark and fetid heat. There are cardboard boxes for beds, a few clothes hung over cracked rusting pipes, water collected in plastic bottles. Once children retreat into this world it is hard to persuade them to resurface. The purposes of mainstream society soon wither away, as does trust in fellow humans. A woman who ran an education project for deprived youths in Ulaanbaatar told me that the greatest issue she faced was persuading the street children of the city to take advantage of the washrooms she provided. The children understood that it would benefit them to have a warm and disinfecting shower, but the inconvenience of removing and then replacing the newspaper they wrapped themselves in to keep warm was too great a challenge. One child took so long to rebind himself in new layers of newspaper that he missed his only literacy lesson. Others preferred to take any food they were offered and disappear back underground, unwashed, only to be further shunned by society as a result. The police of Ulaanbaatar regularly do the rounds of the city to pick up stray children. Any they find are taken to a holding centre on the outskirts of the capital and kept there until they can be filtered off to various state institutions. Some then escape back to the street, only to be rounded up again later. In 2008, I was talking to the well-disposed policeman in charge of the holding centre when a van with the latest batch of children arrived. The back doors opened and a jumble of spindly legs, ragged clothes and knotted hair spilt out. The children were herded into the building and lined up in single file. There was a queue of up to 30, some already in their teens. Their hacking coughs and scratching spread in a ripple down the line. A toddler with bare legs and thin hair was passed from one teenager to another to hold.
Once on the floor, she walked up and down the queue of children looking for familiar faces, tugging on legs she knew. At no point did she look towards the adults present: we were unfamiliar and ghostly presences with no relevance to her life. In the neon-lit centre, the children breathed and acted as one, acutely aware of the divide between them and everyone else. The police centre, its naked lighting, lino flooring and ringing phones: this was the hostile world they had fled by heading down the manholes into the dark and diseased warmth. One by one, the children were asked to register their details with an official. Some, inevitably, had been to the centre before. Others were new to the police. Some had no idea of their birthdays, original addresses, or the names of their parents. After being washed and having their hair cut, the children reappeared in assortments of ill-fitting second-hand clothes that had been donated to the centre: Mickey Mouse T-shirts with tracksuit bottoms, shorts with wool jumpers, hoods and gloves. Some had found no tops their size and had covered themselves as best they could, their ribs sticking out from under scarves or shrunken vests. The toddler girl wandered aimlessly in purple pyjamas with fluffy slippers. In a heap in one corner lay the rags that the children had arrived in — putrid jumpers and trousers, chafed shoes with split soles. I once attended a workshop where we were asked to note down those characteristics of street children that could be built on in education programmes. Entrepreneurship, combativeness, perseverance and a critical eye were listed by several participants. Indeed, the entrepreneurship of street children is often put forward in education schemes. Many display impressive dexterity with numbers, having had experience counting small change, and bargaining with people they suspect will cheat them out of their meagre earnings. Their ability to talk and persuade can be exploited, too. Yet systems of schooling for which children require certificates, addresses and all the paraphernalia of formal education are not best suited to children of the street. More flexible and innovative approaches to learning, weaving in counselling, health care, life skills, technical and vocational training, have more chance of having an impact. The Lotus Children’s Centre in Mongolia, for example, has a constant eye on breaking the cycle of poverty while children are in its care, training them for future employment. Contact with vulnerable families is maintained and nurtured where possible. The Moroccan NGO Bayti follows a similar approach, putting an emphasis on socialising skills for re-entry into mainstream society. The Fundación Renacimiento in Mexico City offers bakery, carpentry and computing, as well as electrical engineering programmes to former street children. To qualify for these courses, the children have to pass through a staged programme that requires them to renounce drugs and violence of any kind, and to build a specific life project with counsellors and educators. Rebuilding a life is no easy undertaking for children who carry layers of pain, and many NGOs, on all continents, have come up with innovative concepts and support structures for the process. They provide essential care to those with nowhere else to turn. At a wider level, UNICEF, UNESCO and other organisations use the UN Convention on the Rights of the Child as a guiding framework, attempting to influence policies and governmental attitudes.
Central to all of this, though, is the need to recognise what street children have been through, to work with their stories and listen to their past. Listening was part of the rationale behind the campaign that I helped to establish in 2008 with fellow author Lauren Child, under the auspices of UNESCO. Called ‘My Life Is a Story’, it was designed to help street and excluded children to relate their personal histories by providing a platform for voices that had never been heard. The campaign is now over, but some powerful accounts were garnered. They revealed the almost unimaginable tortures that many street children go through and how vulnerable young lives can quickly tip into abomination. One boy from Alexandria in Egypt described how he ran away from home because his new stepfather regularly chained him up in a cemetery overnight as a punishment. Another boy in Latvia described how he would occasionally visit his addict mother in a squat where she picked the fleas off herself and placed them in a see-through plastic bag. In Mexico, Jesús was sent to live with relatives, but got on the wrong bus aged 10 and ended up at the other end of the country, penniless and homeless. Girls in Senegal and Namibia, who had been employed as underage maids with wealthier families, told how they had been ruthlessly abused and worked to the bone before they’d run away to the streets. Beyond the intricacies of life’s calamities, what emerged through these stories was how vital a sense of personal narrative is to feeling human, all the more so when that narrative is acknowledged by others. The street children who contributed to the ‘My Life Is a Story’ campaign found it hard to believe that anyone could be interested in their lives, their voices or their opinions. More often than not, street children have been stripped of any sense of themselves, of their own uniqueness and significance. Like the boy with the battered headphones in Mali, they cling to any object that might yet give them a modicum of dignity or meaning in the eyes of others. These objects can become almost talismanic: a found bracelet, a lucky plastic spoon, or a crinkled photograph are all possible proof of a continued shared humanity. When the connection to others is irredeemably lost, there is little for street children to hold onto. The common narrative they might have once shared with society simply splinters, and remembering the past becomes pointless. Substance abuse is just one way to obliterate the story they once inhabited. A young boy in a government-run orphanage in Ulan Bator; such orphanages, which previously had a bad reputation, are now clean and well run. Photo by Richard Wainwright. As part of the ‘My Life Is a Story’ campaign, I would try to raise awareness in British schools by disseminating the real-life stories of street children. I would begin my talks with a series of flashed-up images: a rough shelter on a station platform, a rubbish dump with foraging children, a boy cleaning a windscreen. It was interesting to see how children accustomed to comfort reacted. Many understood the notion of running away. Nor was it uncommon for pupils to say that they’d often thought about what it might be like to survive on the streets, to have to pull through alone, unaided.
Most children were able to imagine losing everything and it terrified them — their fears having been triggered by seeing homeless people on the streets of their own cities. When we touched on the specific challenges of survival, several pupils spontaneously announced that they would rather steal than do menial or degrading jobs. Others said they would hang out at the backs of restaurants and plead for food. Many more said they would want, above all, to find a safe place in a park or shopping centre to sleep in. All quickly realised that being on the street would, at some point, put them in conflict with the police in one way or another. Such discussions generally veered towards a kind of empathy with those who were dispossessed but, undeniably, huge gaps remained. We were still in the realm of theory. When all was said and done, street children were a different type of human for most British school kids. Their physical pains, their mental anguish, their diseases, their joys too, belonged to a universe that British children, as a rule, couldn’t really grasp. Schoolchildren are not alone in that perception. To view street children as different, and separate, is perhaps an obvious way to live with the insupportable reality of their plight. Of course, it goes without saying that street children are no different from our children, from ourselves (how we once were), but to accept this truly, and to live with it, is hard. It undermines one of the most fundamental and commonly shared foundations of all human societies: that we care for, and protect, our children. Instead, the most vulnerable and youngest are often forced into the role of outcasts. The child becomes untouchable, a pariah — alone, assailable and exposed to the abjectness of the world. On my return from Mongolia, I remember repeatedly feeling bewildered by my own young children. As I got them ready for bed, I found myself struggling to chase away insistent images of the Mongolian children in the heating vents. Yet I had to banish those very fresh memories in order just to be with my children. I became quickly frustrated by their complaints about life: ‘I don’t like my peas and mash touching’; ‘I’m not watching Robin Hood again’. These were the capricious banalities of children used to comfort, and I wanted to yell at them that they didn’t realise how lucky they were. However, my frustration hid many layers of unresolved emotion. I had hoped that my recent experiences would anaesthetise me to the pettiness of family life. Instead, I felt a real bleakness, and its slow bitterness released itself into my parenting. Mongolia had made me doubly aware of how precious childhood was but, equally, I was repulsed by my own children’s innocence. Their cleanliness, abundant food and clothing felt like an obscenity. On more than one occasion, I had to pull the car over, the engine running, and stare into space for a few seconds while the children bickered behind about their car seats touching each other, their feet kicking me in the back. A silent rage had overcome me and I didn’t know how to deal with it. On a recent trip to Istanbul, alone with my 10-year-old son, we watched a street girl, no older than eight, braving the bitter winds coming in off the Bosphorus in a threadbare T-shirt as she desperately tried to repair her broken accordion.
Each time she mended it, it would play for a few minutes before breaking again. My son was entranced by the oversized men’s shoes she wore and by her matted, feral hair. I could see him looking around at the few tourists in a bid to identify her neglectful parents. Their absence was totally unnatural and foreign to him. He wanted to be reassured that she wasn’t alone in the city, without a safety net. The girl’s shoes and accordion provided an opening onto her world, an aperture through which we could discuss her predicament. We talked about how she might have acquired her accordion and how she might have learnt to play. Who did her shoes belong to? I’m not sure I got the conversation right and, when my son didn’t understand, I found myself getting blunt (and guilty, too, aware, at the back of my mind, that a parent’s role is also to protect a child from the asperities of life). An expatriate friend in Eastern Europe told me she was once bold enough to buy hamburgers for three street boys in order to show her own offspring how charmed their life was. She’d just handed over the hamburgers when another band of vagrant children appeared out of nowhere, each demanding a burger of their own. My friend ended up buying at least 10 burgers with a string of children’s dirty faces squashed up against the window of the fast-food outlet, watching her every move. Things had been brought to a violent halt when a customer prevented her from reaching the counter again. A row ensued in which the customer told her he couldn’t bear to be put off his food by the sight of vermin any longer. It was an episode my friend’s children were unlikely to forget. In Istanbul, it was only when we were accosted by a Syrian family begging for money that my son became fully engaged with the subject of homelessness and precarious living. Here was a family who had fled the violence of civil war. He had heard about the Syrian conflict on television, and war was a feature of books he had read and films he had seen. He referred back to the girl with the accordion. Maybe, he thought, she had come from Syria? He began to construct a picture of her life, imagining her trudging through the mountains and the cold to reach Istanbul. Did she sleep in a station, he asked, or in an empty, derelict house with a sister, or a father? What did she think about as she played the accordion? I could see my son battling with concepts that were far from his life and that directly challenged many of his beliefs. The next day, we looked for the girl with the accordion. She had disappeared. But the orange cloth she had used to collect money was still on the pavement, and people were walking round it. Perhaps they were trying to work out where it had come from? An absence and a puzzling presence. And no resolution for my son. Street children are the product of many compounded flaws: our continued failure to halt endemic violence against women and children, our incapacity to stem extreme poverty, our inability to resolve conflicts, or even to deal equitably with natural disasters. Locking them up, or repressing them, won’t resolve any of these global issues. It won’t remove the fear and guilt abandoned children inspire in us either. It is easy to say that the young lives of untamed children and adolescents have nothing to do with us, or that they live in a dimension we cannot understand. And yet each street child I have met has had a unique story, and a richness of experience that holds lessons for all. 
Maybe that is why it pains me all the more that street children are ignored, barely acknowledged. They’re forced to exist in a world parallel to ours, and, out there, in their other world, in their bus stations and gutters, in the filth and vileness of their refuse dumps, they survive as best they can, with the same emotions we all share. Our greatest insult to them is to remove their humanity even further by not recognising a part of ourselves in them. We diminish ourselves by refusing to look them in the eye. | Ben Faccini | https://aeon.co//essays/the-grim-intensity-of-a-childhood-on-the-street | |
Evolution | Troglodytes who couldn’t compete, or humans with complex culture? The mystery of our nearest relatives deepens | Among the oldest human objects that unequivocally defy practical explanations are shells punctured with holes. Try as you might, it’s hard to see them as anything other than beads or pendants. Traces of ochre at sites occupied by ancient humans offer earlier hints of adornment, perhaps even of symbolism, but sceptics argue that the pigment might have been used for some practical purpose: tanning hides, for instance. In perforated seashells, however, we find the first truly compelling tokens of expressive humanity. Early humans must have reached beyond their immediate concerns in many ways that have left no traces. But they did reach for shells very early. Some 75,000 years ago in southern Africa, they gathered and pierced them, perhaps to make bracelets or necklaces. Twenty-five thousand years later and nearly 10,000km away, in what is now southern Spain, others collected naturally perforated shells. Independently and far removed from each other, humans took similar paths into expressive culture. What does that tell us about the human mind? ‘That we’re dealing with the same human mind in both instances,’ said João Zilhão, archaeology research professor at the University of Barcelona. Zilhão is the first author on a 2010 paper announcing the discovery of three perforated shells at Cueva de los Aviones in Murcia in south-east Spain. He and his colleagues also found a half-shell with traces of pigments in it, as if it had served as a paint cup or palette. But Zilhão’s assessment of the cognitive power of his ancient shell-collectors is controversial. While the southern African people were modern humans, anatomically speaking, the ones at Cueva de los Aviones must have been Neanderthals. Neanderthals were the only hominins living in Europe 50,000 years ago; anatomically modern humans did not move in from Africa until thousands of years later. After their arrival, Europe saw an explosion in cultural creativity. This poses the question of what being anatomically modern means for being human. Were the Neanderthals just stocky northerners with heavy brow ridges, or were their minds built differently as well as their bodies? Were they innately capable of doing what modern humans did? Were they, in fact, us? Bearing, as they do, on the problem of human difference, these questions are sprung with tension. Neanderthals have long been portrayed as brutish, or cognitively inferior to modern humans. Rebuttals of this picture are sometimes referred to as ‘anti-defamation’, not entirely in jest. Still, if these were all the same human minds, the ones in Europe did not explore their potential as far or as fast as the ones in Africa. The signs of modern human behaviour — beads, ochre, engraved patterns — are much stronger at Blombos Cave in South Africa than at Cueva de los Aviones in Spain. And they are not the earliest. Shell-beads were collected even earlier in northern Africa, and one found at Skhul in Israel may be more than 100,000 years old. But these are flickering signals, not a steady glow of progress. For tens of millennia, as far as the record shows, modernity came and went. The Aviones Neanderthals might have been leading lives as sophisticated as those of many of their anatomically modern contemporaries. Zilhão insists that they were.
Arguments for modern superiority are often based on anachronistic comparisons between the rich artworks produced by modern humans in Europe and the tenuous earlier hints of symbolism left by Neanderthals. To him, that’s like saying ‘people in the Middle Ages were cognitively handicapped because they could not use mobile phones’. A lot, therefore, comes down to how old various artefacts really are. And this has lately become one of the most controversial fronts in the study of the relationship between Neanderthals and modern humans. Things have turned out to be a lot older than they previously appeared. The confusion arose because radiocarbon dating is near the limits of its range in the critical period. When living tissue dies, it stops taking up carbon, so the date of death can be calculated from the proportion of naturally occurring radioactive carbon left in it. But after 30,000 years, only about three per cent of the radiocarbon is left, so even a small amount of modern carbon contamination can distort the dating results severely. Just one per cent of contamination can throw things out by as much as 7,000 years. Using improved techniques, Tom Higham and his colleagues at the Oxford Radiocarbon Accelerator Unit revised a series of key dates for the advent of anatomically modern humans in western Europe. A couple of teeth found in southern Italy in 1964, previously thought to be Neanderthal, were re-scrutinised in the most minute detail and attributed to modern humans. The new dates for the site showed that they were between 43,000 and 45,000 years old. The dates for a fragment of modern human jawbone found in 1927 at Kents Cavern, in south-west England, were pushed back more than 5,000 years, to between 41,500 and 44,200 years ago. Researchers such as Chris Stringer, research leader in human origins at the Natural History Museum in London, used to think that the Neanderthals had Europe to themselves until around 35,000 years ago. Now they are inclined to think that modern humans arrived in western Europe about 45,000 years ago, moving along the Danube from the Balkans and spreading rapidly across the region. Higham thinks the revisions are nearly complete. ‘We might push the dates back a little bit, but I doubt it very much,’ he said. ‘We’ve looked at most of the sites and the dates seem to be falling into reasonably coherent clusters and groups. The problem for us as dating specialists is at around 50,000 years ago — that’s really the limit of our technique.’ But his team did secure the 50,000-year date that puts clear prehistory between the Cueva de los Aviones shells and modern humans. This provides Zilhão with compelling support for the case that Neanderthals found their own way to symbolism or ornament — that they didn’t just copy the modern human interlopers. On the other hand, Higham and his colleagues have found evidence that threatens Zilhão’s argument that Neanderthals and modern humans living at the same time were culturally equivalent. In the Swabian Jura region of Germany, along the old line of the Danube valley, there are dramatic signs of a sudden and breathtaking cultural nascence among modern humans — flutes made from swan wingbones and mammoth tusks, ivory figurines with humanoid bodies but heads like lions, the oldest known sculpted female form: art beyond any shadow of doubt or argument. According to Higham’s new figures for one of the Jura sites, Geissenklösterle, that happened 40,000 years ago, or around 5,000 years earlier than previously thought. 
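As a rough check on the dating arithmetic described above, here is a minimal Python sketch of the exponential decay behind radiocarbon dating. It assumes the conventional carbon-14 half-life of roughly 5,730 years (a figure the essay itself does not quote); under that assumption it reproduces both the ‘about three per cent left after 30,000 years’ figure and a distortion on the order of 7,000 years when a 40,000-year-old sample is contaminated with just one per cent modern carbon.

import math

# Assumed conventional carbon-14 half-life in years (not stated in the essay).
HALF_LIFE = 5730.0

def fraction_remaining(age_years):
    """Fraction of the original radiocarbon left after a given true age."""
    return 0.5 ** (age_years / HALF_LIFE)

def apparent_age(measured_fraction):
    """Age implied by a measured radiocarbon fraction."""
    return -HALF_LIFE * math.log2(measured_fraction)

# After 30,000 years only a few per cent of the radiocarbon remains (~2.7%).
print(f"Remaining after 30,000 years: {fraction_remaining(30_000):.1%}")

# A 40,000-year-old sample contaminated with 1 per cent modern carbon:
true_fraction = fraction_remaining(40_000)
measured = 0.99 * true_fraction + 0.01  # modern carbon has fraction 1.0
shift = 40_000 - apparent_age(measured)
print(f"Apparent age with 1% contamination: {apparent_age(measured):,.0f} years")
print(f"That is roughly {shift:,.0f} years too young.")

The numbers are illustrative only; real calibration relies on measured atmospheric radiocarbon curves rather than this bare exponential.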
Zilhão doesn’t believe it. ‘There is absolutely no evidence at all for any of that stuff being made before 36,000-37,000 years ago,’ he declared. ‘That’s it. Period.’ Although he thinks the last Neanderthals persisted in Iberia until around that time, contact between them and moderns elsewhere in Europe occurred much earlier. The figurines, he insists, were made 5,000 years after modern human groups had entered that part of Europe and assimilated the Neanderthals. In the old days, says Robin Dunbar, professor of evolutionary psychology at the University of Oxford, Neanderthals were seen as ‘shambling troglodyte ape-men, with the emphasis on the ape’. Now, he observes, the divide among ‘the Neanderthal folk’ is between ‘those who are very determined to show that they are modern humans, and those who don’t think they’re quite modern human’. João Zilhão is as determined as anyone to challenge claims, which he traces back to 19th-century beliefs about racial hierarchy, that Neanderthals were innately inferior to modern humans in any way. Dunbar, however, considers that ‘they just weren’t quite in the same league as we were’, and Chris Stringer takes a similar view. ‘I think there is a difference between us and Neanderthals in behaviour,’ he said. ‘It’s not as big a gulf as we used to think, but there is a gap, and that might at least partly explain why Neanderthals died out.’ Perforated shells found at Cueva de los Aviones in Spain. Photo by PNAS. The last common ancestor of Neanderthals and modern humans probably lived more than 300,000 years ago. The two lineages had accumulated long histories of separate evolution by the time of their final encounters in western Europe. Yet the draft Neanderthal genome, sequenced from the remains of three different individuals found in Vindija Cave in Croatia, adds a twist to this story. Announced in 2010, it contained the revelation that living people of Asian and European descent share from one to four per cent of their DNA with Neanderthals. That suggests a deep (and so far mysterious) history of interbreeding, especially in Asia. One thing we do know is that Neanderthals differed anatomically from modern humans. Their build was more robust. Their skulls were elongated, compared to the globular modern form. That alone might be significant. Dunbar recently collaborated with his doctoral student, Eiluned Pearce, and Chris Stringer on a study that investigated whether the difference in skull shape reflects differences in the organisation of the brain. The characteristic features of Neanderthal skulls include large eye sockets and a bulging ‘bun’ at the back. As the main area for processing visual information is at the back of the brain, Dunbar wondered whether the two traits might be connected. Just as a larger telescope dish will gather more light than a smaller one, and so will demand more computing power to process the data, larger eyes should work better in low ambient light levels, and will require more dedicated brain capacity to analyse the information they provide. Perhaps Neanderthals had big eyes because they lived under grey northern skies. Pearce and Dunbar examined recent human skulls, finding that people whose ancestors lived at high latitudes had bigger eye sockets than people whose ancestors lived closer to the equator.
They had bigger brains too, but only to go with their bigger eyes: Dunbar hastened to say that ‘It’s got nothing to do with their smartness at all.’ Then again, Neanderthals had brains the same size as their anatomically modern contemporaries, so if they adapted to the dim north by increasing the proportion of brainpower they allocated to vision, they would have less brain capacity available for other purposes. Their larger bodies would also add to the neural load, since the more tissue a body has, the more nerves and central processing it requires. As an anthropologist, Dunbar is best known for ‘Dunbar’s Number’, a figure representing the average maximum number of relationships an individual can meaningfully sustain. It is based on the relationship between the size of the neocortex — the part of the brain whose development is most pronounced in humans — and the size of social groups. A large neocortex allows the brain to process a high volume of social information, and thus to sustain a dense network of relationships. For living humans, the typical maximum is around 150 individuals. When they measured Neanderthal eye sockets and did the brain-capacity calculation, Pearce and her colleagues worked out that Neanderthal group sizes would have been smaller than those of anatomically modern humans. Calculations based on the size of the brain’s frontal lobes, rather than its total volume, give a figure of around 100 individuals. ‘Neanderthal group sizes are almost identical to those of archaic humans who are our common ancestors,’ Dunbar said. ‘And fossil modern humans have predicted social group sizes that are exactly in line with what we see in living modern humans.’ The evidence from living humans does, however, suggest a note of caution about reading too much into Neanderthal skulls. Dunbar himself collaborated on a study that found women had larger social networks than men, even though their brain volumes were slightly smaller. In other words, the size of the case might not, after all, be a reliable indicator of how the brain inside functions. In another earlier study, Dunbar found that the size of people’s frontal lobes affects their ability to ‘mentalise’: that is, to form interrelated beliefs about what is going on in the minds of others. As Dunbar has pointed out, Shakespeare’s Othello requires audiences to believe ‘that Iago intends that Othello imagines that Desdemona is in love with Cassio’. That takes them to four levels of ‘intentionality’, or mental representation, but not to an especially compelling story. To bind the narrative spell, Shakespeare has Iago persuade Othello that Cassio reciprocates Desdemona’s feelings. This raises audiences to a fifth level, which is about the natural limit for most people. (In order to tell the tale, Shakespeare himself would have been operating at the sixth level, which is beyond most of us.) Going by his calculations of their frontal lobe size, Dunbar believes that Neanderthals come out one critical step lower than we do. If the ‘natural mentalising level’ for modern humans is fifth-order intentionality, then ‘for Neanderthals and archaics, it’s fourth-order’, he said. They wouldn’t be able to follow the plot of Othello — and they wouldn’t be able to form groups as large as those of modern humans, because ‘your mentalising ability determines the size of your social network’. That, Dunbar suggests, could have been a fatal constraint in times of ice and turbulent climate.
The smaller the group, the smaller the range it would have covered, limiting its opportunities to co-operate with other groups and secure the necessities of survival. And archaeological evidence does indeed appear to confirm that Neanderthal group sizes were smaller, and that they networked across the landscape less widely than modern humans. In Dunbar’s view, this was a fundamentally cultural weakness. ‘The key importance of the mentalising competence is that it affects lots of things that you can do culturally,’ he said. ‘It’s going to affect things like how complex your stories are, how complex your language is.’ In his book Regenesis (2012), the biologist George M Church looks forward to a time when human cloning is both practical and accepted. At that point, he suggests, ‘the whole Neanderthal creature’ could be resurrected through genome engineering and the use of a chimpanzee or ‘an extremely adventurous female human’ as a surrogate mother. This would ‘give Homo sapiens a sibling species that would allow us to see ourselves in new ways. It might give us an inkling into another form of human intelligence, or of different ways of thinking.’ Church’s cloned Neanderthal is at one turn a creature and at another a sibling species, but never a person in himself or herself. They would be an instrument for modern human self-regard, just as the Neanderthal has been since the 19th century, but in living rather than imagined form. In Church’s view, the reintroduction of Neanderthals would also be an endorsement of ‘true human diversity’. If the proponents of Neanderthal equality are right, though, neo-Neanderthal children would take up whatever culture they found themselves in, just like other children. They might have some difficulties making themselves understood, if it turned out that their vocal tracts did not enable them to articulate modern human speech clearly, but they would have no difficulties in understanding. Their minds would be modern. On the other hand, if those who see a cognitive gap between Neanderthals and anatomically modern humans are right, Neanderthal children would face difficulties more profound than those that might arise from looking or sounding different. If Pearce, Stringer and Dunbar’s inferences are correct, neo-Neanderthals would grow up in a social world that was largely beyond them. They would not be capable of playing a full part in pre-electronic societies, let alone the technologically mediated hypersociality that is becoming the new human condition. Adapted only to each other, they would become a marginal tribe without even the memory of traditions to draw on, forced to make what kind of culture they could from the babble around them. Perhaps they might end up sharing a reservation with cloned neo-mammoths. Neanderthal de-extinction is an indefinite scientific and cultural distance ahead, but the founding data set for such a venture is already in place. The 2010 draft Neanderthal genome has been followed by a much more complete sequence, obtained from a toe bone found in Denisova Cave in southern Siberia. In March 2013 it was posted online by Svante Pääbo, the pioneer of ancient genomics, and his team at the Max Planck Institute in Leipzig. Denisova Cave has also yielded another ancient genome, this one derived from a finger bone.
It appears to represent a previously unknown lineage, a strain of humanity distinct from both Neanderthals and modern humans. ‘I think in the next year or two we will see real high-quality comparisons of those three genomes, pinning down more about what makes us similar and what makes us different,’ said Chris Stringer. With data at this level of detail and accuracy, he believes that ‘we can really start to pin down what it is in the genome that makes a Neanderthal a Neanderthal, a Denisovan a Denisovan, and modern humans modern humans’. The draft Neanderthal genome has already revealed differences from modern humans in genes that might affect cognition – which highlights an uncomfortable reason why Neanderthals matter so much to us: race. Already one non-scientific account, provoked by George M Church’s remarks in Regenesis, shows how the new genomic image of the Neanderthal could subvert perceptions of modern human unity. A Daily Mail article this January argued that, since up to four per cent of DNA in living people of non-African descent can be traced to Neanderthals, ‘if the Neanderthals did have any useful genes for intelligence, we most likely picked them up and honed them over generations’. The implication that ‘we’ do not include people of African descent, who would lack the full complement of genes for intelligence carried by everyone else, is even more insidious for being an unstated by-product of the argument against human genomic engineering. However it unfolds, the Neanderthal story seems destined to retain its racial undertones. It will also remain a story pulled this way and that by fragments whose age and scarcity gives them a power out of all proportion to their size. Another tooth, another toe-bone; another twist and another turn. Researchers will continue to draw opposite conclusions from the same evidence. But the pace of discovery and interpretation, of surprise and synthesis, seems to be gathering an unprecedented momentum. Being Neanderthal hasn’t been this exciting for more than 30,000 years. | Marek Kohn | https://aeon.co//essays/you-might-be-closer-to-the-neanderthals-than-you-thought | |
Architecture | America’s national parks are overrun with cars and visitors – what happened to the spirit of wilderness preservation? | It is early morning, and I am sitting on a boulder near the bottom of Yosemite Falls. I have lost my bearings, meandering around in a maze of paved trails that I never knew existed until today. I worked and lived in Yosemite for about 15 years and thought I knew the place like the back of my hand, but these new paths have replaced old dirt trails that had existed for more than 100 years, winding along through a magnificent black oak forest where I used to walk. Many of the trees are now cut down to ‘enhance’ views for the increasing numbers of visitors. There must be several hundred tourists milling around me below the Falls, and 20 or 30 tour buses lying in wait for them to return to their vehicular lair of comfort. I can see not only the upper and lower Yosemite Falls, but also the tumultuous creek flowing toward the centre of the Valley, to the Merced River that is the meandering lifeblood of the plants, animals and ecology of this incomparable place. The river has been constricted for years because of road embankments, water and sewer lines, paved trails, more than 1,000 buildings, and compacted campsites. Water doesn’t flow through the valley like it used to, and this has altered plant and animal species, what types of meadows exist where and why, and has allowed trees to encroach into once open spaces. Meanwhile, the visitors for whom all this infrastructure exists spend about five minutes taking pictures of the Falls, interspersed with chatting with their well-dressed friends. In past years, I have sat here alone and in silence for hours, watching daylight enter the valley or moonlight leave. I try not to judge but I cannot help feeling that something is not right about this place any more. Maybe the something is this. It’s early morning and my eye catches the glint of sunshine rising over the top of Half Dome in the eastern part of the Valley, reminding me of a summer day in 1970 when I climbed this rock formation via its most popular route, where cables had been installed to assist people on the steeper portions of the climb. Back then, I saw no one in the vicinity of the cables and no one on top. This year the US National Park Service (NPS) has initiated a permit system that will allow only 300 people per day to attempt the journey to the top of Half Dome. Climbers use cables to negotiate Half Dome. Photo by Michael Maloney. Or maybe the something is this. I recall the day in 2011 when Don Neubacher, the Superintendent of Yosemite National Park, expressed pride that Yosemite had just received its 4 millionth annual visitor. Visiting parks bolsters local economies and gets people excited about the beauty of nature. But typically there are two months per year when cars in Yosemite exceed parking places. On 2 July 2011, there was a two-hour wait in vehicles to enter the park, and a record total of 7,190 vehicles for 5,000 spaces (with much debate about whether 5,000 cars is unacceptably high in the first place). Or maybe the something is this. I can’t stop looking at the bear trap not far away from the Falls. Many black bears in Yosemite have been trapped and relocated because they searched for food and hence acquired the label of ‘nuisance’ bears. In the 1960s up to 40 bears were killed each year in the park; even now between 1 and 5 bears are killed every year by park management.
More trapping and killing, of course, took place in parks such as Yellowstone and Glacier. People love national parks, that’s certain. My fear is that they might love them to death. An apocryphal story has it that on 19 September 1870, near the junction of the Firehole and Gibbon rivers in what is now Yellowstone National Park in Wyoming, during an evening campfire with fellow explorers, a Montana lawyer named Cornelius Hedges coined the term ‘national park idea’. In reality, a number of individuals contributed to this new ideal, including the artist George Catlin, the naturalist John Muir, the biologist George Wright, and the entrepreneur George W Marshall. They shared a belief in setting aside federal areas of ‘natural and scenic wonders’ for the use and enjoyment of all people, rather than just for individuals who might capitalise on personal claims on those lands. Two years after Hedges’s apocryphal campfire, Congress passed legislation establishing Yellowstone National Park, and subsequently created another nine national parks. In 1916 the National Park Service Organic Act established a management authority that would administer not only national parks but also national monuments and significant historic sites. The Act describes the purpose of the NPS as being: ‘to conserve the scenery and the natural and historic objects and the wild life therein and to provide for the enjoyment of the same in such manner and by such means as will leave them unimpaired for the enjoyment of future generations’. Currently, there are 59 national parks in the care of the NPS. The service’s goals and purposes have also been influential far beyond American shores, with many other countries modelling their own parks on the NPS legislation, methods of administration, training of park rangers, and methods of interpreting parks to visitors. The vision of the national park idea has spread despite being fraught with an ambiguity that influences national park management to this day. For example, although the ‘national park idea’ was rooted in an ideal of participatory democracy through decision-making and policy-making, it never held that parks should be a place for all people or be all things to all people. The NPS seems continually embroiled in contentious policy issues, and the question remains whether the agency can ever fulfil its primary duty to protect parks and the idea that animates them, even in the face of those who clamour for more visitation, access and development. The problem is not new. After the establishment of Yellowstone National Park and other parks which were created prior to the NPS in 1916, it took little time for hoteliers, the Union Pacific Railroad and other transportation companies, mercantilists, and others to provide goods and services to park visitors and, importantly, means of both getting people to parks and keeping them there. Meadows were cleared; fences for livestock erected; roads built, as well as hotels, stores, and administrative and living quarters for staff; later they were joined by downhill ski areas, golf courses, swimming pools, tennis courts, and modern bars, to say nothing of numerous and large campgrounds.
As the first director of the NPS, Stephen T Mather, declared in 1917: ‘Scenery is a hollow enjoyment to the tourist who sets out in the morning after an indigestible breakfast and a fitful night’s sleep on an impossible bed.’ By the 1970s, most parks had concessions owned by some of the largest US corporations, which were steadfast in encouraging use and development in the parks. There are now more than 500 active concession contracts to provide services in national parks, which gross more than $1 billion annually. Such concessioners have frequently lobbied the NPS to increase use and development, and tenaciously fought any suggestion of imposing restrictions. Yosemite National Park has more than 1,000 buildings — a third of which belong to the NPS, and the rest to concessions — many of them in Yosemite Valley alone, an area of about 21 sq km and one of the most beautiful and unique places on Earth. On a typical summer day, the valley has close to 20,000 visitors and can have massive traffic jams. Here, as in other popular parks such as Grand Canyon in Arizona, annual visitor numbers exceed 4 million. Unfortunately, most of the development in parks has occurred in the most scenic and significant conservation areas of the parks — the very places responsible for the creation of the parks 100 years ago — places such as Yosemite Valley, Grant Village in Yellowstone, Wyoming, the South Rim of the Grand Canyon, and many others. A bald eagle in Kenai Fjords National Park, Alaska. Photo by John Lemons. In Desert Solitaire (1968), a polemic about the problems of national parks and the beauty of America’s Southwest, Edward Abbey gores many an ox. He wrote the book after working as a seasonal ranger in Arches National Park in Utah, and accused the NPS of bringing ‘industrial tourism’ to both Arches and Canyonlands, national parks in Utah, by tampering with the land to accommodate development and roads for an unacceptably high number of tourists. Abbey’s targets were numerous: conservation groups who purportedly wanted to conserve the wilderness quality of lands in the Southwest but were guilty of too much compromise with federal and state agencies; local chambers of commerce that promoted development of the areas; even his wives (he had five) who thought he loved his beat-up pickup truck more than he loved them because it took him to the heads of gullies where he then would walk for days. Abbey said that he would rather kill a man than a snake. Not because he loved snakes or hated men. It was, he said, ‘a question of proportion’. Parks should be about proportion, Abbey cautioned. Reluctantly, people are allowed into the great temples of red rocks and red canyons, but proportionality meant that most people stay in the Southwest parks only as visitors. He cautioned people against going to the parks he described in his book, saying that those parks were already gone or going fast. Desert Solitaire was not a travel guide, but an elegy. When the NPS has attempted to reduce park access it has come up against powerful countervailing forces. Yosemite’s 1980 General Management Plan recommended reducing car-accessible campsites by 60 per cent and parking provision by 75 per cent.
Such proposals provoke outcries from lobbyists such as Chuck Cushman, the founder and executive director of the American Land Rights Association — a non-profit, public-interest advocacy group based in Battle Ground, Washington — who has made a career of helping private land owners defend their (supposed) property rights against government agencies and fending off conservation ‘threats’ to the use of federal lands. When it comes to discussing NPS policies and actions, Cushman is not one to mince words. ‘What they’re doing is nothing less than stealing a national park from the people,’ he bluntly told a reporter for World Net Daily in 2003. Needless to say, none of the proposals for reducing cars in Yosemite Valley have been carried out. A common interpretation of the NPS Organic Act is that it provides equally for conservation and for use and enjoyment, hence this push and pull between the conservation of parks and the use and enjoyment of them is extremely difficult to resolve. But neither Congress, the courts, nor the NPS ever defined the ambiguous words of the Organic Act. If national parks are truly to be the ‘crown jewels’ of a nation’s scenery, plants and wildlife, the NPS must develop a visionary policy to guide the management of its parks. Parks cannot be all things to all people; if the NPS is to err, it ought to favour conservation of scenery and wildlife at the expense of use. Indeed, the Organic Act has consistently been interpreted by Congress, courts, and park scholars to place conservation (or preservation) of parks as the fundamental fiduciary responsibility of the NPS. On the other hand, the NPS has wide administrative latitude in what it can do as per the Administrative Procedures Act, which automatically gives discretion to the heads of federal agencies unless decisions are arbitrary or capricious. And it is a ‘stakeholder’ agency, meaning that it must try to respond to the views of its stakeholders. Given that those stakeholders range from people such as Edward Abbey to those such as Chuck Cushman and hold such divergent viewpoints, this seems an almost impossible task. Many scholars of parks believe that the agency has failed to develop a strong and inspirational conservation vision at its highest levels, as witnessed by the fact that visitation and development have increased in all the parks over the years, to the detriment of the very scenery, plants, and wildlife that the parks were established to protect. This failure is not due to a lack of discussion about the future and purpose of the national parks of the US. In preparation for the NPS Centennial Celebration on 16 August 2016, the Department of Interior, the NPS, and parks’ concessioners have been issuing reports and holding meetings about the future of our national parks. Some of the reports include The Future of America’s National Parks, Advancing the National Park Idea, the 2012 progress report from America’s Great Outdoors Initiative, A Call to Action by the conference America’s Summit on National Parks: Taking Action for a New Century and Revisiting Leopold: Resource Stewardship in the National Parks. With the exception of Revisiting Leopold (after a 1963 report into wildlife management in the parks), none of the reports mention, except in passing, that we are losing the biggest battle confronting parks, that of conservation versus use and development. Even more importantly, not one of the reports makes a claim for an inspiring vision to guide national parks through the future. 
And I stress the importance of this last point, because ultimately the battle to save our national parks resides within each of us. If asked, I would recommend to the NPS Centennial Commission two additional books to serve as the basis for discussions and policies, both of which canvass so much of what national parks should be, while the official reports decidedly do not. Recall that the NPS has never defined, in any concrete manner, ‘conservation’. But Joseph Sax, a professor of law at the University of California, did. In his short but insightful and still-relevant book Mountains Without Handrails (1980), he managed to distil many of the essential problems that had been plaguing national parks since their inception, and still do. Existing levels of use and development in most parks degrade the scenery and natural resources: this is without question. But it is worse: they hinder the very purpose of preserving those things, which is to allow visitors to contemplate and reflect on their connection to the natural world. Sax based his ideas on those of the 19th-century social critic Frederick Law Olmsted, who would have thought large modern hotels and traffic jams were anomalies in parks. Such commercialism and crowding intrudes upon our ability to contemplate the mystery and grandeur of this thing called ‘nature’. In Olmsted’s view, conservation of national parks is incompatible with high levels of use and development because national parks should give the ordinary citizen an opportunity to exercise and educate the contemplative faculty. For that reason, Olmsted wrote in 1865, the establishment of nature parks and public places was ‘justified and enforced as political duty’. The more nature there is in national parks, the greater allowance is made for the free roaming of the human spirit and intelligence: conservation begets freedom. The setting is a precondition for activities that cultivate human independence, curiosity, and self-directed thought. It is only those areas, free from development, that allow contemplation and reflection, and that do not depend for our attention on artificial entertainment found in such places as modern hotels, bars and ski slopes. The ideas of Olmsted and Sax provide a strong basis for a contemplative, reflective and non-consumptive experience in national parks, although their views are decidedly anthropocentric — that is, they centre on human needs rather than those of wildlife or the ecosystems in which they exist. The second book I would recommend is The Fallacy of Wildlife Conservation (1981) by John Livingston, a Canadian biologist. Sax and Livingston come to many of the same conclusions about the need for parks to be free of development and heavy visitation. However, Livingston rejects the idea that conservation should serve human needs (even rarefied ones). He reminds us that the human assault on such places as national parks will not lessen until we change to recognise that we are neither separate from, nor better than, other animals. We are only different. Relatively undeveloped national parks serve to benefit animals, not human beings, no matter how sympathetic those humans might be. When it comes to national parks, technology, politics and complicated public transportation schemes will not provide the solution.
Only a change in our own values and what we accept as our place in nature will make any difference. Although the NPS 2012 report Revisiting Leopold provides a reminder about the scientific reasons to protect nature in national parks (and the challenges of doing so), Livingston’s The Fallacy of Wildlife Conservation provides a powerful philosophical rationale to underpin these efforts. It is not, then, the standoff between conservation on the one hand and use and development on the other that is the most significant problem for national parks. Rather, it is the challenge of communicating the founding ideal of the parks to a wider public, and persuading them to expect their national park experience to be an authentic one, with little or no human-manufactured distraction. If national parks are to remain the pinnacle of a nation’s beauty, natural resources and cultural heritage, then they simply cannot be viewed and treated as typical recreation areas. And if such a pinnacle is achieved, then, as Edward Abbey said in Desert Solitaire: ‘They will complain of physical hardship, these sons of the pioneers. Not for long; once they rediscover the pleasures of actually operating their own limbs and senses in a varied, spontaneous, voluntary style, they will complain instead of crawling back into a car; they may even object to returning to desk and office and that drywall box on Mossy Brook Circle. The fires of revolt may be kindled — which means hope for us all.’ Many people hold a memory of experiences in some of the world’s national parks. These memories have helped to foster good times, and sustained people through more mundane or even difficult times: another gift that national parks and other special places can give us, even when we are distant from them. Yet for such treasured memories to be created, national parks must be places in which heightened sensory or aesthetic experiences can be had, either in moments of intense awareness, or over long periods of time in which we familiarise ourselves with the ecology and landscape of a place. These experiences must be sufficiently authentic and profound to etch themselves into our very being. This etching cannot be done in the midst of crowded conditions, excessive development, or traffic jams on narrow park roads. | John Lemons | https://aeon.co//essays/busy-and-degraded-americas-national-parks-are-in-decline | |
Automation and robotics | Computers could take some tough choices out of our hands, if we let them. Is there still a place for human judgment? | In central London this spring, eight of the world’s greatest minds performed on a dimly lit stage in a wood-panelled theatre. An audience of hundreds watched in hushed reverence. This was the closing stretch of the 14-round Candidates’ Tournament, to decide who would take on the current chess world champion, Viswanathan Anand, later this year. Each round took a day: one game could last seven or eight hours. Sometimes both players would be hunched over their board together, elbows on table, splayed fingers propping up heads as though to support their craniums against tremendous internal pressure. At times, one player would lean forward while his rival slumped back in an executive leather chair like a bored office worker, staring into space. Then the opponent would make his move, stop his clock, and stand up, wandering around to cast an expert glance over the positions in the other games before stalking upstage to pour himself more coffee. On a raised dais, inscrutable, sat the white-haired arbiter, the tournament’s presiding official. Behind him was a giant screen showing the four current chess positions. So proceeded the fantastically complex slow-motion violence of the games, and the silently intense emotional theatre of their players. When Garry Kasparov lost his second match against the IBM supercomputer Deep Blue in 1997, people predicted that computers would eventually destroy chess, both as a contest and as a spectator sport. Chess might be very complicated but it is still mathematically finite. Computers that are fed the right rules can, in principle, calculate ideal chess variations perfectly, whereas humans make mistakes. Today, anyone with a laptop can run commercial chess software that will reliably defeat all but a few hundred humans on the planet. Isn’t the spectacle of puny humans playing error-strewn chess games just a nostalgic throwback? Such a dismissive attitude would be in tune with the spirit of the times. Our age elevates the precision-tooled power of the algorithm over flawed human judgment. From web search to marketing and stock-trading, and even education and policing, the power of computers that crunch data according to complex sets of if-then rules is promised to make our lives better in every way. Automated retailers will tell you which book you want to read next; dating websites will compute your perfect life-partner; self-driving cars will reduce accidents; crime will be predicted and prevented algorithmically. If only we minimise the input of messy human minds, we can all have better decisions made for us. So runs the hard sell of our current algorithm fetish. But in chess, at least, the algorithm has not displaced human judgment. The imperfectly human players who contested the last round of the Candidates’ Tournament — in a thrilling finish that, thanks to unusual tiebreak rules, confirmed the 22-year-old Norwegian Magnus Carlsen as the winner, ahead of former world champion Vladimir Kramnik — were watched by an online audience of 100,000 people. In fact, the host of the streamed coverage, the chatty and personable international master Lawrence Trent, pointedly refused to use a computer engine (which he called ‘the beast’) for his own analyses and predictions.
The idea, he explained, is to try to figure things out for yourself. During a break in the commentary room on the day I was there, Trent was eating crisps and still eagerly discussing variations with his plummily amusing co-presenter, Nigel Short (who himself had contested the World Championship against Kasparov in 1993). ‘He’ll find Qf4; it’s not difficult to find,’ Short assured Trent. ‘Ng8, then it’s…’ ‘It’s game over.’ ‘Game over!’ Chess is an Olympian battle of wits. As with any sport, the interest lies in watching profoundly talented humans operating at the limits of their capability. There does exist a cyborg version of the game, dubbed ‘advanced chess’, in which humans are allowed to use computers while playing. But it is profoundly boring to watch, like a contest over who can use spreadsheet software more effectively, and hasn’t caught on. The ‘beast’ can be a useful helpmeet — Veselin Topalov, a previous challenger for Anand’s world title, used a 10,000-CPU monster in his preparation for that match, which he still lost — but it’s never going to be the main event. This is a lesson that the algorithm-boosters in the wider culture have yet to learn. And outside the Platonically pure cosmos of chess, when we seek to hand over our decision-making to automatic routines in areas that have concrete social and political consequences, the results might be troubling indeed. At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerised cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves. What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013). More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children? What Marcus’s example demonstrates is the fact that driving a car is not simply a technical operation, of the sort that machines can do more efficiently. It is also a moral operation. (His example is effectively a kind of ‘trolley problem’, of the sort that has lately been fashionable in moral philosophy.) If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment. Meanwhile, as Morozov relates, a single Californian company called Impermium provides software to tens of thousands of websites to automatically flag online comments for ‘not only spam and malicious links, but all kinds of harmful content — such as violence, racism, flagrant profanity, and hate speech’. How do Impermium’s algorithms decide exactly what should count as ‘hate speech’ or obscenity? No one knows, because the company, quite understandably, isn’t going to give away its secrets. 
Yet rather than pursuing mere lexicographical analysis, such a system of automated pre-censorship is, again, making moral judgments. If self-driving cars and speech-policing systems are going to make hard moral decisions for us, we have a serious stake in knowing exactly how they are programmed to do it. We are unlikely to be content simply to trust Google, or any other company, not to code any evil into its algorithms. For this reason, Morozov and other thinkers say that we need to create a class of ‘algorithmic auditors’ — trusted representatives of the public who can peer into the code to see what kinds of implicit political and ethical judgments are buried there, and report their findings back to us. This is a good idea, though it poses practical problems about how companies can retain the commercial edge provided by their computerised secret sauce if they have to open up their algorithms to quasi-official scrutiny. If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’ A further problem is that some algorithms positively must be kept under wraps in order to work properly. It is already possible, for example, for malicious operators to ‘game’ Google’s autocomplete results — sending abusive or libellous descriptions to the top of Google’s suggestions when you type a person’s name — and lawsuits from people affected in this way have already forced the company to delve into the system and change such examples manually. If it were made public exactly how Google’s PageRank algorithm computes the authority of web pages, or how Twitter’s ‘trending’ algorithm determines the popularity of subjects, then unscrupulous self-marketers or vengeful exes would soon be gaming those algorithms for their own purposes too. The vast majority of users would lose out, because the systems would become less reliable. And it doesn’t necessarily require a malicious individual gaming a system for algorithms to get uncomfortably personal. Automatic analysis of our smartphone geolocation, internet-browsing and social-media data-trails grows ever more sophisticated, and so we can thin-slice demographic categories ever more precisely. From such information it is possible to infer personal details (such as sexual orientation or use of illegal drugs) that have not been explicitly supplied, and sometimes to identify unique individuals. Even when such information is simply used to target adverts more accurately, the consequences can be uncomfortable. Last year, the journalist Charles Duhigg related a telling anecdote in an article for The New York Times called ‘How Companies Learn Your Secrets’. A decade ago, the American retailer Target sent promotional baby-care vouchers to a teenage girl in Minneapolis. Her father was so outraged, he went to the shop to complain. The manager was equally taken aback and apologised; a few days later, he called the family to apologise again. This time, it was the father who offered an apology: his daughter really was pregnant, and Target’s ‘predictive analytics’ system knew it before he did. Such automated augury might be considered relatively harmless if its use is confined to figuring out what products we might like to buy. But it is not going to stop there. 
One day in the near future — perhaps this has already happened — an innocent crime novelist researching bloody techniques for his latest fictional serial killer will find armed men banging on his door in the middle of the night, because he left a data trail that caused lights to flash red in some preventive-policing algorithm. Perhaps a few distressed writers is a price we are willing to pay to prevent more murders. But predictive crime prevention is an area that leads rapidly to a dystopian sci-fi vision like that of the film Minority Report (2002). In Baltimore and Philadelphia, software is already being used to predict which prisoners will reoffend if released. The software works on a crime database, and variables including geographic location, type of crime previously committed, and age of prisoner at previous offence. In so doing, according to a report in Wired in January this year, ‘The software aims to replace the judgments parole officers already make based on a parolee’s criminal record.’ Outsourcing this kind of moral judgment, where a person’s liberty is at stake, understandably makes some people uncomfortable. First, we don’t yet know whether the system is more accurate than humans. Secondly, even if it is more accurate but less than completely accurate, it will inevitably produce false positives — resulting in the continuing incarceration of people who wouldn’t have reoffended. Such false positives undoubtedly occur, too, in the present system of human judgment, but at least we might feel that we can hold those making the decisions responsible. How do you hold an algorithm responsible? Still more science-fictional are recent reports claiming that brain scans might be able to predict recidivism by themselves. According to a press release for the research, conducted by the American non-profit organisation the Mind Research Network, ‘inmates with relatively low anterior cingulate activity were twice as likely to reoffend than inmates with high-brain activity in this region’. Twice as likely, of course, is not certain. But imagine, for the sake of argument, that eventually a 100 per cent correlation could be determined between certain brain states and future recidivism. Would it then be acceptable to deny people their freedom on such an algorithmic basis? If we answer yes, we are giving our blessing to something even more nebulous than thoughtcrime. Call it ‘unconscious brain-state crime’. In a different context, such algorithm-driven diagnosis could be used positively: according to one recent study at Duke University in North Carolina, there might be a neural signature for psychopathy, which the researchers at the laboratory of neurogenetics suggest could be used to devise better treatments. But to rely on such an algorithm for predicting recidivism is to accept that people should be locked up simply on the basis of facts about their physiology. If we erect algorithms as our ultimate judges and arbiters, we face the threat of difficulties not only in law-enforcement but also in culture. In the latter realm, the potential unintended consequences are not as serious as depriving an innocent person of liberty, but they still might be regrettable. For if they become very popular, algorithmic systems could end up destroying what they feed on. In the early days of Amazon, the company employed a panel of book critics, whose job was to recommend books to customers. 
When Amazon developed its algorithmic recommendation engine — an automated system based on data about what others had bought — sales shot up. So Amazon sacked the humans. Not many people are likely to weep hot tears over a few unemployed literary critics, but there still seems room to ask whether there is a difference between recommendations that lead to more sales, and recommendations that are better according to some other criterion — expanding readers’ horizons, for example, by introducing them to things they would never otherwise have tried. It goes without saying that, from Amazon’s point of view, ‘better’ is defined as ‘drives more sales’, but we might not all agree. Algorithmic recommendation engines now exist not only for books, films and music but also for articles on the internet. There is so much out there that even the most popular human ‘curators’ cannot possibly keep on top of all of it. So what’s wrong with letting the bots have a go? Viktor Mayer-Schönberger is professor of internet governance and regulation at Oxford University; Kenneth Cukier is the data editor of The Economist. In their book Big Data (2013) — which also calls for algorithmic auditors — they sing the praises of one Californian company, Prismatic, that, in their description, ‘aggregates and ranks content from across the Web on the basis of text analysis, user preferences, social-network-related popularity, and big-data analytics’. In this way, the authors claim, the company is able to ‘tell the world what it ought to pay attention to better than the editors of The New York Times’. We might happily agree — so long as we concur with the implied judgment that what is most popular on the internet at any given time is what is most worth reading. Aficionados of listicles, spats between technology theorists, and cat-based modes of pageview trolling do not perhaps constitute the entire global reading audience. So-called ‘aggregators’ — websites, such as the Huffington Post, that reproduce portions of articles from other media organisations — also deploy algorithms alongside human judgment to determine what to push under the reader’s nose. ‘The data,’ Mayer-Schönberger and Cukier explain admiringly, ‘can reveal what people want to read about better than the instincts of seasoned journalists’. This is true, of course, only if you believe that the job of a journalist is just to give the public what it already thinks it wants to read. Some, such as Cass Sunstein, the political theorist and Harvard professor of law, have long worried about the online ‘echo chamber’ phenomenon, in which people read only that which reinforces their currently held views. Improved algorithms seem destined to amplify such effects. Some aggregator sites have also been criticised for paraphrasing too much of the original article and obscuring source links, making it difficult for most readers to read the whole thing at the original site. Still more remote from the source is news packaged by companies such as Summly — the iPhone app created by the British teenager Nick D’Aloisio — which used another company’s licensed algorithms to summarise news stories for reading on mobile phones. Yahoo recently bought Summly for US$30 million. However, the companies that produce news often depend on pageviews to sell the advertising that funds the production of their ‘content’ in the first place. So, to use algorithm-aided aggregators or summarisers in daily life might help to render the very creation of content less likely in the future. 
In To Save Everything, Click Here, Evgeny Morozov draws a provocative analogy with energy use: Our information habits are not very different from our energy habits: spend too much time getting all your information from various news aggregators and content farms who merely repackage expensive content produced by someone else, and you might be killing the news industry in a way not dissimilar from how leaving gadgets in the standby mode might be quietly and unnecessarily killing someone’s carbon offsets. Meanwhile in education, ‘massive open online courses’ known as MOOCs promise (or threaten) to replace traditional university teaching with video ‘lectures’ online. The Silicon Valley hype surrounding these MOOCs has been stoked by the release of new software that automatically marks students’ essays. Computerised scoring of multiple-choice tests has been around for a long time, but can prose essays really be assessed algorithmically? Currently, more than 3,500 academics in the US have signed an online petition that says no, pointing out: Computers cannot ‘read’. They cannot measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organisation, clarity, and veracity, among others. It would not be surprising if these educators felt threatened by the claim that software can do an important part of their job. The overarching theme of all MOOC publicity is the prospect of teaching more people (students) using fewer people (professors). Will what is left really be ‘teaching’ worth the name? One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice. If you are feeling gloomy about the automation of higher education, the death of newspapers, and global warming, you might want to talk to someone — and there’s an algorithm for that, too. A new wave of smartphone apps with eccentric titular orthography (iStress, myinstantCOACH, MoodKit, BreakkUp) promise a psychotherapist in your pocket. Thus far they are not very intelligent, and require the user to do most of the work — though this second drawback could be said of many human counsellors too. Such apps hark back to one of the legendary milestones of ‘artificial intelligence’, the 1960s computer program called ELIZA. That system featured a mode in which it emulated Rogerian psychotherapy, responding to the user’s typed conversation with requests for amplification (‘Why do you say that?’) and picking up — with its ‘natural-language processing’ skills — on certain key words from the input. Rudimentary as it is, ELIZA can still seem spookily human. Its modern smartphone successors might be diverting, but this field presents an interesting challenge in the sense that, the more sophisticated it gets, the more potential for harm there will be. One day, the makers of an algorithm-driven psychotherapy app could be sued by the survivors of someone to whom it gave the worst possible advice. What lies behind our current rush to automate everything we can imagine? Perhaps it is an idea that has leaked out into the general culture from cognitive science and psychology over the past half-century — that our brains are imperfect computers. If so, surely replacing them with actual computers can have nothing but benefits. Yet even in fields where the algorithm’s job is a relatively pure exercise in number-crunching, things can go alarmingly wrong. 
Indeed, a backlash to algorithmic fetishism is already under way — at least in those areas where a dysfunctional algorithm’s effect is not some gradual and hard-to-measure social or cultural deterioration but an immediate difference to the bottom line of powerful financial organisations. High-frequency trading, where automated computer systems buy and sell shares very rapidly, can lead to the price of a security fluctuating wildly. Such systems were found to have contributed to the ‘flash crash’ of 2010, in which the Dow Jones index lost 9 per cent of its value in minutes. Last year, the New York Stock Exchange cancelled trades in six stocks whose prices had exhibited bizarre behaviour thanks to a rogue ‘algo’ — as the automated systems are known in the business — run by Knight Capital; as a result of this glitch, the company lost $440 million in 45 minutes. Regulatory authorities in Europe, Hong Kong and Australia are now proposing rules that would require such trading algorithms to be tested regularly; in India, an algo cannot even be deployed unless the National Stock Exchange is allowed to see it first and decides it is happy with how it works. Here, then, are the first ‘algorithmic auditors’. Perhaps their example will prompt similar developments in other fields — culture, education, and crime — that are considerably more difficult to quantify, even when there is no immediate cash peril. A casual kind of post-facto algorithmic auditing was already in evidence in London, at the Candidates’ Tournament. All the chess players gave press conferences after their games, analysing critical positions and showing what they were thinking. This often became a second contest in itself: players were reluctant to admit that they had missed anything (‘Of course, I saw that’), and vied to show they had calculated more deeply than their adversaries. On the day I attended, the amiable Anglophile Russian player (and cricket fanatic) Peter Svidler was discussing his colourful but peacefully concluded game with Israel’s Boris Gelfand, last year’s World Championship challenger. Juggling pieces on a laptop screen with a mouse, Svidler showed a complicated line that had been suggested by someone using a computer program. ‘This, apparently, is a draw,’ Svidler said, ‘but there’s absolutely no way anyone can work this out at the board’. The computer’s suggestion, in other words, was completely irrelevant to the game as a sporting exercise. Now, as the rumpled Gelfand looked on with friendly interest, Svidler jumped to an earlier possible variation that he had considered pursuing during their game, ending up with a baffling position that might have led either to spectacular victory or chaotic defeat. ‘For me,’ he announced, ‘this will be either too funny … or not funny enough’. Everyone laughed. As yet, there is no algorithm for wry comedy. | Steven Poole | https://aeon.co//essays/which-decisions-should-we-leave-to-algorithms | |
Religion | It must have been one of the last Passovers to be celebrated by the Jews of Damascus, before the last families left | In 2004 Damascus was on the cusp of modernity. The first privately operated bank had just been established, plans for a securities exchange were underway, the internet was in its infancy, and the Old City’s first boutique hotel was getting ready to open its doors. All the while the Al-Assads, Syria’s first couple — if lately its most detested — courted Western hearts. That spring I travelled to Syria to document its archaeological heritage; but I was also lucky enough to celebrate Passover in the home of a local family. Passover, with its universalist message of a people’s liberation from slavery, is the most celebrated of Jewish festivals. It commemorates the ancient Israelites’ flight out of Egypt and, as such, it records the birth of Israel as a nation, ahead of its long march to becoming a homeland. At Passover, Jews are reminded of their deliverance via a unique covenant that obligates every Jew to ‘tell thy son on the day saying, it is because of what the Lord did for me when I came out of Egypt’ [Exodus 13.8]. This duty is met in several ways: through a liturgical compilation of songs, folk stories, prayers and biblical passages, known as the Haggadah; by the ritual consumption of symbolic foods, known as the Seder; and by the re-enactment of the Biblical Exodus from Egypt. Through these ritualised processes, past is merged with present as the ancient stories of liberation become relevant to each individual. The means of effecting this time-travel is metaphor, with its innate ability to draw upon the collective memory and to ‘telescope’ the familiar with the unfamiliar. Through metaphor, Passover becomes perennially relevant: it has timeless appeal. And so, during my travels in Syria in 2004, I was thankful that the US Consulate in Damascus understood my yearnings to experience its ritual. An official there arranged an introduction to a Jewish family, and I duly met Sami Kabariti outside a restaurant in the Christian Quarter of the Old City. Gaunt, and noticeably twitchy, Sami stood out from the crowds in his feast-day suit. He questioned me nervously, eager to establish that I was not a journalist before he invited me into his home. He said he was unemployed at the time, though I later heard he’d trained as a dentist. My sense, in fact, was that his role was to represent the Jewish community at the behest of the regime. From the restaurant, it was a short walk to the crumbling Harat Al-Yahud, the traditional Jewish Quarter, where the Kabariti family lived in a traditional, but simple, Ottoman-styled home. For the Seder meal itself, we were confined to a tight space, a small dining room lined with benches, with an intractable table in the centre. On it was a stack of three matzot — flat pieces of unleavened bread, the upper and lower of which symbolised the manna provided for Israelites in the wilderness. Khudur Kabariti, Sami’s elder brother and the head of the family, commenced the feast. He raised the middle matzah, recited the prayer of Maggid, and then broke the matzah in two so that the ‘bread of affliction’ might be remembered both before and after the ritual meal. ‘What was that I just ate?’ I asked my hosts. ‘Kameh, kaaa-meh. Mushroom. 
It needs thunder,’ was the reply On the copper Seder plate before us were foods to remind us of redemption from, and bondage in, Egypt: Marror (horseradish), the ‘bitter herbs’ of slavery; Karpas (parsley or anything else green and leafy), the springtime plants found by the Israelites in Egypt’s desert; Charoset (or Halek), a thick date-and-nut paste that represents the mortar of monuments built under conscription; and the Korban Pesach, a roasted shank bone, which commemorates the Temple sacrifice in Jerusalem. Salt water, a symbol of the tears of oppression, was also at hand. The Haggadah divides the Seder into two parts that turn on the themes of persecution and deliverance. But it is more than a victimology or ‘salvation history’. It is didactic and dialectic, and it elicits meaning for young and old alike. For many Jewish families, the highlight of the evening is when the youngest child is raised to pre-eminence and asked to answer the ‘Four Questions’, much as a student might be quizzed in a Greek Symposium. It is the child, not the parent, who clarifies ‘Why is this night different from all other nights?’ Jews are notorious for wanting to skip or rush through the various ritual consecrations and recitations of history, which moved the American novelist Jonathan Safran Foer to produce a new edition of the Haggadah. His book, New American Haggadah (2012), is an honouring rather than rewriting that, Foer hoped, would entice his co-religionists to ‘linger’. Our main meal in Damascus (over which we lingered plenty) consisted of a very mysterious stew. ‘What was that I just ate?’ I asked my hosts. ‘Kameh, kaaa-meh. Mushroom. It needs thunder,’ was the reply. Little did I know that I had been served the rare delicacy of desert truffles. These have a history more ancient than Passover itself, and are referenced as far back as the 18th century BC in the cuneiform letters of Assyria’s king Zimri-Lim. The philosopher Pliny, the poet Juvenal and even the Prophet Muhammad also mention them. In fact, in early Islamic interpretation, the Prophet ordained that the desert truffle was the manna that God gave the Israelites during their sojourn in the desert. Today, Bedouins believe that when bolts of lightning hit the ground, followed by loud claps of thunder, as my host indicated, truffles will be brought forth. Now it was my turn to be questioned. ‘Is it better in London or America?’ and ‘How much do you pay for your rent?’ asked my host. I assumed that Khudur was merely curious about the cost of living in London, not realising that my host was trying to leave Syria, and was eager to determine the most economical city for his family. His questions echoed in theme the Four Questions in the Haggadah that remind the Jews where they’ve come from and where they’re heading — confirming, in essence, their identity as a people. They had everything to do with liberation and exodus; and they found a deep resonance in the complex modern history of Syria and its Jews. Historically, Jews living in Islamic lands were better treated than Jews in Europe. But there were notable periods of darkness, and the period of Mamluk rule (1250-1516) stands out. The Mamluks were a caste of warrior slaves, originally from Georgia, who converted to Islam and were given military and administrative duties. The last of the Mamluks unleashed a wave of anti-Semitism in Arab lands in the 1820s and 1830s. 
Then, in 1831 Egypt’s Muhammad Ali invaded Syria and, under his watch, Christian anti-Semitism, coupled with popular Muslim anti-Jewish sentiment, spilt onto the streets in what became known as ‘The Damascus Affair’ of 1840. In February that year, the Capuchin monk, Father Thomas, and his servant, Ibrahim, disappeared without trace. The Capuchins wasted no time circulating the ancient blood libel accusation: that Jews had murdered both men in order to use their blood for the rites of Passover. In all probability, the murders resulted from a soured business deal with a Muslim muleteer. Yet, in large part, suspicion fell upon the Jews because of the prejudices of the two investigators: the French consul, Ulysse de Ratti-Menton; and the Egyptian governor of Syria, Sherif Pasha, who courted the sympathies of the French, and so allowed the accusations to fester. Eventually, after confessions were extorted under torture, eight prominent members of the Jewish community were charged with ritual murder. All were imprisoned. Of these, several died under torture and one was forced to convert to Islam, while the ancient synagogue of Elijah at Jobar was desecrated at the hands of a vengeful mob. The blood libel is a grotesque perversion of the Temple sacrifice of Pesach — the ‘sacrifice of exemption’, in which the blood of a lamb was painted on the doorposts of Jewish households as a sign to the Angel of Death to pass them over on his mission to slay the firstborn of the ancient Egyptians. In the hands of Judaism’s detractors, this dependence on sacrificial blood is contorted, and the blood, claimed as lamb’s blood, is assumed to be of human origin. Hence the link with ritual murder. It is estimated that there have been more than 150 cases of blood libel that have resulted in violence against the Jews. Jews continued to live and thrive in Syria, in spite of the perturbations caused by the Damascus Affair. Up until the Aleppo pogrom of 1947, there were 30,000 Jews in the country; in the wake of the Israeli War of Independence, this number decreased by half. Over the next two decades, a further 10,000 Jews fled Syria. Thereafter, a ban on Jewish emigration was in place: under president Hafez Al-Assad, Jews were, in effect, held captive, the de facto pawns in Syrian foreign policy. When the ban was finally lifted in 1992, the majority of Syria’s remaining 5,000 Jews left. When I visited in 2004, there were fewer than 60 Jews left, practically all of them in Damascus. The ritual allows Jews to remove themselves from the banality and restrictions of daily life in Syria — and define themselves on their own terms While the Assad regime has, for the most part, tolerated Judaism out of traditional Islamic deference to Al’Kitab, or the non-Muslim ‘people of the book’, occasionally more overt hatred has flared. A prime example is the book The Matzah of Zion (1983), in which Mustafa Tlass vividly revisits the Damascus Affair. Alarmingly, its author was Syria’s defence minister for more than 30 years, until 2004. At a more mundane level, Jews are routinely harassed by the Mukhabarat, the secret police, using extreme forms of physical and electronic surveillance. I might not have heard it myself, and my hosts were at pains to hide it, but that harassment was definitely present at our Seder. Over the course of the evening, the phone rang three times. Each time it was Ahmed, the Mukhabarat designate. 
But even Ahmed could not control the power of ritual to forge communitas; it chooses the assembled, whether the state approves or not. For ritual offers a counterpart to Chronos, the god of historic or profane time, with his man-made milestones marked in linear fashion. Instead, ritual is the handmaid of Cronus. Cronus, later identified with Kairos, was the god of recurring time — a time measured in Saturnalian rites of seasons and cyclical festivals. Kairos disregards Chronos’ linear markers and milestones, and so transcends profane time. It speaks of the kind of sacred temporality first elaborated by the French anthropologist Arnold van Gennep in The Rites of Passage (1909), and further explored by the British cultural anthropologist Victor Turner in The Forest of Symbols (1967). In their views, time does not flow or march, it oscillates around three poles. In the first stage of ritualised behaviour, the participants detach themselves from ordinary or profane time and enter into a fictive or sacred realm: in Passover, this occurs the evening before the feast, when all leavened products are removed from the home. In the final phase, once the ritual meal has concluded, the jubilant participants must return to the life they left behind the day before. Yet it is in the middle phase, the ‘liminal’ or ‘threshold’ phase, where dissent is at its most potent. This phase designates the ambiguous state of liminality, where the initiate undergoes a role-reversal. Here, time flows backwards as the participants relive the triumphs of their ancestors in coming out of Egypt. It is a time of invigoration, when ‘power’ is conferred and exemplified through the mimetic process. For the purpose of this discussion, the ritual allows Jews to remove themselves from the banality and restrictions of daily life in Syria. They defy their subjugate status in society and define themselves on their own terms, as the victors not the vanquished. Adversity, as illustrated through the Haggadah, can be overcome. Metaphor is the agent of dissent here. As the poet Samuel Taylor Coleridge has noted, through metaphor, our imagination dissolves. It diffuses and it dissipates only to re-create, reconciling what is seemingly opposite and contrary. In the scribe’s hands, metaphor is a subversive tool, since what we dare to imagine can never be fully known or constrained. It is endless. According to Aristotle, the ‘greatest thing by far is to be a master of metaphor’. And yet what metaphor conveys cannot be said directly: its subterfuge is ritual. Thus, even if the Exodus occurred only once in history, through ritualisation at the Seder the distant past reoccurs in the present. The Seder, in other words, reaffirms the special covenant, allowing each person to regard themselves as though they themselves have emerged, newly freed, from Egypt. My Seder with the Kabaritis did not conclude with the customary expression of hope ‘L’shana haba’ah b’Yerushalayim’ (‘Next year in Jerusalem’), but with a drama. Each of us in turn wrapped a matzah in a napkin and slung it over the left shoulder as if it were a sack of all our worldly goods. We responded to the question ‘Where are you coming from?’ with ‘Mizr!’ (Egypt). Then, with the sack over our right shoulder, came the second question: ‘Where are you going to?’ We answered: ‘Yerushalayim!’ (Jerusalem), before passing the wrapped matzah to the next in attendance. And then it was time for me to pack my own, real, bags and be on my way. Today Damascus is a relic of my travels. 
My hosts are long gone, too: to Brooklyn, I am told. The Old City synagogues are abandoned, and the ancient village of Jobar, now a suburb of Damascus, has fallen victim once again, albeit to a very different conflict. My attention is now taken up with reporting the demise of historic Jewish Heritage sites in the Middle East. Should Damascus fall, as many of us believe it will, there remains the very real possibility that there will soon be no trace of a Jewish past in Syria at all. It is tragic but not unique. For now, all I can do to calm my fears is to piece together the evidence of bygone years and invoke Mnemosyne, the goddess of Memory, to whom I am also captive. | Adam Blitz | https://aeon.co//essays/passover-in-damascus-carries-a-bittersweet-savour-of-the-past | |
Art | The Arab Spring began in Tunisia, and now its artists and dancers are keeping the flame of protest alive | The Tunisian torture choir, as I dubbed them for convenience, should have been competing for sound-space with the group of rappers and streetdancers wearing V for Vendetta Guy Fawkes masks. But somehow it all blended pretty well amid the sprawling outdoor space at Tunisia’s World Social Forum, an annual gathering of international civil society organisations. This was the forum’s first meeting in an Arab country — appropriately enough, the Arab country that kicked off a chain of revolutions two years ago. The choir, a group of men chained together and dressed in prison uniforms, were singing about Tunisia’s former political prisoners — more than 3,000 at recent count, incarcerated and horribly abused by the former regime; now released, but adrift and unsupported, they are struggling to re-enter society. The rappers, mostly Tunisian students, were meanwhile spitting rhymes about the economy, unity, employment, and periodically rousing the crowd to chants of: ‘Work! Freedom! Dignity!’ These were the demands of the Tunisian revolution that toppled the Western-backed dictator Zine El Abidine Ben Ali in January 2011. The choking regime might be gone, but those three demands are still unmet. When I got closer to the all-male political choir — tuneful chanters, really — I could see two members acting out torture scenes, with one using a stick to fake-beat and fake-contort the other’s body into a stress position. Nearby, a group of Palestinians were chanting and debke dancing — Levantine folk-dancing — aware that Palestine was on everyone’s mind at this meeting of global justice movements. Meanwhile, within a group of forum delegates next to us, an argument had broken out over Syria, one of many heated discussions about that conflict during the forum’s three days — some of them, by all accounts, frightening and aggressive — at once a symptom and an emblem of the visceral divisions over that terrible war. As forum participants descended on Tunis from around the world at the end of March, they were met with relief in a country where the tourism industry has tanked, in part due to the Western media’s fear-mongering over violent political tensions. There is suddenly a buzz in streets that have been too quiet ever since foreign offices started issuing warnings over travel to Tunisia. The cacophony of politically-themed protest culture bursting onto the streets these days is not necessarily directed at the visitors, however. Nor is it particular to this week. It has been growing in the capital and beyond since the revolution, not least because it now can, after all the years of repression. But also because there are things to be said, directly to the people, that no other medium can convey as well. Here in Tunisia, artists of all types are asserting public ownership of spaces once controlled by the state, reclaiming the streets, reasserting the significance of protest as a political ideal. That sentiment is fizzing up everywhere in the capital: among Tunisians performing a local spin on the Harlem Shake; in the streetdance clusters on side streets; among the teens who burst into rowdy political song on public transport, banging train doors as percussion, and indulged by other passengers even if the din makes their babies cry. It’s a conscious, proud, humorous repossession of spaces that were once out of bounds, used exclusively by the regime to push its own self-aggrandising doctrines. 
‘Before, we couldn’t speak or do anything in the public space: it was used as a space for propaganda,’ said Selima Karoui, a visual artist, university teacher and journalist with the Tunisian collective blog, Nawaat, speaking of a time when demonstrations or public displays were pretty much by official appointment only. If you were doing something on the street, it was assumed that the dictator had put you there. Even today, people often assume at first that public performances might be something to do with remnants of the old regime. But having grown up with state censorship, and the self-censorship that inevitably follows, some Tunisians are beginning to let go. ‘It is normal to feel the need to express oneself in public,’ says Karoui. ‘It is prehistoric. Innate.’ ‘We don’t need the system to recognise us,’ Ben Yahmed told me. ‘We need the people to recognise us’ One group calling itself Art Solution stages seemingly spontaneous eruptions of dance in places where dance doesn’t ordinarily belong: the edges of the barbed-wire barriers surrounding the Ministry of the Interior (still hated for all the past incarcerations, torture and instances of police brutality), and in central Tunis or the capital’s main market and old medina. YouTube clips testify to Art Solution’s powerful effect in public spaces: they show all types of ‘danseurs citoyens’ or citizen dancers — from the smiling man in his work-stained clothes to the lady who loosens her headscarf to join the darbouka drums that beat to a blend of traditional and modern dance. These are crowds that watch and interact — which is exactly the desired outcome. ‘Nobody can resist,’ said Bahri Ben Yahmed, one of Art Solution’s founders. ‘When men and women in the crowds join me in the dance, I feel the connection as though we are family.’ The idea here is to ‘democratise culture’ — to spread its reach from the elites into other social layers, with performances for which permits are deliberately not sought, partly as a political statement and partly as an indication of intent. ‘We don’t need the system to recognise us,’ Ben Yahmed told me. ‘We need the people to recognise us.’ There is something else going on here, too, a subtext but just as strong. To take part is to protest; the act of participation is itself an act of empowerment, an embracing of new-found democratic rights. Several of Art Solution’s YouTube clips begin with the words of Stéphane Hessel, the German-born writer, diplomat, concentration camp-survivor and French Resistance fighter, who died in February this year: ‘Créer, c’est résister. Résister, c’est créer’ (To create is to resist. To resist is to create). It’s hard to imagine Hessel unhappy with where his message landed: with progressive artists using the medium of dance to awaken a desire for cultural and social rights in the midst of Tunisia’s postrevolutionary process. It’s a process in which battle lines are constantly being drawn up and then torn up, as the right to offend collides with the right to be offended. Ben Ali’s legendary repression was especially hard on Islamists, who were stuffed into underground prison cells, while, at street level, religious dress such as the headscarf was banned. Now Ennahda, an Islamist party, is leading a three-way government coalition, following Tunisia’s democratic elections in October 2011, the first since the country’s independence in 1956. There has also been a visible rise in public religiosity. 
Ennahda says it won’t impose Islamic values on society, but neither, apparently, will it stop hard-line religious groups that try to do so by forceful means. In June last year, ultrareligious Salafist demonstrators disrupted ‘Le Printemps des Arts’, an art exhibition held in La Marsa, a wealthy beach town just outside Tunis. The demonstrators sabotaged works they deemed offensive to Islam, scrawling on them ‘death to blasphemous artists’, before clashing violently with police on the streets. The work they judged most awful was one by Mohamed Ben Slama that spelt out ‘Allah’ in plastic ants crawling out of a child’s schoolbag. Equally offensive was ‘Let him who has not’, an installation by Nadia Jellasi of veiled mannequins emerging from a pile of stones. Clearly, the works were intended to be talking points, but the Salafi protesters wanted only to shut down the conversation. Now the two artists are facing prison sentences of up to five years for charges of ‘harming public order and morals’. ‘Inside, people know what art is, but after 23 years of censorship, they forgot’ The La Marsa incident is part of a wave of religiously motivated attacks on artists, intellectuals and journalists that culminated in February this year with the assassination of Chokri Belaid, a prominent opposition figure and vocal critic of Ennahda. In addition, a spate of charges have been made against artists such as the rapper Weld el-15, sentenced to two years in jail for releasing a rap song that calls the police ‘dogs’ (he has since gone into hiding). And two members of the art collective Zwewla (‘the poor’) were recently cleared of charges of a public order offence for putting graffiti on a wall in the city of Gabes that read: ‘The people want rights for the poor.’ Tunisia’s Islamist-majority government, once horribly repressed, appears to see no irony in letting religious extremists repress their enemies in turn. The government itself wants curtailments on freedom of speech written into the constitution, and it doesn’t want scathing political commentary hitting the nation’s airwaves, or emblazoning city walls. Meanwhile, laws curtailing public protest and expression that were a feature of the Ben Ali regime have yet to be revoked. Why is the government so frightened about all this popular, cultural expression? True, it’s spontaneous, unpredictable and chaotic — traits that authoritarians find innately worrisome. But there is something deeper going on in Tunisia, a country in the midst of defining the fundamental building blocks of its identity. As the debate continues to rage (and enrage), the trouble for the standard political parties in postrevolutionary Tunisia, including the religious ones, is that party politics is perceived as tired and empty. In stark contrast, cultural protests are imbued with the open, dynamic energy of a new generation embracing brand-new liberties. In the battle for hearts and minds, there’s little contest over which approach is more appealing. ‘We have a better way of talking. It touches more people, it causes more reflection,’ the photographer Rim Temimi told me. 
Temimi, whose pictures of the Tunisian revolution have been relayed around the world, sees the flourishing of protest art as part of an awakening, or reconnection, for Tunisians: ‘Inside, people know what art is, but after 23 years of censorship, they forgot.’ While politicians and local media are caught in a tussle between what is religious and what religion’s role in society ought to be, art has a habit of bringing the focus back onto things that actually matter to the public. It is striking that this word ‘awake’ keeps coming up with artists, set against Ennahda, an Islamist party whose name means ‘renaissance’, or rebirth. The artist and Nawaat blogger Selima Karoui, for example, told me: ‘The government doesn’t want artists to awaken people’s consciousness to another point of view, to writing about the poor, or the social reality of unemployment.’ Both movements wish to craft a new Tunisia, but the artists keep reiterating that the Islamist politicians are out of touch and have misjudged the mood of the nation. A self-styled day of herb-selling created the edifyingly absurdist scene of men and women strolling the streets of Tunis with clumps of parsley looped over their ears Certainly, the politicians did not expect all this cheeky irreverence, the artists’ humour and the situationist-style protests that would erupt in response to its decrees. Last year, the government was wrong-footed after it tired of endless protests in the capital outside the loathed Ministry of the Interior, and erected barricades of coiled barbed wire along the length of Habib Bourguiba Avenue. The street touched a raw nerve with Tunisians. As the focal point of the revolution, it became, not surprisingly, the natural rallying ground for ensuing protests. There was an immediate civic response to the barbed-wire barricades, quickly orchestrated over social media. On 18 April last year hordes of people turned up with books to read along the avenue, and nearby booksellers decided to distribute free volumes to passers-by. The silent mass action loudly stated: these streets are ours. A few months ago, the same avenue hosted another piece of theatre-as-protest, when citizens of the country whose uprising was labelled the ‘jasmine revolution’ staged the ‘parsley protests’. This was sparked off after an independent TV station that had been critical of Ennahda found itself in dire financial difficulties. Station representatives said that advertisers had been phoned and warned off advertising with the station. When the channel, Elhiwar Ettounsi (‘the dialogue’), called for donations on Facebook, they were flamed by Ennahda supporters claiming the drive was as futile as trying to sell parsley on the streets. Rising to the challenge, on 28 February the station took to the streets for a self-styled day of herb-selling. It raised around £47,000, and created the edifyingly absurdist scene of men and women strolling the streets of Tunis with clumps of parsley looped over their ears, like leafy forelocks. Everyone was surprised by the high turnout but, again, it just signalled the public appetite for pushback against perceived suppression (it was Ennahda party supporters who were accused of making the calls to put off the TV station’s advertisers). ‘People are awake,’ the photographer Rim Temimi told me, using that buzzword again to comment on the parsley protest. ‘It isn’t just artists and intellectuals: it is people, practising democracy and understanding politics. 
They know what they want.’ What Tunisians want, according to one graffiti artist, is for the country to steer away from its current, uncharacteristic and divisive arguments over religion. ‘We used to live all together, and the problem after the revolution is how we can stick together,’ said the ‘calligrafitist’ eL Seed from Gabes, who last year decorated one side of a minaret at his home city’s mosque with a verse from the Qur’an about tolerance. He said the work was about bringing people together and democratising art. Tunisia’s current tussles over religious identity are being furiously fanned and exaggerated by the local press and, even more so, the French media (former colonisers and habitual meddlers). But they are a distraction, according to eL Seed. ‘They are setting people against one another, to hide the real problems of unemployment, of the economy,’ he told me. If this has caused a crisis of confidence in Tunisia, eL Seed thinks art is the solution. ‘You get tough, you get pride and a sense of honour. You open a dialogue and ask real questions. True art really can awaken people and give them the feeling that they can do something with their minds and their hands.’ Here again is the truth-teller function of art — the thing that keeps it real, and forces the focus onto real problems, not the knee-jerk ones invented to perpetuate fractious infighting. It is what makes this new culture so worrisome to those in power. But it is also what keeps the culture-motor running, this task of bringing clarity. From those rappers at the World Social Forum to the streetdancers at the capital’s medina, it is about keeping expectations raised, free speech in flow, and those cornerstones of the revolution — ‘Work! Freedom! Dignity!’ — alive in people’s hearts. It is not just the taste of freedom that has awakened so much protest culture in Tunisia, it is the response to a clear call of duty. It is the graffiti artist’s job to annotate city walls with word of the forgotten poor, the rapper’s task to rhyme about police repression, and the dancer’s purpose to remind Tunisians that they own the streets. People have been silent for too long. Now it’s time to just keep talking. | Rachel Shabi | https://aeon.co//essays/first-came-revolution-then-rappers-graffiti-and-streetdance | |
Death | Is cryonics an ambulance into the future or the latest twist on our ancient fantasy of rebirth? | Boulder, Colorado, 1989: the young Norwegian’s phone rang. On the line was his mother in Oslo, where it was already evening; dark with a November chill. She needed to tell him that his beloved grandfather had gone to take a short nap. But he had not woken up: he had had another heart attack in his sleep. He was dead. The grandfather, Bredo Morstøl, had been a vital, vigorous man, a nature-lover who skied and painted well into old age. He had taken his grandson, Trygve Bauge, with him as soon as the boy was old enough, spending the summers fishing and hiking in the mountains, staying in the high-country cabin that Morstøl had built with his own hands. Not even an earlier heart attack had stopped this active, outdoor life. From his grandfather, Bauge had learnt independence and resilience. Neither man was inclined to give in to ill fortune. Now Morstøl himself could no longer fight back against the assaults of fate, but his grandson could. The young man persuaded his distraught mother that burial or cremation would be premature, acts of resignation. Bauge had not given up hope of saving his grandfather, even though he was many thousands of miles away. As a child he had read about the idea of suspended animation in a popular science book he had found in his grandfather’s library. Ever since, he had been fascinated by the idea that the terminally ill or even the newly dead could be preserved at super-low temperatures. Then they could simply wait until the day came when technology was advanced enough to repair a failed heart, or even reverse the ravages of ageing itself. What was death, anyway? So Bauge gave his mother detailed instructions to deep-freeze grandpa Morstøl. Then they just had to get him to America. The procedure for preserving whole human bodies by freezing is known as cryonics. Many believe it is an idea whose time has come. Their logic is simple. There are many diseases that cannot be cured by contemporary medicine, such as cancer or Alzheimer’s, so we cannot currently hope to delay death indefinitely. Yet scientific progress is rapid and even appears to be accelerating, to the extent that we might reasonably hope such diseases will find cures in the future. To have a shot at immortality, all we must do is reach that future. Like most visionaries, his ambition inhabits a middle space between the prophetic and the pathological For those who simply cannot stay alive long enough, freezing (more formally, ‘cryopreservation’) is a well-established way of delaying degeneration and keeping bodies fresh. Doing this to recently deceased humans — cryonics — is therefore an ambulance into the future, a way of transporting the terminally ill to a time and place where they might be healed. To those who are unconvinced that disease, old age and the damage done by freezing will ever be entirely curable, cryonicists such as Bauge say this: the odds of you rising again from the freezer might not be high, but they are surely better than the odds of you rising again from a small urn full of ashes. The logic of cryonics is therefore a little like Pascal’s Wager. The 17th-century French philosopher Blaise Pascal argued that we don’t know whether God exists but, if He does, a pious life can earn you infinite reward in heaven in return for a relatively small investment in this world. 
Similarly, cryonicists admit that we can’t know for sure that medical science will become as all-powerful as they hope, but a relatively small financial investment in cryonics will at least buy you a shot at immortality, whereas spending your spare money on a nicer car or a bigger house promises only certain death. Bauge did not know for sure that he could save his grandfather, but he thought he had a chance. In 1989 the only cryonics facilities were in the US. So he arranged for his grandfather to be flown across the Atlantic, in a steel casket packed with dry ice. Here he was transferred to one of the early cryonics companies, Trans Time in the San Francisco Bay Area, and immersed in liquid nitrogen at -196°C (-320°F), a temperature at which the natural processes of decay and putrefaction come to a halt. Bauge considered this a mere stopover; he had grander plans for rescuing his grandpa. The young Norwegian’s dream was to found his own cryonics facility, one that could survive whatever perils the future might hold. No one could say how long it would be before the technology would be invented that could repair and reanimate his grandpa, so Bauge had to ensure he was safe until the time came. Having explored many options, he settled for Colorado and the Rocky Mountains, mostly because their inland location would permit a generous 30-minute warning if a nuclear attack was launched from submarines off either of America’s coastlines — he had no idea that the Cold War was coming to an end just as he was finalising his plans. He bought a plot of land above the little town of Nederland, a few miles southwest of — and 3,000ft above — the city of Boulder, with spectacular views and a climate not unlike his native Norway. There he started building. Bauge was then and remains, at the age of 55, a visionary. Like most visionaries, his ambition inhabits a middle space between the prophetic and the pathological. On the one hand, his dream of a day when we will conquer death is rooted in the very real medical and scientific progress of previous centuries; on the other hand, his single-handed struggle with the Reaper feels like an inability to accept brute reality. Exactly the same dichotomy permeates the cryonics movement. Its advocates argue using data and logic, yet their practices are broadly perceived as cultish and macabre. Cryonicists consider the rest of us to be deluded, walking blindly towards death, whereas the rest of us see them as fantasists, a little disturbed and a little disturbing, clinging to the corpses of their loved ones like Catholic peasants to a saint’s severed finger. One group or the other must have it badly wrong. The question is, which? Bauge rigorously followed the logic of death-defiance. The main building he constructed was fireproof, bulletproof and designed to survive earthquakes and mudslides. Nothing would shift it from its outcrop on the windy mountainside. The structure was even designed to withstand nuclear attack (until Bauge decided to put in windows). Form was entirely sacrificed to function, creating a dull grey concrete block with peculiar angles, like something made by a clumsy toddler. In September 1993, Bauge deemed his facility, if not finished, at least habitable, and he and his mother moved in. But he hadn’t yet built the cutting-edge cryonic storage chambers, so grandpa required temporary digs. These were the early days of cryonics and arrangements were makeshift. 
The young man quickly threw up a shed behind the main house, where Morstøl’s steel casket could be entombed in dry ice. The following year he even took on a new client, the recently deceased Al Campbell from Chicago, who joined grandpa Morstøl in the ice box. It seemed that both the idea and the practice of cryonics were making progress. His ageing mother was now alone in the bunker, without mains electricity or running water, halfway up a snowy mountain with the corpses of her father and a stranger in her shed In fact, the idea of cryopreserving humans has been around for a few centuries. Mary Shelley wrote a short story about it called ‘Roger Dodsworth: The Reanimated Englishman’ in 1826. Yet the technology required to attempt it has only been available for a few decades. It was only when Bauge was growing up, in the post-war boom years of the 1950s and ’60s when the future seemed so exciting and so imminent, that cryonics was able to establish itself as a philosophy and as a movement. Its prophet was the American physicist Robert Ettinger. His book The Prospect of Immortality (1964) spelt out how ‘you and I, right now, have a chance to avoid permanent death’. In 1976 he went on to found the non-profit Cryonics Institute in Michigan, whose first frozen client was Ettinger’s own mother. The institute now safeguards 112 preserved human ‘patients’ accompanied by 91 pets. More than 500 paid-up members have reserved their spots in the deep-freeze, while the institute has also been joined by numerous similar organisations in the US and beyond. Alas, soon after Bauge took on his second cryonic client, his plans for a facility of his own experienced a serious setback: he was deported. Although he had been in the US for 14 years, he had neither a visa nor a Green Card. Indeed, he rejected the very idea of either document on principle, believing them violations of the basic right to freedom of movement. The US immigration service was unmoved by his appeal to ideology, and put him on a plane back to Oslo. His ageing mother was now alone in the bunker, without mains electricity or running water, halfway up a snowy mountain with the corpses of her father and a stranger in her shed. When the Nederland newspaper Mountain-Ear reported on Bauge’s deportation, his unhappy mother lamented that she didn’t know what would now happen to the bodies. The bodies! Suddenly the local reporter’s story had become a lot more interesting. Police and other town officials were soon examining the suspicious bunker with its cadavers in the cooler for evidence of ill deeds. Even after it was established that Bauge and family were not mass murderers, voices of disgust and outrage dominated the town council. Realising there were currently no legal grounds on which to stop makeshift cryonics centres, the town board passed a law prohibiting the storage of dead bodies on private property. Al Campbell’s family reclaimed his corpse, leaving Grandpa Morstøl alone in his icy tomb under threat of eviction. Cryonics facilities in other states in the US had been through similar wrangles: Ettinger’s Cryonics Institute is legally designated as a cemetery. This is perhaps unsurprising, as the patients within are decidedly dead by all current legal standards. They have to be, as the processes of cryopreservation would kill anyone as surely as a bullet to the heart. Deep-freezing a human body stops all circulation and brain activity and causes substantial tissue damage. 
The technology does not currently exist to revive such a body or repair such damage. Cryopreservation can therefore only be done to the legally dead, or it would be murder. For its advocates, this shows how completely society fails to understand what cryonics is trying to do. Central to the movement is the belief that many of those we bury are not really dead at all. Cryonicists point out that the technological advances of recent decades have forced us to redefine death. Once, death happened when the heart stopped beating, but then we learnt how to restart a heart. It is therefore now mostly defined as brain death. But we might also learn to restart a brain. In fact, this possibility is built into the definition: brain death is understood to be the irreversible end of brain activity. Of course, the question of what is or is not reversible is at least in part a function of our technology. What is permanent today might one day be curable. One leading cryonics advocate, Aschwin de Wolf, puts it to me like this: ‘If the original state of the brain can be inferred and restored, cryonics patients are not dead … Their identities are still with us in an information-theoretical sense.’ Someone whose brain has stopped working will ordinarily be buried or burned, or they will have their organs harvested. But de Wolf, who edits Cryonics magazine and conducts research on neural cryobiology, believes that a person cannot be considered utterly gone so long as his brain has not yet turned to mush. If his neural networks are still intact, even if no longer active, then there is a sense in which he is still with us. We are therefore burying those who, in his view, are not yet dead. If advocates such as Bauge and de Wolf could succeed in shifting the definition of death to apply only when the information stored in the brain is physically destroyed, the status of cryonics would change instantly. Instead of an eccentric form of burial, it would be a treatment. Those whose brains were still whole but no longer working would not be considered corpses, but would remain patients — and the only available means to stabilise their condition would be the deep freeze and a long wait for a future cure. Indeed, if we accept the so-called ‘information-theoretic’ definition of death, then cryonics becomes a moral imperative. The hearts and minds of millions of people every year cease to function due to age and disease, yet for a few hours the structure of their brains remains largely intact. At that point, further degeneration could be halted by cryopreservation, opening up at least the possibility of future rehabilitation. Ettinger, the founder of cryonics, believed it was only a matter of time before the courts made freezing compulsory. Half a century on we still permit millions every year to rot, condemning them to eternal destruction. But not Bredo Morstøl. His daughter, alone after Bauge’s deportation, was not about to concede defeat to local bureaucrats. She roused town residents to support her in opposing this illiberal intervention in her family affairs, winning considerable support in a region known for its sympathy for the radical individualist. The council conceded a ‘grandfather clause’: although the keeping of corpses was generally forbidden, bodies stored on private property prior to the ordinance were allowed to remain. Grandpa Morstøl was, in legal parlance, grandfathered in. ‘The will to live is the very essence of life,’ Bauge told me on the phone from his new home in Oslo. 
‘If this is thwarted, we cannot be happy, we cannot be anything.’ The precondition for all ethics, for all action of any kind, is first and foremost that we stay alive. ‘This is the highest goal for any individual,’ he said. Bauge believes we are morally obliged to take responsibility for our own flourishing and to continue to flourish for as long as possible. He has a clear hierarchy of possibilities for how this continuation might be achieved. Toward the bottom of the list is having children, which preserves a mere 50 per cent of one’s genetic material. Of course, one could have lots of children, so that more of one’s DNA was passed on, but in any given child it would be diluted and shuffled so that none of them would preserve a person’s distinct genetic individuality. Trygve Bauge: ‘A day spent ice bathing and in the sauna is a day when you do not age’. Photo by Morten Holm / ScanpixThat could, however, be achieved by cloning, which is next on Bauge’s list as it would preserve 100 per cent of a person’s genes. This is the fallback option planned for his grandfather. In contrast to the Cryonics Institute, Bauge admits that his deep-frozen patient might just be dead. ‘Though it’s not black and white,’ he added; ‘to be clearly dead you have to be warm and dead — if someone is frozen, we can’t say for sure if they’re irrecoverably gone’. Still, he concedes that the freezing of his grandfather was ‘a bit of an experiment’, and not conducted in ideal conditions. The degree of deterioration and the amount of damage incurred by preservation make a full re-animation of the once virile outdoorsman ambitious to say the least. But keeping him on ice will at least preserve his genetic material so that it can be cloned, once again to manifest in manly form. Bauge envisages a world in which we are engineered from birth to be freezable, so that if the worst should happen, we can immediately be supercooled to maximise our chances of successful repair Yet Bauge concedes that this is still not real survival: cloning will not preserve someone’s memories; their personality as shaped not just by genes but also experience; their unique consciousness. This is what death claims when it eats away your brain. Unless, of course, you can be properly cryopreserved. Bauge believes this will soon be possible. He sets his hopes not only on improving the freezing techniques, but also on improving us humans. Some creatures, such as tardigrades (also known as water bears), tiny water-dwelling creatures of extraordinary hardiness, can already survive being frozen. We must learn their genetic secrets and re-engineer our own DNA so we can perform the same trick. Bauge envisages a world in which we are engineered from birth to be freezable, so that if the worst should happen — being hit by a bus, for example — we can immediately be supercooled to maximise our chances of successful repair. But top of Bauge’s list as the best option of all is simply to stay alive. One might think that those who have a backup plan (he himself is signed up with the Cryonics Institute founded by Ettinger) would be less worried about a piano falling on their heads than those of us without a Plan B. But in fact the reverse is the case: those signing up for cryopreservation services are also trying very hard to ensure they never need them. De Wolf told me that ‘like most people with cryonics arrangements’ he has ‘a strong interest in life extension and rejuvenation research’. So he tries to eat healthily, exercise and avoid stress. 
Bauge follows a similar regime with a diet involving a large amount of bean sprouts (‘They are young organisms,’ he explained, ‘and therefore have vitamins and hormones that an ageing body cannot produce.’) And he stays fit through a hobby entirely in keeping with his interest in cryonics: ice bathing. ‘I intend to be frozen and so I’m preparing for it!’ he joked. He claims to have set the world record for ice bathing in open water (as opposed to in a tub of ice, which he argues is much easier) by staying in for one hour and four minutes, and he is proud of having founded a number of ice-bathing clubs around the world. ‘A day spent ice bathing and in the sauna,’ he said, ‘is a day when you do not age’. Bauge’s plans to stay alive are not limited to diet and exercise: they extend to major engineering projects of which his Colorado bunker is only a modest prototype. He is currently designing medical parks with research, rejuvenation and cryonics facilities that would be protected against the existential threats that loom over our species, though he has yet to find backers. He points out that Norway has many miles of road tunnels under high mountains. If his parks were built in one of these, they would be impervious to pandemics, pollution, nuclear war and even meteor strikes. It is in such time capsules that far-sighted survivors such as Bauge could sleep out Armageddon and one day awaken to a new dawn. This mix of apocalyptic and utopian thinking characterises the extreme life-extension movement in all its manifestations — and also betrays its true roots. Cryonics is a kind of eschatology, a vision of the End Time, in which earth-shattering events will precede the transcendence of mortality for the chosen few. This is a myth with a millennia-long history. First will come an End Time of great tribulation and final battles, but this is only a prelude to the creation of paradise on earth Central to such views is the belief that we are living in a time uniquely important in world history; a time when great forces clash with the potential for both utter destruction and a new beginning. Faith in cryonics does not depend on a coming apocalypse, but its advocates tend to be deeply concerned about one. For Bauge, who grew up during the Cold War, nuclear Armageddon remains the catastrophe of choice. For younger cryonicists, it might be climate change or a new epidemic, or — for the true digital native — the technological singularity, the moment when we create an artificial super-intelligence that might or might not be benign. The promise of cryonics is that those who succeed in surviving or averting this catastrophe will find themselves transformed. In their new world, science will have solved the problem of death. The frozen bodies will all be thawed, their diseases cured. Ettinger described the reanimated man thus: After awakening, he may already be again young and virile, having been rejuvenated while unconscious … In any case, he will have the physique of a Charles Atlas if he wants it, and his weary and faded wife, if she chooses, may rival Miss Universe. Much more important, they will be gradually improved in mentality and personality … the future will reveal a wonderful world indeed, a vista to excite the mind and thrill the heart.Those familiar with the Christian New Testament will recognise this vision immediately. First will come an End Time of great tribulation and final battles, but this is only a prelude to the creation of paradise on earth. 
This paradise will be enjoyed by the faithful dead, reanimated and restored. St Paul described the resurrected body thus: ‘It is sown in corruption; it is raised in incorruption: it is sown in dishonour; it is raised in glory: it is sown in weakness; it is raised in power … For this corruptible must put on incorruption, and this mortal must put on immortality.’ (1 Corinthians 15) The common vision is one of transcendence and transformation, the common themes those of fear and hope. This vision reflects our deep desire to leave behind the parlous state of mortality, in which some fatal incident might occur any day and one day surely will. We do not cope well with such existential uncertainty, and so we are more than willing to accept stories of how, very soon, everything will be settled and the problem of death will be solved once and for all. Is it therefore mad for Trygve Bauge to believe his grandpa might live again? Or for Aschwin de Wolf to expect that Robert Ettinger and his mother will one day rise from their icy tombs? If it is, then it is a kind of madness to which most humans in history have succumbed. It is no madder than the belief that one Jesus of Nazareth spent three days dead only to be raised and once again walk among his followers. Nor is it madder than those who for thousands of years worshipped Osiris in the belief he could reawaken the fallen, or those who underwent the rites of the mystery cults that promised an exit pass for the underworld. But it is not much less mad either. Cryonics wears a cloak of science and reason, but at its heart is faith. Consider for a moment its assumptions: that technological progress will continue unabated or even accelerate; that all diseases are curable and ageing itself reversible; that the people of the distant future would choose to reawaken all these frozen corpses; that the world could then sustain a population boosted by these reanimated ancestors. These views are themselves not hypotheses from within science, but part of the ideology that supports it: the belief in progress, the transcendence of nature and the possibility of utopia. This is a belief that the Enlightenment project of science owes to apocalyptic religion. When cryonicists and other techno-utopians argue that their claims are entirely within the Enlightenment tradition, they are therefore correct. These claims really are only extensions of the pervasive belief that we can use science and reason to become masters of our destiny. After all, science and reason really have brought with them progress: in the developed world, we do live longer and more comfortably than most of our forebears. This is what explains the paradox of cryonics, the strange way that it appears at the same time so rational and so outlandish. Its adherents can argue very reasonably because their premises of technological progress and increasing prosperity are those on which our whole society is based. Yet when we extrapolate from these premises as the cryonicists do, they are revealed for what they are: articles of faith — expressions of hope, not logic. The answer to the question of whether cryonics is reasonable or fantastical is therefore: both. Meanwhile, grandpa Morstøl still waits in his steel tomb, though not entirely alone. Bauge’s mother long ago returned to Oslo, leaving the monthly job of topping up the dry ice to a contractor, now known locally as ‘the Iceman’. 
And in 2001 the town of Nederland fully embraced its unique part in the history of cryonics, launching an annual festival: the Frozen Dead Guy Days. It continues to flourish, now in its eleventh year; a kind of Halloween Mardi Gras in which Grandpa Morstøl-lookalike competitions vie for attention with cryonics workshops and a hearse parade. But the biggest draw is the sightseeing trip to Bauge’s bunker and his — now somewhat revamped — cryopreservation shed, where Morstøl awaits his resurrection. The people of Nederland seem to revel in the contradiction frozen in their midst. They mock cryonics while marking it, and honour Bauge and his grandpa both as heroes and as fools. They make a carnival of mortality, celebrating the paradox that our belief that we can thwart death is both entirely reasonable and entirely ridiculous. | Stephen Cave | https://aeon.co//essays/is-it-rational-to-think-we-can-cheat-death-with-cryonics | |
Ecology and environmental sciences | Environmental activists have been jailed, persecuted and spied on. What fuels such an absolute commitment to a cause? | One day last summer, a young woman looked down on a small crowd of vocal supporters and police officers from her hammock or ‘sky pod’, 60ft above an old logging road in Moshannon State Forest in Pennsylvania. The pod was tied to trees and anchored to a blockade across the road, so that anyone trying to move the blockade would release her in a dangerous, perhaps fatal, fall to the forest floor. Another activist on the ground had locked his neck to one of the lines anchoring her pod. It was a familiar sight from protests against the logging of old-growth forests, but here the target was different. Workers who arrived for their shift that Sunday morning could not get past the blockades to attend to a 70ft hydraulic fracturing drill rig used to extract natural gas from the rock formations beneath the forest floor. ‘You’re adults, but you’re acting like children,’ shouted one of the officers. They had been called to the scene by EQT, the natural gas company that had leased mineral rights to the gas-rich Marcellus Shale that lies beneath a large portion of several northeastern states, including Pennsylvania, Ohio and New York. ‘We are peaceful protesters,’ responded one of the activists. Other officers stood by with assault rifles, waiting to see what would happen. Later that day, a basket crane removed two tree sitters from the blockade, and three people were arrested for disorderly conduct. Nearly 100 protesters and supporters associated with the 33-year-old radical environmentalist group Earth First! shut down the drilling site for 12 hours. According to Earth First!’s website, this was the first shutdown of an active hydrofracking site in the United States. With the rise of fracking, protest techniques that were developed in the endangered redwood groves and old-growth forests of the west coast of the US have been brought to the heart of the continent. As with their opposition to the logging of old-growth forests, radical environmentalists here put their lives — and their bodies — in the line of fire in order to protect the well-being of forests and waterways. Law enforcement agencies, news media, local communities and other environmentalists in the US have mixed reactions to acts of civil disobedience and sabotage. The government’s Federal Bureau of Investigation (FBI) has made radical environmentalists a priority: putting them under surveillance and sending undercover agents to disrupt activist communities. In 2005, John Lewis, an FBI deputy assistant director and top official in charge of domestic terrorism, declared to CNN news that ‘The number-one domestic terrorism threat is the eco-terrorism, animal-rights movement.’ In the past 10 years, environmental and animal rights activists have been aggressively pursued by the FBI, according to the Washington journalist Will Potter in his book Green Is the New Red (2011). The personal consequences for activists can be devastating. Jeff ‘Free’ Luers, a renowned eco-activist, was a Pagan teenager who talked to trees. When hanging around with his Pagan friends, he recalls: ‘We saw the underlying spirit in things. I became very in tune with the energy around me … the hardest part about being a Pagan is overcoming all you have been taught. 
I mean, people think I’m crazy when I say I can talk to some trees … And yet it is totally acceptable to talk and pray to a totally invisible god.’ Luers’s early relationships with trees helped draw him into protests against logging, excessive consumption and global warming. At the height of his activism, in the early morning hours of 16 June 2000, Luers and his friend, Craig ‘Critter’ Marshall, crept up to a car dealership in Eugene, Oregon. After checking to make sure no people were in the area, they started a fire that inflicted $40,000-worth of damage to three Chevrolet SUVs. Later that morning, they were arrested. Luers was convicted of 11 felony counts, which included arson and attempted arson, and was sentenced to 22 years and eight months in prison. After much legal wrangling, his sentence was overturned, reduced to 10 years. He was released in 2010, with his idealism remarkably intact. I asked Momma Earth what it felt like to have humanity forget so much, and attack her every day like a cancer If government agencies regard eco-activists as a dangerous threat to business and social order, supporters praise them as warriors and heroes who are risking their lives or serving prison sentences for a just and noble cause. Radical activists believe they are freedom fighters at the forefront of a revolution in the relationship between human beings and other species, bestowing on trees and non-human animals the kind of inherent rights usually reserved for humans. Eco-activists tend to reverse accepted definitions of violence. They see property destruction as an act of love, and socially accepted activities such as building a ski resort or selling cars as violent acts against nature. I have been talking and corresponding with eco-activists over the past five years in order to better understand what motivates them. In a letter written to me from prison, Rod Coronado, a well-known Native American eco-activist who served eight months in jail for sabotaging a mountain lion hunt, conveys his sense of loss: I’ve seen some of the last great whales slaughtered in Iceland, herds of pilot whales butchered … Old-growth forests being chainsawed, even though there are less than five per cent left, pretty much the worst crimes of violence against nature I have witnessed. I have acted out of rage, sorrow and love, I have held signs, written letters, signed petitions, sabotaged machinery, even burned down buildings. Out of rage and sorrow, desperation and urgency, environmentalists choose direct action, sometimes crossing the line to sabotage heavy machinery or set fire to buildings that symbolise for them global warming, pollution, habitat destruction, and mass extinction. Those outside eco-activist communities, such as the police officers called to the anti-fracking demonstration in Pennsylvania, cannot fathom why anyone would risk injury or death for such a cause. Plenty of Americans believe nature is worth preserving, but not at the expense of human interests. But some young Americans — and most direct-action activists are young — continue to join radical environmental groups and participate in acts of civil disobedience, even at the risk of long prison sentences for actions that include sabotage. Why do these activists feel compelled to commit dangerous and often illegal actions at such risk? How do such extreme commitments come about? 
Although Luers has been depicted as an ‘eco-terrorist leader’ in the news media and was put on special watch in prison for being politically dangerous, in a statement read at his sentencing in 2001, he insisted that his actions were not violent but were born out of love: ‘It cannot be said that I’m unfeeling or uncaring. My heart is filled with love and compassion. I fight to protect life, all life, not to take it.’ Luers’s determination to ‘fight to protect life’ dates back to an earlier experience. In 1998, at the age of 19, he travelled to the Willamette National Forest in Oregon to participate in a campaign to save an old-growth forest of Douglas fir, Western hemlock, and red cedar. For him, these trees were sacred and divine. ‘Standing before them is a humbling experience,’ he explained, ‘like standing before a god or goddess.’ Each time a chain saw cut through those trees, I felt it cut through me as well. It was like watching my family being killed In ‘How I Became an Eco-Warrior’, written from prison in 2003, he describes joining the Willamette Forest tree-sit. He climbed into a tree he called ‘Happy’ and watched helplessly as nearby trees fell, while activists tried to save the ones they could and delay the loggers’ progress. He recalls sitting back against the tree and meditating: I felt the roots of Happy like they were my own. I breathed the air like it was a part of me. I felt connected to everything around me. I reached out to Momma Earth and I felt her take my hand … I asked Her what it felt like to have humanity forget so much, and attack her every day like a cancer. At that moment, Luers began sweating profusely as the boundary between his body and the tree’s body dissolved, and the Earth’s grief became his grief: I felt the most severe pain all over, spasms wracked my body. Tears ran down my face. I could feel every factory dumping toxins into the air, water, and land. I could feel every strip-mine, every clear-cut, every toxic dump and nuclear waste site. This experience marked his conversion to radical activism: The feeling only lasted a second, but it will stay with me for the rest of my life. My life changed that day. I made a vow to give my life to the struggle for freedom and liberation, for all life, human, animal, and earth. It is common for eco-activists to personify plants and animals as brothers and sisters, and mourn fallen trees as beloved family members. They exemplify a love for other species that the American biologist Edward O Wilson has called ‘biophilia’, an idea popularised in his 1984 book of the same name and described as an innate ‘urge to affiliate with other forms of life’. The anthropologist Kay Milton, professor at Queen’s University, Belfast, studies the ecology of emotions and argues that naming trees and animals is a way to personify nature, invoking an impulse to protect those special beings. Some individual trees have become almost as famous as their protectors: Julia Butterfly Hill became a media sensation and eco-heroine when she spent two years between 1997 and 1999 on a platform in a 1,500-year-old redwood tree named Luna 180ft above the floor of the Headwaters Forest in northern California. As trees fell around Luna, Hill — like Luers — felt the destruction in her own body. In an essay entitled ‘Committed Love in Action’ (2002), she wrote: Each time a chain saw cut through those trees, I felt it cut through me as well. It was like watching my family being killed. 
And just as we lose a part of ourselves with the passing of a family member or friends, so did I lose a part of myself with each fallen tree. When news outlets circulated images of Hill hugging Luna, the tree that was otherwise anonymous in a distant forest appeared in a shared, public space as a person with rights and feelings When activists empathise with other species, they experience what the American biologist Donna Haraway has described as the bond of ‘companion species’. In her book When Species Meet (2008), Haraway quotes the anthropologist Anna Tsing, challenging us to re-evaluate our relationships with other beings, whether dogs or mushrooms: ‘Human nature is an interspecies relationship,’ she insists, because life consists of ‘knots’ of species ‘co-shaping each other’. Activists such as Hill who sleep and dream in trees for long periods of time without coming down, ‘sharing cells’ as Haraway describes it, discover that their lives are inextricably connected with the lives of other organisms. For some activists, this sense of intimacy and reciprocity becomes a growing awareness of a sacred presence in nature. Although activists from Christian, Jewish and other religious backgrounds participate in protests too, most US activists have rejected organised religion, or follow Pagan, Native American and other ‘Earth-centred’ traditions. They see monotheistic traditions as separating humans from nature, echoing the famous critique by the American historian Lynn White Jr, ‘The Historical Roots of Our Ecological Crisis’ (1967), in which he blames Christianity and its influence on Western science for our assumption of dominion over other species. Activists reject the Christian belief of their parents that the divine is outside the natural world, and instead locate it within the world. Christopher ‘Dirt’ McIntosh, who set fire to a McDonald’s restaurant in Seattle in 2003 on behalf of extremist environmental groups, for which he was sentenced to eight years in prison, described to me the awe he felt towards the world as a divine body during the many days he spent outdoors as a child and teenager. He explicitly distances himself from ‘the Jews, Muslims, Christians [who] see their God as an entity upon some throne in the heavens, instead I see the Earth Mother in all things … Her “Body” making up everything in this infinite universe … so the only real law I follow is treating all things natural [as] sacred.’ While in prison, it appears that McIntosh disavowed his commitment to the radical environmentalist movement, yet he continues to stand by his early connection to Earth’s body. Childhood memories are bittersweet for many activists Influenced by the work of the Swiss developmental psychologist Jean Piaget in the 1920s, we might expect the kind of animistic thinking that characterises the childhood stories of activists and their ongoing personifications of nature in adulthood to give way to a ‘mature’ view of a disenchanted world. Piaget borrowed the concept of animism from the English anthropologist Edward Burnett Tylor, who developed the late-19th-century cultural theory about the origin of religion, in which animism is seen as a primitive stage of human development. But Pagan teenagers such as Luers and McIntosh who carried their love of the Earth and trees as sacred beings into their actions in later years, reject this theory. 
As McIntosh sees it, most of his peers became increasingly disconnected from nature as they grew up: their connections to the world of ‘magic’ sever little by little — they may and do encounter nature, but they don’t any longer feel interconnected with it and it becomes abstract to them and their alienation keeps them from seeing it all as they did when they were children. Although much critiqued and debated, Piaget’s and related views still have a hold over many of us, views that were clearly expressed by the officer who scolded activists for ‘acting like children’. Perhaps they are, indeed, acting like children — children, that is, who have not lost a precious connection to nature. When eco-activists climb into platforms hundreds of feet up in redwood canopies and occupy fracking sites, they draw on memories of childhood places and experiences, as well as on skills and strategies learnt in action camps and workshops. Ritualised actions such as creating sacred spaces at forest action camps, sitting in trees with nooses around their necks, and chaining themselves to blockades to prevent logging, shape and reinforce these activists’ memories of past relationships to nature. Childhood memories are bittersweet for many activists: on the one hand they remember nature as a special place that they explored and in which they developed close relationships to trees or animals. On the other, many recall the disturbed places of childhood memory. The first relationship is one of love and attachment, the second of grief and loss. If a child’s sense of self is extended into a landscape made sacred, then the loss of that landscape or its radical transformation into a clear-cut slope or a parking lot is a cause for grief. If particular trees in the landscape have been named, have become friends, then the loss is even greater. When activists explain what drove them to break the law, they often describe the destruction of remembered places or their shock at seeing a housing development where there was once a field. In 2006 a jury in New Jersey found Joshua Harper and six others guilty of using their website to incite attacks on those who did business with Huntington Life Sciences, Europe’s largest animal-testing lab in Cambridge, UK. Harper was sentenced to three years for ‘conspiracy to violate the Animal Enterprise Protection Act’ for his role in the international animal rights organisation SHAC (Stop Huntington Animal Cruelty). Harper told me that when he was nine, his family moved from San Diego, California to Eugene, Oregon. He explained: Seeing the juxtaposition between a sprawling hideous southern California city and a small tree-filled Oregon city was shocking to me… As I got older some of the places I had fallen in love with began getting paved over, clear-cut, or polluted. That was all the motivation I needed to adopt a militant outlook on the need for wilderness defence or more appropriately, offence. Harper’s experience of seeing the landscapes of childhood — which once centred and gave meaning to his world — violated by development is a common theme in radical environmentalists’ accounts of their conversion to activism. Maia Oldham, who became involved with Earth First! after graduating from high school, recalls a similar childhood experience of familiar nature destroyed. She explained to me that she grew up in a small town in southern California where she spent most of her free time making water holes for animals in the desert. 
During high school, Oldham became troubled by what she saw happening: more and more tract homes being built in the open spaces she had loved as a child, and where she had called wild animals her friends. So one night while she was still in school, Oldham and her boyfriend crept into a construction site for new homes. The teen saboteurs pulled up survey stakes and put corn syrup into the gas tanks of bulldozers on the site, in order to delay the destruction. Their notion of ‘home’ included the forest and its creatures, as well as the community of like-minded others sharing the site with them Beloved landscapes and memories are ever-present in the stories activists tell about how they converted to environmentalism. In these accounts, the activists imagine both rupture and continuity with the past. Psychologists such as the Harvard professor Daniel Schacter have shown that memories are not a video replay of the past, but shaped by the contours of the present context in which we are doing the remembering. Tree-sits and action camps in the woods, during which activists touch tree bark for hours, sleep under the stars, and listen to the winged creatures and mammals of the forest conjure up particular images and relationships from the past. This past becomes alive with the memories of other times when the smell and feel of trees and earth were vivid and valuable to the remembered childhood self. At protests, this remembered eco-child from the activists’ pasts is invited to participate in the present. At the same time, the present illuminates the past in particular ways, shining light on those scenes in which the ecological child is front and centre. In activists’ accounts of their childhoods, the child’s emerging sense of self is shaped by relationships with other species. The American poet Gary Snyder observes in his book of essays, The Practice of the Wild (1990), that: ‘The childhood landscape is learned on foot, and a map is inscribed in the mind — trails and pathways and groves — going out farther and wider.’ So the activists’ maps of the world, extending out from the self, are not just human-centred, although human friends and family can also feature. Rod Coronado traces the roots of his activism to camping and fishing trips with his family that took him far away from the Californian suburbs of San Jose where he grew up. According to his biographer, Dean Kuipers, childhood trips to the woods moulded Coronado’s ‘lifelong identification of self with nature’. Children actively shape the landscape as they become intimate with it, modelling reciprocal relationships they will revisit later in their lives. A week before the anti-fracking protest, I was driving down meandering roads in northeastern Pennsylvania, where depressed rural communities are divided over how to respond to the economic opportunities that they believe fracking might bring them. In these struggling communities, concerns about watershed pollution and suspicion of outside corporations sit uncomfortably with the desire for economic self-sufficiency. ‘Home’ read the banner that greeted me at the entrance to Earth First!’s Round River Rendezvous, a weeklong gathering of activists at a hard-to-find site deep in the Allegheny National Forest, 140 miles northeast of Pittsburgh. 
As I joined activists from around the US sharing strategies, learning climbing techniques and studying legal advice, it became clear that their notion of ‘home’ included the forest and its creatures, as well as the community of like-minded others sharing the site with them. Workshops on a wide range of topics at the Rendezvous showed a multipronged concern with injustice and violence: ‘Fighting Male Supremacy’, ‘Female-Identified, Queer and Trans Listening Circles’, ‘Challenging Racism in Our Movements’, ‘Edible Plants’, ‘Medical Misogyny in the Catholic Church’, ‘Mountaintop Removal’. Over the course of the week in the forest, activists created a summer camp atmosphere with winding paths through the woods, colourful banners draped over their tents, songs and music around campfires. The primitive, makeshift gathering site with its carefully planned temporary latrines, ropes strung in the branches for a climbing area and community kitchen lent an aura of playfulness to the serious business of organising the anti-fracking protest. Tree-climbing, a puppet show, a talent show and other activities were a constant reminder of the strange juxtaposition of scenes from a childhood campout and such disturbing adult topics as rape and ecological destruction. As the Rendezvous drew to a close, and a caravan of cars drove slowly away from our temporary home in the forest towards the anti-fracking protest, I thought of the irony of the young activists’ lives. They travel from one protest to another, leading a homeless existence in constant tension with the deep emotional connections to place of their childhood memories, only to re-create the intimacy, for a time, at the Rendezvous. For them, this temporary home was a utopian vision of a future that we are far from achieving any time soon, a vision fuelled by their sense of loss and grief. But tree-sits and other forms of eco-activism ask us to take seriously this vision of a world that childhood memory offers us, a world in which humans and other species might live at home together, united by a sense of kinship and community. | Sarah Pike | https://aeon.co//essays/eco-activists-speak-about-their-conversion-experiences | |
History of ideas | He is the dramatic thunderstorm at the heart of philosophy and his provocation is more valuable than ever | I fell for Søren Kierkegaard as a teenager, and he has accompanied me on my intellectual travels ever since, not so much side by side as always a few steps ahead or lurking out of sight just behind me. Perhaps that’s because he does not mix well with the other companions I’ve kept. I studied in the Anglo-American analytic tradition of philosophy, where the literary flourishes and wilful paradoxes of continental existentialists are viewed with anything from suspicion to outright disdain. In Paris, Roland Barthes might have proclaimed the death of the author, but in London the philosopher had been lifeless for years, as anonymous as possible so that the arguments could speak for themselves. Discovering that your childhood idols are now virtually ancient is usually a disturbing reminder of your own mortality. But for me, realising that 5th May 2013 marks the 200th anniversary of Søren Kierkegaard’s birth was more of a reminder of his immortality. It’s a strange word to use for a thinker who lived with a presentiment of his own death and didn’t reach his 43rd birthday. Kierkegaard was the master of irony and paradox before both became debased by careless overuse. He was an existentialist a century before Jean-Paul Sarte, more rigorously post-modern than postmodernism, and a theist whose attacks on religion bit far deeper than many of those of today’s new atheists. Kierkegaard is not so much a thinker for our time but a timeless thinker, whose work is pertinent for all ages yet destined to be fully attuned to none. It’s easy enough to see why I fell in love with Kierkegaard. Before years of academic training does its work of desiccation, young men and women are drawn to philosophy and the humanities by the excitement of ideas and new horizons of understanding. This youthful zeal, however, is often slapped down by mature sobriety. I remember dipping into the tiny philosophy section of my school library, for example, and finding Stephan Körner’s 1955 Pelican introduction to Kant. I couldn’t make head nor tail of it. Strangely, this did not put me off philosophy, the idea of which remained more alluring than the little bit of reality I had encountered. Kierkegaard was not so much an oasis in this desert as a dramatic, torrential thunderstorm at the heart of it. Discovering him as a 17-year-old suddenly made philosophy and religion human and exciting, not arid and abstract. In part that’s because he was a complex personality with a tumultuous biography. Even his name emanates romantic darkness. ‘Søren’ is the Danish version of the Latin severus, meaning ‘severe’, ‘serious’ or ‘strict’, while ‘Kierkegaard’ means churchyard, with its traditional associations of the graveyard. He knew intense love, and was engaged to Regine Olsen, whom he describes in his journals as ‘sovereign queen of my heart’. Yet in 1841, after four years of courtship, he called the engagement off, apparently because he did not believe he could give the marriage the commitment it deserved. He took love, God and philosophy so seriously that he did not see how he could allow himself all three. He was a romantic iconoclast, who lived fast and died young, but on a rollercoaster of words and ideas rather than sex and booze. During the 1840s, books poured from his pen. In 1843 alone, he published three masterpieces, Either/Or, Fear and Trembling, and Repetition. 
Kierkegaard achieved the necessary condition of any great romantic intellectual figure, which is rejection by his own time and society All of this, however, was under the shadow of a deep melancholy. Five of his seven siblings died, three in the space of the same two years that claimed his mother. These tragedies fuelled the bleak religiosity of his father, who believed he had been punished for cursing God on a Jutland heath for His apparent indifference to the hard, wretched life of the young sheep farmer. When his father told Søren about this, it seems that the son adopted the curse, along with his father’s youthful sins. Yet alongside this melancholy was a mischievous, satirical wit. Kierkegaard was a scathing critic of the Denmark of his time, and he paid the price when in 1846 The Corsair, a satirical paper, launched a series of character attacks on him, ridiculing his gait (he had a badly curved spine) and his rasping voice. Kierkegaard achieved the necessary condition of any great romantic intellectual figure, which is rejection by his own time and society. His biographer, Walter Lowrie, goes so far as to suggest that he was single-handedly responsible for the decline of Søren as a popular first name. Such was the ridicule cast upon him that Danish parents would tell their children ‘don’t be a Søren’. Today, Sorensen — son of Søren — is still the eighth most common surname in Denmark, while as a first name Søren itself doesn’t even make the top 50. It is as though Britain were full of Johnsons but no Johns. All this was more than enough to draw my open but largely empty 17-year-old mind to him. In the battle for intellectual affections, how could the likes of A J Ayer’s Language, Truth and Logic (1936) or Willard Van Orman Quine’s Word and Object (1960) compete with Kierkegaard’s The Sickness Unto Death (1849) or Stages on Life’s Way (1845)? What is more interesting, however, is why the intellectual affair lasted even as I became a (hopefully) less impressionable, older atheist. If Kierkegaard is your benchmark, then you judge any philosophy not just on the basis of how cogent its arguments are, but on whether it speaks to the fundamental needs of human beings trying to make sense of the world. Philosophy prides itself on challenging all assumptions but, oddly enough, in the 20th century it forgot to question why it asked the questions it did. Problems were simply inherited from previous generations and treated as puzzles to be solved. Kierkegaard is inoculation against such empty scholasticism. As he put it in his journal in 1835: What would be the use of discovering so-called objective truth, of working through all the systems of philosophy and of being able, if required, to review them all and show up the inconsistencies within each system … what good would it do me if truth stood before me, cold and naked, not caring whether I recognised her or not, and producing in me a shudder of fear rather than a trusting devotion?When, for example, I became fascinated by the philosophical problem of personal identity, I also became dismayed by the unwillingness or inability of many writers on the subject to address the question of just why the problem should concern us at all. Rather than being an existential problem, it often became simply a logical or metaphysical one, a technical exercise in specifying the necessary and sufficient conditions for identifying one person as the same object at two different points in time. 
So even as I worked on a PhD on the subject, located within the Anglo-American analytic tradition, I sneaked Kierkegaard in through the back door. For me, Kierkegaard defined the problem more clearly than anyone else. Human beings are caught, he said, between two modes or ‘spheres’ of existence. The ‘aesthetic’ is the world of immediacy, of here and now. The ‘ethical’ is the transcendent, eternal world. We can’t live in both, but neither fulfils all our needs since ‘the self is composed of infinitude and finitude’, a perhaps hyperbolic way of saying that we exist across time, in the past and future, but we are also inescapably trapped in the present moment. The limitations of the ‘ethical’ are perhaps most obvious to the modern mind. The life of eternity is just an illusion, for we are all-too mortal, flesh-and-blood creatures. To believe we belong there is to live in denial of our animality. So the world has increasingly embraced the ‘aesthetic’. But this fails to satisfy us, too. If the moment is all we have, then all we can do is pursue pleasurable moments, ones that dissolve as swiftly as they appear, leaving us always running on empty, grasping at fleeting experiences that pass. The materialistic world offers innumerable opportunities for instant gratification without enduring satisfaction and so life becomes a series of diversions. No wonder there is still so much vague spiritual yearning in the West: people long for the ethical but cannot see beyond the aesthetic. In evocative aphorisms, Kierkegaard captured this sense of being lost, whichever world we choose: ‘Infinitude’s despair is to lack finitude, finitude’s despair is to lack infinitude.’ Kierkegaard thus defined what I take to be the central puzzle of human existence: how to live in such a way that does justice both to our aesthetic and our ethical natures. Kierkegaard showed that taking religion seriously is compatible with being against religion in almost all its actual forms His solution to this paradox was to embrace it — too eagerly in my view. He thought that the figure of Christ — a man-made God, wholly finite and wholly infinite at the same time — was the only way to make sense of the human condition, not because it explains away life’s central paradox but because it embodies it. To become a Christian requires a ‘leap of faith’ without the safety net of reason or evidence. Kierkegaard’s greatest illustration of this is his retelling of the story of Abraham and Isaac in Fear and Trembling (1843). Abraham is often held up as a paradigm of faith because he trusted God so much he was prepared to sacrifice his only son on his command. Kierkegaard makes us realise that Abraham acted on faith not because he obeyed a difficult order but because lifting the knife over his son defied all morality and reason. No reasonable man would have done what Abraham did. If this was a test, then surely the way to pass was to show God that you would not commit murder on command, even if that risked inviting divine wrath. If you heard God’s voice commanding you to kill, surely it would be more rational to conclude you were insane or tricked by demons than it would to follow the order. So when Abraham took his leap of faith, he took leave of reason and morality. How insipid the modern version of faith appears in comparison. Religious apologists today might mumble about the power of faith and the limits of reason, yet they are the first to protest when it is suggested that faith and reason might be in tension. 
Far from seeing religious faith as a special, bold kind of trust, religious apologists are now more likely to see atheism as requiring as much faith as religion. Kierkegaard saw clearly that that faith is not a kind of epistemic Polyfilla that closes the small cracks left by reason, but a mad leap across a chasm devoid of all reason. That is not because Kierkegaard was guilty of an anarchic irrationalism or relativistic subjectivism. It is only because he was so rigorous with his application of reason that he was able to push it to its limits. He went beyond reason only when reason could go no further, leaving logic behind only when logic refused to go on. In a pluralist world, there is no hope of understanding people who live according to different values if we only judge them from the outside This was powerful stuff for a teenager such as me who was losing his religious belief. What Kierkegaard showed was that the only serious alternative to atheism or agnosticism was not what generally passes for religion but a much deeper commitment that left ordinary standards of proof and evidence completely behind. Perhaps that’s why so many of Kierkegaard’s present-day admirers are atheists. He was a Christian who nonetheless despised ‘Christendom’. To be a Christian was to stake one’s life on the absurdity of the risen Christ, to commit to an ethical standard no human can reach. This is a constant and in some ways hopeless effort at perpetually becoming what you can never fully be. Nothing could be more different from the conventional view of what being a Christian means: being born and baptised into a religion, dutifully going to Church and partaking in the sacraments. Institutionalised Christianity is an oxymoron, given that the Jesus of the Gospels spent so much time criticising the clerics of his day and never established any alternative structures. Kierkegaard showed that taking religion seriously is compatible with being against religion in almost all its actual forms, something that present-day atheists and believers should note. Kierkegaard would undoubtedly have been both amused and appalled at what passes for debate about religion today. He would see how both sides move in herds, adhering to a collectively formed opinion, unwilling to depart from the local consensus. Too many Christians defend what happens to pass for Christianity in the culture at the time, when they should be far more sceptical that their churches really represent the teachings of their founder. Too many atheists are just as guilty of rallying around totems such as Charles Darwin and the scientific method, as though these were the pillars of the secular outlook rather than merely the current foci of its attention. Kierkegaard’s views on religion are not the only way in which his critique of ‘the present age’ is strangely timely for us, and likely to be the same for future readers. ‘Our age is essentially one of understanding and reflection, without passion, momentarily bursting into enthusiasm,’ he wrote in 1846, ‘and shrewdly lapsing into repose.’ Passion in this sense is about bringing one’s whole self to what one does, including reasoning. What is much more common today is either a sentimental subjectivity, in which everything becomes about your own feelings or personal story; or a detached objectivity in which the motivations and interests of the researchers are deemed irrelevant. 
Kierkegaard insisted on going beyond this objective/subjective choice, recognising that honest intellectual work requires a sincere attempt to see things as they are and an authentic recognition of how one’s own nature, beliefs and biases inevitably shape one’s perceptions. This central insight is nowhere more developed than in his pseudonymous works. Many of Kierkegaard’s most important books do not bear his name. Concluding Unscientific Postcript (1846) is written by Johannes Climacus; Fear and Trembling (1843) by Johannes de Silentio; Repetition (1843) by Constantin Constantius; while Either/Or (1843) is edited by Victor Eremita. This is not just some ludic, post-modern jape. What Kierkegaard understood clearly was that there is no neutral ‘objective’ point of view from which alternative ways of living and understanding the world can be judged. Rather, you need to get inside a philosophy to really see its attractions and limitations. So, for example, to see why the everyday ‘aesthetic’ life is not enough to satisfy us, you need to see how unsatisfying it is for those who live it. That’s why Kierkegaard writes from the point of view of people who live for the moment to show how empty that leaves them. Likewise, if you want to understand the impossibility of living on the eternal plane in finite human life, see the world from the point of view of someone trying to live the ethical life. This approach makes many of Kierkegaard’s books genuine pleasures to read, as literary as they are philosophical. More importantly, the pseudonymous method enables Kierkegaard to achieve a remarkable synthesis of objectivity and subjectivity. We see how things are from a subjective point of view, and because they really are that way, a form of objectivity is achieved. This is a lesson that our present age needs to learn again. The most complete, objective point of view is not one that is abstracted from the subjective: it is one that incorporates as many subjective points of view as are relevant and needed. This also provides the link between imagination and rationality. A detached reason that cannot enter into the viewpoints of others cannot be fully objective because it cannot access whole areas of the real world of human experience. Kierkegaard taught me the importance of attending to the internal logic of positions, not just how they stand up to outside scrutiny. This is arguably even more vital today than it was in Kierkegaard’s time. In a pluralist world, there is no hope of understanding people who live according to different values if we only judge them from the outside, from what we imagine to be an objective point of view but is really one infused with our own subjectivity. Atheists need to know what it really means to be religious, not simply to run through arguments against the existence of God that are not the bedrock of belief anyway. No one can hope to understand emerging nations such as China, India or Brazil unless they try to see how the world looks from inside those countries. But perhaps Kierkegaard’s most provocative message is that both work on the self and on understanding the world requires your whole being and cannot be just a compartmentalised, academic pursuit. His life and work both have a deep ethical seriousness, as well as plenty of playful, ironic elements. This has been lost today, where it seems we are afraid of taking ourselves too seriously. 
For Kierkegaard, irony was the means by which we could engage in serious self-examination without hubris or arrogance: ‘what doubt is to science, irony is to personal life’. Today, irony is a way of avoiding serious self-examination by believing one is above such things, a form of superiority masquerading as modesty. It might be spotty, angst-filled adolescents who are most attracted to the young Kierkegaard, but it’s us, the supposed adults, who need the 200-year-old version more than ever. | Julian Baggini | https://aeon.co//essays/happy-birthday-kierkegaard-we-need-you-now | |
Philosophy of mind | Evangelical Christians in California tried to ban yoga in schools. So where is the line between the body and the soul? | I started practising yoga 12 years ago at a newly opened studio in San Francisco called the Yoga Tree. One day, I was coming out of a back bend — ustrasana, or camel pose, to be exact — when my bodymind abruptly and briefly fluttered into a tingling otherworld of uncanny and dizzying bliss. After the class, I asked the teacher about the experience, curious about how he’d parse my trippy little altered state. ‘Probably low blood pressure,’ he said. ‘Coming out of backbends can restrict your blood flow. You might want to watch that.’ I paused. ‘This wasn’t just a head rush. It was like, uh … have you ever had a big balloon of nitrous oxide?’ ‘Ah,’ he said, and launched into telling me about the nadis. ‘These are channels in the body that carry prana,’ he explained, referring to Hinduism’s version of élan vital. ‘They are the cause of a lot of openings.’ At the time I was struck by the fluid ease with which my teacher switched from Western physiology to Eastern esotericism. It spoke to how we postmoderns have grown comfortable shifting between different, even contradictory world views. But it also said something about the contemporary world of postural yoga, and how it has come to bridge (and occasionally tunnel between) the sacred and the secular. Sometimes yoga practitioners set these two frames of reference alongside one another, as my teacher did; other times they superimpose them onto various squirrely frameworks of ‘spiritual science’. For some, asana is unquestionably prayer; for others, it just beats the gym. Yoga’s special trick is to elude these apparent contradictions by inviting folks to shut up, get on the mat, and follow the flow. ‘Yoga is 99 per cent practice, and one per cent theory,’ proclaimed the late Indian yoga master Sri Krishna Pattabhi Jois, who founded the vigorous style of yoga known as Ashtanga, in Mysore, in 1948.‘Practise, and all is coming.’ One thing that Jois probably did not see coming, however, was the conscription of yoga into America’s culture wars. As I write, a public school district in California is being sued by Christian parents and a conservative legal watchdog group for teaching yoga to children aged six to 11 as part of their physical education programme in elementary school. The suit argues that the programme is ‘inherently and pervasively religious’ and, as such, that it violates the state’s religious freedom clauses. The mediating ambiguity of yoga’s ‘sacred science’ is being forced through the binary, yes-or-no code of a legal system charged with safeguarding the US’s constitutional separation between church and state. They complained about the ostracism of children who opted out of the programme — a situation one fool compared to Nazi Germany Jois would have been particularly struck by the location of the battle. Encinitas, a small beach community north of San Diego, was the inaugural American home of Ashtanga in the mid-1970s. Developed from the teachings of Tirumalai Krishnamacharya, the fountainhead of modern Hatha yoga, Jois’s rigorous Ashtanga school invites practitioners to submit to an unwavering sequence of gnarly poses and taxing transition moves. Though there is little overt discussion of Hindu philosophy in typical Ashtanga studios, the form itself engenders and radiates an unmistakable quality of spiritual discipline and meditative focus. 
Ashtangis practise most mornings in nearly silent rooms, progressing through the sequence of asanas at an individual pace, while often developing a quality of sober devotion that, while not always easy to distinguish from Type-A obsession, attests to the psychological as well as physically transformative effects of the regimen. One person whose life and body were transformed by Ashtanga is the Australian model Sonia Jones, wife of the American multi-billionaire hedge-fund manager Paul Tudor Jones II. Following the death of Jois, Sonia — to the grumbling of some long-term practitioners — entered into a partnership with Jois’s heir-apparent, his grandson Sharath. What emerged is ‘Jois Yoga’, a codification of the Ashtanga brand, accompanied by a new line of yoga wear. Jones and the Jois family have also established a handful of slick Jois Yoga shalas, or studios, around the US — including a new location in Encinitas that some old hands see as a slap in the face to the old-school Ashtanga studio that had been in town since the 1970s. The Joneses meanwhile established the non-profit K P Jois Foundation, which has already provided millions to set up the Contemplative Sciences Center at the University of Virginia in Charlottesville. The foundation also funnelled $533,000 into setting up twice-weekly, 30-minute yoga programmes for elementary schools in the Encinitas Union School District (EUSD). But the programme stuck in the craw of some parents and in October a small number of them — evangelical Christians, backed by the conservative National Center for Law and Policy (NCLP) — came before the school board. They complained about the use of the Buddhist mandala symbol in art class, the introduction of physical poses ‘imparted by Hindu deities’, and the ostracism of children who opted out of the programme — a situation one fool compared to Nazi Germany. The school board, puzzled by this religious interpretation of stretching exercises, and no doubt enjoying the bounty of external funding, refused to cave in. So in February, the NCLP filed a suit against the district on behalf of one pair of parents, Stephen and Jennifer Sedlock. That’s not how these court battles usually go — even in California, whose New Age-friendly coast is pocketed with mega-churches and fiercely conservative communities, especially in the interior of the state. When the issue of religion and public schools comes up, it’s generally because evangelical activists are trying to slip religious messages into public school, with atheists and freethinking parents invoking the First Amendment. In Encinitas, the tables are turned: now the Biblical conservatives are thumping on the very same secular cornerstone they are more usually trying to slip around. As the journalist Katherine Stewart pointed out in a sharp post on Religion Dispatches, the head attorney at NCLP acting for the Sedlocks — Dean Broyles — is affiliated with a powerful right-wing legal organisation called the Alliance Defending Freedom. The ADF litigates on behalf of evangelical activity in public schools, which includes abstinence programs, ‘character education’ curricula, and after-school bible study groups for elementary pupils, called Good News Clubs. 
Stewart, who has written a book about the use of public schools to advance a fundamentalist Christian agenda, noted that all of the schools in the Encinitas district already host Good News Clubs, which gives you a taste of how much religion — Christian religion that is, or at least Christian moralising — already exists in or around public schooling. The Encinitas case is a different kettle of loaves and fishes. Rather than combating secularism, activist Christians are now indirectly taking on another religion, a religion that, they argue, is disguised as secular physical culture, and is, in their terms at least, false. This battle is not just taking place in the courtroom: it is a war of the religious imagination. In January, National Public Radio covered the controversy. Its report featured Mary Eady, an Encinitas parent, convinced that Hindu religious goals were being inculcated alongside the stretching moves. Eady complained that immediately before performing the sun salutes that so often open yoga routines, pupils were told ‘to thank the sun for their lives and the warmth that it brought’. Most parents (and readers) would barely register this innocuous story-book sentiment. But Eady, deploying the sort of paranoid hermeneutics that fire evangelical worries against product labels or ‘backwards-masked’ messages in rock music, believes that children were being told to worship the sun. Moreover, Eady suggested that behind the program lurked a shadowy, hedge-fund-backed foundation whose founders believe in the spiritual benefits of Ashtanga yoga. But let us reverse the mirror, and ask what sort of religion lurks behind Eady’s complaints, and what shadowy religious organisations might stand behind her concerns that children are being lured into sun worship? As the yoga writer Carol Horton noted in a blog from February, Eady happens to be a project manager at truthXchange, an evangelical organisation whose raison d’etre is to halt the spread of ‘global paganism’. The organisation’s worldview, to judge from their website, is blunt and Manichaean: the choice we face is between the idolatrous worship of the universe itself (‘One-ism’) or the proper worship of the creator outside the universe (‘Two-ism’). One important corollary to this formula is that both the ‘pagan’ (and presumably ‘Hindu’) worship of natural forces and science’s insistence on a purely material cosmos are two sides of the same hell-bent coin. As such, yoga’s already precarious mediation between sacred and secular, body and mind, simply makes the practice even more suspicious. Even goals such as ‘wellbeing’ and ‘stress management’ can become red flags. ‘It’s stated in the curriculum that [yoga is] meant to shape the way that they view the world,’ Eady told NPR. ‘It’s meant to shape the way that they regulate their emotions and the way that they view themselves.’ In the face of such overripe suspicions, the school board, along with representatives from the K P Jois Foundation, have insisted that the yoga program is a purely athletic regimen shorn of religious elements. The majority of press reactions and op-eds adopt a similar line, scoffing at evangelical worries about touching your toes. In a superficially secular society like the US, officially beholden to science, these reactions make sense. Most casual yoga practitioners would be taken aback by the accusation that participation in mindful stretching routines makes them acolytes of an exotic faith. Ironically, the judge in the case turned out to be one such practitioner. 
Surprising the courtroom by announcing last month that he had recently begun practising Bikram (‘sweaty’) yoga, Judge John Meyer noted: ‘If you think there’s something spiritual about what I do, that’s news to me.’ Many yogis would understand the judge’s puzzlement, since Bikram yoga, with its mirror-lined studios and trademarked sequence, is often cast as the epitome of profane and commercialised yoga. For countless practitioners, modern yoga is purely this-worldly, at most lending a holistic sheen to the consumer quest for a better butt. But yoga can be, and quite often is, something more. Many teachers — far too many, I feel — lace their chat with New Age nostrums, and the décor in lots of studios suggests a vaguely exotic, mystic ambience. But yoga’s spiritual juice does not ultimately lie in coffee-table books explaining Hindu philosophy, or the statues of the great Lord Ganesha by the door. As Horton and other yoga bloggers who are wrestling with this case have come to admit — and in contrast to the EUSD’s argument that the children are just stretching — deep experience of the discipline does tend to transcend physical culture, though often in ways that are difficult to articulate. On this point, the conservatives running the NCLP might be ready to agree. The specific legal question, however, is whether the Jois curriculum’s particular mix was ‘religious’. To bolster their case, the NCLP recruited the Harvard-educated academic Candy Gunther Brown, an associate professor of religion at Indiana University. Though Brown herself studies evangelicals and healing prayer, her 37-page brief for the NCLP rightly recognises that today’s postural yoga evolved out of a complex mixture of medieval Hatha yoga, modern Hindu revival movements, British physical culture, and Western metaphysical traditions — the same hydra-headed current of occult lore, Theosophy, and self-help philosophies that birthed the ‘New Age’. Brown’s error lies in unreflexively labelling this decidedly modern mélange ‘religion’ — a Latin Christian term inextricably linked to Christianity’s self-image as the dominant global system of creed, sacrament, and congregation. It is not clear how adequately the term ‘religion’ covers the classic Hindu world, let alone the mutant modern offspring of one limb of that dizzying tradition. What if modern postural yoga is neither religious nor secular, but something in between, or something beyond, something whose evident appeal partly lies in that very liminality? As Stefanie Syman shows in The Subtle Body (2010), her history of yoga in the US, American asana practice has been oscillating for more than a century between ashram and gym, sometimes clothed in exotic veils and other times in low-cut spandex. It is this very oscillation — a flux incarnated for many practitioners every time they hit the mat — that ‘is’ yoga. Oscillations are tough to read: now you see it, now you don’t. For her part, Professor Brown indulges in plenty of feverish literalism, describing the sun salute as ‘consistent with religious worship’, and characterising the integration of breath into physical movement as a kind of gateway drug that ‘prepares one to unite with the Universal in Samadhi’. (If only it were so easy!) Other arguments that initially look like Christian conspiracy theory, however, carry more than a few grains of truth.
Brown notes that, in much apparently ‘secularised’ yoga, novices first enjoy the physical benefits of the workouts and then begin to receive ‘spiritual nuggets’ from teachers, nuggets that lead deeper into the Hindu worldview. Setting aside the nefarious implications Brown intends, some version of this process occurs all the time. Most yoga teachers are quite comfortable presenting the practice in layers of increasing esoterica. Eventually, at least in my experience, the physical bandhas (contractions) flower into imaginal chakras — and head-rushes get reframed as nadi shudders. But what’s interesting about Brown’s argument — in a sense the key to the Christian evangelists’ fear of yoga — goes beyond religious discourse entirely. Brown claims that the Encinitas yoga curriculum advances Hindu and American metaphysical religion ‘whether or not these practices are taught using religious or Hindu language’. In other words, the spiritual power — and threat — does not lie within the discourse packaging the moves, but in the moves themselves. Though I suspect it’s wrong, I love this idea. I love it because it inspires the fantasy that somewhere, somehow, some stressed-out car dealer or soccer mom is going to take a yoga class at a gym (maybe because the instructor is cute) and then, halfway through practice — maybe while stretching in pigeon pose, or teetering on elbows in crow — the serpent kundalini unwinds her tail, and a fountain of pranic bliss shoots up his or her spine and blooms into a third-eye-opening shudder of wind-chimes and astral rose petals. Brown means something a little more down to earth, of course. She claims that, in contrast to Protestant concerns with the word, Eastern religions express devotion directly through practices that fuse body and mind. The physical practices drawn from those traditions can never be stripped of religion, because the religion — others would say ‘spirituality’ — already lies in the embodiment. This is a deeply conservative view of religious meaning, with little appreciation for how modern practitioners change these embodied meanings, to say nothing of the interventions of MRI scanning and other tools quantifying the psycho-physiological effects of yoga and meditation. But Brown’s view does paradoxically, and ironically, accord with leading fantasies that Western seekers hold about their yoga practice, which romantically contend that today’s postural sequences stretch back continuously through time immemorial, carrying the transmission of the ancient sages. Fortunately, the spiritual efficacy of yoga goes beyond the purview of the courts, whether or not that power is interpreted as summum bonum or a demonic lure. But as far as Brown’s critique of the Encinitas curriculum itself goes, I have to say that, as a devotee of the First Amendment, I share some of her concerns. The devil is in the detail, and if Brown’s description is accurate, then the K P Jois Foundation and its school district partners could have done a better job of stripping spiritual language and religious glyphs — Chinese t’ai chi, the yin/yang symbol, or a chart of Patanjali’s Eight-Fold Path — from their materials. That said, I don’t believe the foundation was trying to smuggle ‘Hindu philosophy’ through the back door, let alone the sun god Surya. 
They didn’t think their materials would piss people off, because holistic language and ‘Eastern’ imagery have become the norm as people embrace a wide range of healing modalities, and more and more of us identify as ‘spiritual but not religious’. The real issue is that modern yoga and meditation do create real psychobiological changes, and that those changes are coming to be seen as no longer intrinsically religious or spiritual. As the religious historian Catherine Albanese argues, one way that the American metaphysical tradition succeeded was by simply dissolving into the culture at large. The mind-over-matter religion of Christian Science, for example, flows directly into corporate self-help seminars. Even the plaintiff in the EUSD case, Jennifer Sedlock, is tainted by this stuff: a Christian motivational speaker, she is also a ‘qualified Myers-Briggs consultant’ — Myers-Briggs being a psychological typology system directly based on the ideas of the deeply esoteric (and rather anti-Christian) Carl Jung. Pop metaphysics has become the air we breathe, whether or not we try to synch it up with downward dog. | Erik Davis | https://aeon.co//essays/are-you-flexible-enough-to-worship-the-sun | |
Subcultures | Leaving LA? Forget the scenic route along the coast: hit the Five instead and see what’s on California’s mind | The most beautiful way to drive between Los Angeles and the Bay Area is generally reckoned to be along Highway 1: the Pacific Coast Highway, the first State Scenic Highway (so declared in 1964), which clings to the coastline all the way up to an intricately winding section above the glorious and terrifying cliffs of Big Sur. Henry Miller lived up there for years; his book Big Sur and the Oranges of Hieronymus Bosch (1957) was an homage to the peace and contentment he seemed surprised to have found. ‘It may indeed be the highest wisdom,’ he wrote, ‘to elect to be a nobody in a relative paradise such as this rather than a celebrity in a world which has lost all sense of values.’ Striking words coming from a megalomaniac like Miller, though the Crazy Cock certainly succumbed to the lure of the values-free world often enough after writing this pleasant little book. In any case, that route takes around nine hours, quite a lot of which is indeed like driving through a series of ever-more-improbably ravishing postcards — that is, if you’re not too chicken to tear your eyes away from the road. The second way, still partly along the coast, is far less magnificent, less scary, somewhat less beautiful, and quicker: the 101 through Santa Barbara, Paso Robles and San Jose. You’ll cut an hour or two off your drive that way, unless we take into account the siren song of what many consider to be the best Mexican restaurant in California — La Super-Rica Taqueria on North Milpas Street in Santa Barbara, a temptation to which I recommend you yield. The most efficient drive, however, is bang up the middle of the state through the Central Valley — up a steep grade through the Tejon Pass, precipitously down the stretch known as ‘The Grapevine’ north of LA, and then across hundreds of miles of mostly flat agricultural land. A six-hour drive. More like five if you’re rash enough to lead-foot it, but that’s very risky, since the fuzz can be tough to see coming. Most perilous are the Commercial Vehicle Enforcement section of the California Highway Patrol: these predators cruise around in all-white ‘snowballs’ rather than the more conspicuous black-and-white squad cars, and though they’re charged mostly with busting errant truckers, any of those guys will nail you and revoke your licence as soon as look at you. This is the way I always go. Not only because it’s fastest, but because, despite the conventional wisdom, I find it the most beautiful of all. California natives always call it just ‘The Five’: not Highway Five or Interstate Five or I-5. The whole road is some 1,380 miles long, and is the only US interstate touching the borders of both Mexico and Canada. If, like me, you prefer to drive rather than to fly between Los Angeles and the Bay Area, the 400 or so miles along this route are the most distinctively Californian, the most revealing of the strange diversity of this landscape and its people, its terrible moral conflicts, and the weird vitality of its countless subcultures. The Five is unendingly rich and full of interest, despite all caricatures to the contrary; despite, or rather because of its glowing, gorgeous emptiness.
The maelstrom of urban life being what it is, when I’m at home in LA I tend to go about my business in a wild-eyed, constant panic, my hair pretty much standing on end. So I had a go at meditation a few months ago, one of those vaguely ‘spiritual’ things that people often try around here. I would like to be able to snort with derision at the cliché of ‘La-La land’, but how can I? Most of my circle — sober, professional adults d’un certain age though we might be — are all too susceptible to hot yoga, the ‘shredding’ diet, the paleo diet, probiotics, Pilates, flax oil and all the other local fads. The basic instructions for beginning meditation are to sit quietly and concentrate on nothing but your breathing for five minutes. Simple enough, right? I can’t do it for ten seconds. I can turn down the lights, burn all the incense and play all the soothing music I want but, after the briefest pause, my brain will recommence to whirr, instantly, uncontrollably. Until I get on that blissfully empty stretch of open road, that is. Then the car becomes a meditation chamber. It all happens by itself. Breathing slows, the benevolent sky swells out, almost always a blue so pure, clean and enamelled that even worries of climatic catastrophe recede for a moment. Maybe there are some clouds, artfully arranged. Choose your moment to leave town — I like to leave at around 5am, just before rush hour — and there won’t even be any traffic to speak of. Just the white noise of the purring engine to amplify the calm, blissful silence, which will at last find its way into even the most stubbornly busy mind. Dropping into the Central Valley from the mountains surrounding the Tejon Pass is like breaking open a petit four, getting past the glossy, pretty exterior: inside is the cake. The urban surfaces of California are what we see in movies and on TV: slick, manufactured, shouting, cajoling, bamboozling, seducing, ready to sell you something. And then the confected beauty of the city gives way; now the land reaches far out to the sky. Your ears pop from the pressure change, and a sign advises you that the next gas station is 19 miles off. Things you won’t see along the Five: shopping malls, schools, multiplexes, groovy restaurants, cafés or bars; movie posters, LED billboards, skyscrapers, banks, strolling couples, tourists taking photographs, movie crews, or really anything whatsoever to do with Hollywood; skateboarders, palm trees, the sea. Things you will see: endless fields, neatly ploughed, or burgeoning into a tender green; almond orchards, immense truck stops, gas stations with fast-food restaurants clustered alongside; the odd Starbucks; a constant stream of container trucks; runaway truck ramps; loads of tattered signs complaining about the (not at all evident) ‘Congress Created Dust Bowl’, many of them blaming the congresswoman Nancy Pelosi, the Democratic House Minority Leader (a clue as to their political origins — Pelosi is second in odiousness only to the Antichrist Barack Obama in the wild eyes of the far right). And then the largest cattle ranch on the West Coast. Harris Ranch operates an immense feedlot just east of the city of Coalinga. It announces itself first by the unbelievably pungent smell of manure, cow pee, methane and feed, or whatever hellacious brew it is that scorches your nostrils and lungs miles before its source rolls into view: thousands upon thousands of cows — anywhere between 60,000 and 120,000 of them — milling about in close quarters with not a blade of grass in sight. 
A mammalian ocean, a city of doomed beasts looking for all the world as if they’re headed for the slaughterhouse, which they are. Here is the inconvenient truth, the source of those tidy, plastic-wrapped packages of marbled red meat at Costco and, perhaps, in your refrigerator. Even the most dedicated carnivore, such as my husband, is likely to be taken aback at the sight and smell of the Harris Ranch feedlot, which produces in excess of 150 million lbs (68 million kg) of beef per year. And yet a very little research reveals that Harris Ranch, large as it is, is exactly the kind of family farm that many are in favour of protecting. No less an authority than Temple Grandin, the noted animal scientist, spoke in favour of Harris Ranch at the California State University in Chico last year. The reporter Larry Miller quoted Grandin in the local paper: ‘Harris Ranch “does a great job” with its animals, [Grandin] said. “Harris needs to give more tours and explain its practices to the public.”’ In fact, the cattle at Harris spend 80 per cent of their lives grazing on grass. They are brought to the feedlot only to be fattened on what they describe as ‘a balanced ration of quality feed grains, hay, vitamins and minerals’ in their final months. Butchering is done in-house and the meat is tested for antibiotic and bacterial residues throughout processing. The operation’s management appears mindful of sustainable, humane and hygienic practices. If we are to eat beef, Harris seems to be producing it responsibly. That doesn’t stop many Californians from shuddering at the sight of it. One commonly hears it referred to as ‘Cowschwitz’. From the road, the sight of a limitless horde of cows standing in the mud really does suggest a concentration camp. The fact being, of course, that we mean to eat those cows. In January of last year, animal-rights activists incinerated 14 cattle trucks at Harris in protest against factory farming. Hackles went up on all sides. The activists sent a message claiming responsibility [sic throughout]: ‘we’re not delusional enough to believe that this action will shut down the harris feeding company, let alone have any effect on factory farming as a whole. but we maintain that this type of action still has worth, if not solely for the participant’s peace of mind, then to show that despite guards, a constant worker presence, and razorwire fence, the enemy is still vulnerable.’ In a characteristic and illuminating discussion of the incident on the Democratic Underground message boards, one poster wrote: ‘I see absolutely no purpose in setting a fire to make a political statement. It will not change the corporate structure of our food supply. much like OWS [Occupy Wall Street] will not change the structure of our financial institution owned government.’ Another commented: ‘I eat beef, I raised beef, I sale beef. It creates jobs, food, income, and a way of life that’s been a part of this world for centuries. Back then, if you were a veggie, you probably wouldn’t make it. It’s called, “living off the land.”’ It’s easy to ignore such conflicts in the cocoon of abstractions that is city life, but as your meditation machine speeds through Coalinga at the Highway Patrol-safe speed of 79mph, they are brought pungently into focus. Is it the smell of death, the smell of excrement, or ‘an honest, American smell’ as one wayfarer put it? Yes, yes and yes. Ernest Becker, author of The Denial of Death (1973), would have been fascinated by it.
Those drivers along the Five who recoil in horror from any aspect of the Harris Ranch story might stop for lunch, if they like, at the popular and beloved In-N-Out burger chain, which is always mobbed, whether in town or on the highway. The high quality and freshness of In-N-Out’s ingredients is a big selling point for them, and they do indeed turn out a fine burger. All the meat for In-N-Out is supplied by Harris. Such ruminations (sorry, cows!) are not available on your postcard tour of a city such as the manicured Carmel-by-the-sea on Highway 1. Anyone who’s seen Chinatown (1974) knows a little about California’s water politics; there is an eternal tug-of-war between agriculture and business, environmental concerns and the requirements of ever-growing, thirsty desert cities such as Los Angeles. Here is another area where Angelenos such as me often fail to grasp the complex compromises to which we owe our own prosperity and comfort. The Central Valley is one of the richest agricultural regions in the world. In a mind-blowing piece for The New York Times in October last year, Mark Bittman wrote that the Central Valley is the Earth’s largest patch of Class 1 soil, and it’s the nation’s breadbasket — supplying, for example, 85 per cent of the carrots that Americans eat every year. This rich and fertile land has been exploited ruthlessly for decades. Lakes and rivers have disappeared, diverted to aqueducts to serve the needs of agriculture. Sustainability has consistently taken a back seat to profits. As Mark Arax, a Fresno writer, explained to Bittman: ‘This land and its water have gone mostly to the proposition of making a few men very wealthy and consigning generations of others, especially farmworkers, to lives in the dust.’ The familiar signs along the Five, such as ‘Stop the Congress Created Dust Bowl’, ‘No Water: No Jobs’, ‘Stop Pelosi Costa Boxer’ are arranged in rows, like the old roadside Burma-Shave signs. Again, there’s more to the story than meets the eye. In fact, the richness and fertility of the Central Valley are the direct result of massive government irrigation projects. Without government intervention, central California would be what nature intended: a desert. This is apparently lost on the many, many soi-disant ‘small government’ Central Valley Republicans demanding that the government bail them out. Or, rather, it is washed away with huge amounts of water diverted from the Sacramento-San Joaquin River Delta. Agriculture uses more than 80 per cent of California’s water, and there has never really been enough to go around. Political tensions invariably rise in proportion to the slightest drop in rainfall, and climate change promises sharper fluctuations in years to come. The 2010 midterm elections saw the Republican hopefuls Carly Fiorina (former CEO of Hewlett Packard) and Meg Whitman (former CEO of eBay) campaigning on water issues in the Central Valley. Nevertheless, Whitman, the gubernatorial nominee, got a drubbing at the hands of the former Democratic governor Jerry Brown. (The rains were very good in 2010, after a long drought.) Now Governor Brown is attempting to resuscitate a project he failed to pass in his first term as governor a few decades ago. The $23 billion successor to the doomed Peripheral Canal project would divert gigantic amounts of water from the Sacramento Delta to the Central Valley.
Northern California residents came out in droves to block the project in 1982, and there is every chance they will do so again. However, in November last year, the San Joaquin County Board of Supervisors approved a list of water projects compiled over six years by a consortium of representatives from 12 counties, north and south. This document made no mention of the retooled Peripheral Canal. The approved projects include water-quality barriers, the improvement of existing levees, fish screens at export pumps, and so on. Even a casual traveller through California, breezing through its many interdependent ecosystems, landscapes and constituencies, can appreciate this attempt to devise complex solutions for complex problems. The radio on the Five, too, is complicatedly compelling. When I’m driving around at home, I mainly stick with KCRW, the best public radio station I know. But on the road, I have the time and inclination to explore, and it’s a teeming world out there. I’ve heard a Spanish-language DJ playing the Kinks and discussing the music of the spheres, as well as sermons, opera, a surprisingly frank Christian panel show on the topic of marital contentment, and my favourite discovery, the lively and weirdly fascinating Christian rock station, Air 1. Air 1 describes itself as ‘The Positive Alternative’, and though the DJs and songs are in general relentlessly cheerful, it’s a total downer in other, subtle ways. Many — I venture to say most — of the songs they play are about being lost, depressed, alone, until ‘You’ arrive: ‘You’ being cleverly written to serve as well for a lover as for the Son of Man, as in the boppy, poppy ‘I’m Alive’ by Peter Furler: ‘When I was lost in a maze of doubt/You called my name and woke me up/You called my name and led me out. ’ I first heard the Mexican-American brother and sister duo Jesse & Joy on Univision’s El Hit Parade de América, which features music from the whole Hispanophone world. It’s a sophisticated, eclectic playlist, and best of all, I have heard almost none of these songs before. Now I sometimes listen to the station at home. ‘¿Con Quién Se Queda El Perro?’ is an elegant rock song with clever, delicate lyrics on the subject of a breakup that somehow, regrettably, just has to be. (The song’s title means ‘Who gets the dog?’ or more literally ‘With whom will the dog stay?’) When I first listened, I thought I detected in Joy’s gorgeous voice a very slight, bewitching American accent (the duo’s mother is American). Their altogether fresh sound seemed to embody for me the emergence of a newly dynamic, cosmopolitan world, including a California nobody expects, just coming into being; synthesising its disparate parts into a powerful new whole. It seemed fitting to consider these things, not in LA where I live, but somewhere on the Five near Fresno. The future seemed to belong, as perhaps it should, to entirely new places, new people, new ways of looking at things. And then I drive, and drive, and meditate. Stop just once more for gas. Finally, I leave the Five, turn onto the 580 West and toward the sea, toward Livermore and Dublin, and eventually Oakland and San Francisco. Hoping that the effects of meditation will persist, even when I return to the world of multiplexes, shopping malls, wall-to-wall culture, traffic, and slip back into our ordinary world, our familiar world of panicked obliviousness.
| Maria Bustillos | https://aeon.co//essays/driving-the-five-is-a-meditation-on-the-state-of-california | |
Food and drink | Intrigued by the buzz around medical fasting, I tried it. A rollercoaster of boredom and energy ensued | It all began in March last year when I read an article by Steve Hendricks in Harper’s magazine titled ‘Starving Your Way to Vigour’. Hendricks examined the health benefits of fasting, including long-term reduced seizure activity in epileptics, lowered blood pressure in hypertensives, better toleration of chemotherapy in cancer patients, and, of course, weight loss. He also mentioned significantly increased longevity in rats that are made to fast. Most interesting was his tale of undertaking a 20-day fast himself, during which he shed more than 20 pounds and kept it off for the two years since. I was fascinated, and I started reading more about fasting afterwards, although at the time I had no intention of doing it myself. The benefits of fasting have been much in the news again lately, in part due to a best-selling book from the UK that is also making waves in the US: The Fast Diet: Lose Weight, Stay Healthy, Live Longer (2013) by Dr Michael Mosley and Mimi Spencer. Mosley is a BBC health and science journalist who extols the benefits of ‘intermittent fasting’. There are many versions of this type of fasting that are currently the subject of various research programmes, but Mosley settled on the 5:2 ratio — in every week, two days of fasting, and five days of normal eating. Even on the fasting days, one may eat small amounts: 600 calories maximum for men, 500 for women, so about a quarter of a normal day’s intake. Mosley’s claim is that such a ‘feast or famine’ regime closely matches the food consumption patterns of pre-modern societies, and our bodies are designed to optimise such eating. Drawing on various research projects studying intermittent fasting and weight loss, cholesterol levels and so on, he argues that even after quite short periods of fasting, our bodies turn off fat-storing mechanisms and switch to a fat-burning ‘repair-and-recover’ mode. Mosley says that he himself lost 20lbs in nine weeks on the diet, bringing his percentage of body fat from 28 to 20 per cent. He says his blood glucose went from ‘diabetic to normal’, and that his cholesterol levels also declined from levels that needed medication to normal. He also says that he feels much more energetic since. Inspired by Mosley and Hendricks, I delved into research on fasting online, but much of what I found was pseudoscientific drivel about getting rid of mysterious and unnamed toxins in the body. Recommendations for fasting were often coupled with such staples of alternative-medicine junk-science as colonic irrigation and worse. But I happen to be a mild hypertensive myself and for various reasons have been off my blood pressure medication for a couple of months. I thought I might try fasting as an experiment, to see if it made any difference to my blood pressure, but also out of sheer curiosity about what the experience would be like. My wife, who had also read Hendricks’s article in Harper’s, said she would try it, too. We decided on a seven-day fast — somewhere between Hendricks’s experience and Mosley’s recommendation. The plan was to go a full week without eating or drinking anything except water. Lest our bodies react to this insult by trying to slow down our metabolisms, and we end up just lying around and not getting anything useful done all week, we also planned to stay energetic by engaging in vigorous physical exercise for at least a couple of hours daily during the fast.
Neither one of us had ever done anything of the sort before. Since my wife had a week’s break in February from her work as a schoolteacher, we decided to try our fast then. Our preparation was pretty minimal. I would keep a journal in which I would record my weight, blood pressure, activities and, several times a day, just note how I was feeling. We bought some emergency supplies in case one or both of us ended up feeling ill or fainting: some energy drinks, a couple of bars of Swiss milk chocolate, some fruit, and some bread and cheese, and put them in the refrigerator. My wife also told me to stop locking the bathroom door from the inside, just in case she needed to rescue me. On our final day before beginning, we measured our weight, blood pressure, pulse rate, and waist size. My wife and I don’t normally eat breakfast (she has a cup of coffee and I drink a Coke Zero — yes, yes, I know it’s bad) but that day we had a light lunch and in the evening we had an early dinner of chicken, potatoes, broccoli, cauliflower, and brown rice. And some chocolate pudding. And then we stopped eating. The scientific data on the benefits of fasting are still thin and far from conclusive: you can find a useful summary in a recent article on intermittent fasting by David Stipp in Scientific American (11 January 2013). Mark Mattson, head of the National Institute on Aging’s neuroscience laboratory, thinks it is possible that fasting is a mild form of stress that stimulates the body’s cellular defences against molecular damage. And even intermittent fasting can increase the body’s sensitivity to insulin, thereby decreasing the risks of diabetes and heart disease. A study conducted at the Salk Institute on mice has shown that, even when allowed to gorge on fatty foods for eight hours a day, those mice maintained a normal weight and insulin levels so long as they were fasting the rest of the time. Another study led by Mark Mattson in 2007 showed significant reduction in both asthma symptoms and indications of inflammation in humans through long-term alternate-day fasting. Some nutritionists are sceptical, and especially worry about the dangers of compensatory overeating in the times one is not fasting. In a 2010 study, also at the National Institute on Aging, fasting rats mysteriously developed stiff heart tissue, reducing their hearts’ ability to pump blood. Though, in general, caloric restriction by 30 to 40 per cent has repeatedly been shown to extend lifespan significantly in various animals including fruit flies and rodents, it is not yet clear what long-term effects such dietary regimes have in primates. Even if we don’t yet have enough data for clear conclusions, there was enough material from my research to intrigue me to try it for myself. Our weeklong fast was a little unusual as we also engaged in strenuous exercise every day. Sometimes a little too strenuous: one day we did a 14km (8.6 mile) trek through Alpine snow at a place called the Rodenecker Alm near Italy’s border with Austria. This was almost four hours of climbing and descending after three days of total fasting, and it left us quite exhausted and sore. But the odd thing was that to both of us it actually felt easier in this fasting state than it would have under normal conditions. So one does indeed seem to have a lot of physical energy while fasting, as Mosley has argued.
Things were not perfect, however. My wife had to break the fast at the end of day six (at my strong urging) because she neither felt nor looked well. I did make it through the whole seven days without any physical problems but I was psychologically exhausted by the end of it and euphoric that it was over. In everything that I had read about fasting, days two to four were supposed to be the most difficult. I had also worried about getting headaches or other physical discomfort (stomach cramps, dizziness, etc), and especially about being unable to get restful sleep: I thought I might be awakened by hunger pangs. As it happens, none of those troubled me. Which isn’t to say it wasn’t an odd experience. First of all, every single one of the seven days felt exactly the same: mornings were completely fine and I felt pretty much as I normally do until about lunchtime. I tried to pack any work, especially work that required mental concentration, into this period of each day. After midday, I became a little fidgety and found it hard to concentrate on anything. I had much more than usual amounts of physical energy and did all kinds of household chores happily, such as defrosting and cleaning the refrigerator one afternoon (anyone who knows me will testify that this is highly unusual behaviour). But my mind flitted from one thing to the next, and my reactions were slowed down very noticeably by evening. If my wife asked me a question, it took about five seconds for it to register and another five before I could formulate and deliver a reply. In fact, I became decidedly cognitively impaired: one day after taking a shower and shaving, I applied aftershave lotion to my face and noticed that it didn’t have the mild sting it usually does. That is when I realised I had not actually shaved. I just thought I had. So the days were hazy at times, but very bearable. Not so the evenings. By far the worst time was between 6pm and 10pm. It was in this window every day that my wife and I both felt a physical and mental unease resulting in great difficulty in just passing the time. We tried to watch TV or movies but it was hard, and the evening seemed strangely empty. In fact, the biggest surprise was just how much more time we had on our hands. I was struck by how much of the day I normally spend attending to my digestive needs: thinking about what I would have for lunch or dinner; shopping for groceries (which we do almost daily); cooking — in my case, elaborate Pakistani meals most evenings; then actually eating, washing dishes, cleaning up, even moving one’s bowels. Eliminating the simple act of eating frees up much more time than you’d think. In addition to the couple of hours of daily exercise we kept up throughout, we took long walks in the mountains (we live in the Alps), did crosswords (rather slowly), surfed the net and fooled around on Facebook, and we still always had more time to fill. I realised that meals provide needed punctuation to the day, and without them our days seemed strangely lacking in structure. So what about the medical benefits? In the end, both throughout and after the fast, my blood pressure remained at exactly the same, slightly elevated level it had been before I started. So much for controlling it by fasting, at least for me. I lost 11lbs (5kg) over the week and gained 7lbs (3kg) back within three days.
The other significant thing I noticed, as many others have too, was the reduction of libido to absolutely nothing. I had no sexual thoughts all week, which was not an entirely unwelcome (though thankfully temporary) break from the usual. I experienced a phenomenal increase in physical energy but at the expense of a lack of mental concentration. So if you need to lose some weight and also need to dig some ditches this week, fasting might be just the thing. On the other hand, if you are trying to solve problems in the theory of quantum gravity, it’s probably best to get some food down. These effects lasted only while I was actually fasting: one day after breaking the fast, I felt completely normal, with the same appetite and level of physical energy as usual. At this point, I should remind my gentle reader that my weeklong experiment had the grand sample size of one (two, if you count my wife) and so should be taken for what it is: just my personal experience of fasting, not a scientific study. Did I feel any different from normal in the days immediately after the end of the fast or since? No, not really. Would I do it again? I doubt it. Though it was fun in its own peculiar way. Well … maybe. But I think I’ll at least wait for more controlled clinical evidence to come in. | S Abbas Raza | https://aeon.co//essays/is-fasting-good-for-you-what-we-know-so-far | |
Anthropology | Figurines, fishers, bugs and bats – how things in the world become sacred objects in a museum | I’ve been nursing a gentle obsession with a quartet of bone-white, thumb-sized figurines. I first saw them, lined up in a row, on the cover of Miguel Tamen’s book Friends of Interpretable Objects (2001). They rested in a pair of open hands, looking toothy, and vital, exuding a cool glimmer, while evoking the long Arctic night and the estranging cold. And yet they’re also tiny and personable, these figurines. Their smooth features beckon you to enfold them in the palm of your hand. Their heads are cocked at mad angles, and their leering eyes and rabid smiles bespeak a secret, conspiratorial sociability. In his book, Tamen contends that objects — art objects in the first instance, but by extension the many things upon which our fascination fixes, such as shells, stones, stars, milk bottles, leaves, and lamps — take up power and life with us as we incorporate them into ‘societies of friends’. The figurines on the book cover present an admirable picture of such a society — antic and charismatic friends, whose secret stories one desires to know. As it turns out, the figurines are never mentioned in the text. It’s likely the book’s designer chose the image, seizing on this quartet of seemingly interpretable objects without paying much attention to the manuscript itself. The book’s back cover offers a bit of metadata: a caption, which describes the objects as ‘devil figures’ with a provenance of Tuktoyaktuk in Canada’s Northwest Territories. The caption says they were ‘carved from the teeth of a blue whale’. I haven’t been to the Northwest Territories, and I have never seen a blue whale. But I know that they do not range as far north as Tuktoyaktuk, and I know that they do not have teeth. The objects on the cover of Tamen’s book were likely made from the teeth of toothed whales — dolphins, orcas, and their kin. Perhaps the teeth belonged to belugas, which are still hunted by the Inuvialuit people of Tuktoyaktuk, a remote village in Canada’s Northwest Territories, known in recent years as the northern terminus of one of the wintertime highways in Ice Road Truckers, a reality series for cable TV. Further search turns up a link to the original picture at the image bank Corbis, where the note about blue-whale teeth likely originated; the caption reads: ‘A person holds devil figures carved from teeth of a blue whale, Tuktoyaktuk, Northwest Territories, Canada’. The photo was taken by Lowell Georgia on 1 August 1980; its ID number is LG002968. While the caption might err in identifying the carvings as the teeth of a blue whale, it’s not far off in calling them devils. The little ivory characters are examples of tupilaq, a genre of carved critter widespread among the Inuit and other peoples of the far north. The tupilaq that live outside of museum time, outside of gallery time, are evil spirits called into being by a shaman for the purpose of making mischief. They carry curses to rivals and enemies. Made from bone and fur and other materials, the tupilaq are powerful magic — and dangerous for those who wield them, for if discovered, their powers turn back on their users unless an immediate public confession is made. Secrecy and darkness are the native habitat of the tupilaq; they lose their power when exposed to the sociable light.
But I’m not interested in scolding Corbis or Lowell Georgia, whose photo marvellously evokes the capricious spirit of the tupilaq for one who never has been so far north. For now, I’m interested to note the ways in which collectable objects weave shadows and ambiguities around themselves. The light-skinned hands holding the tupilaq in the photo manifest some degree of control over the carvings, but of a kind that can never be total. Objects arrive webbed in connections, and hoard their most intimate gestures and relations in unreachable treasure-houses. A collected object is a kind of vessel, freighted with an irredeemable record of acts and things, inaccessible worlds of sense and event, a tissue of phenomenal dark matter caught up in time’s obliterative machinery. The tupilaq, after all, were made from the teeth of an animal whose warm blood surged against tide and ice. These teeth dragged bleeding prey into the black, and tore banners of bubbles through holes in the ice. These teeth thrummed, their ivory timbre sending songs across submarine canyons and ice-hung plains of shingle. Torn from the reek on the blood-soaked shore, these teeth were plucked and cleaned and polished, and carved into devils meant to breed bad luck on a neighbour’s lodge, his wives, his weapons. Every gesture, every practice of craft and magic and the secret haunts of commerce, took these teeth into a new domain: out of the carbon cycle and into the symbolic. Nor quite this, in the end — for any straightforward dichotomy between the natural and the cultural, the material and the symbolic, is complicated at every instance by qualities that refuse neat abstraction. Toothed whales use their teeth for communication; a porpoise’s charismatic smile tells a story; dolphins deploy the acoustic properties of their teeth to issue warnings and threats. Rooted in the jaw, the tooth likely aids a whale’s perceptual work, its capturing and filtering of sound in the marine environment. Forged in an organismic manufactory, tooled by genes (it’s symbols all the way down), a tooth takes its place for a time in a network of perception and action: catching the piercing resonance of whale song bounding in the deep canyons — testing and metering the shifting temperatures of Arctic air — tearing and gripping the trauma-tautened flesh of smolt salmon. In any case, the ebb and flow of human symbolic culture into which these severed and carved teeth are plunged is never so bounded and legible as our captions and ethnographic accounts propose. These, too, have their tidal forces, responding to the wax and wane of matter and energy, and the disposition of their effects. There is no final shore of culture upon which these teeth make neat landfall; there is only a long-filtering estuary where tide and sign mingle intimately. Drying in the cold wind, a stranded tooth lies on the sand until it marries beauty in some hunter’s mind, beginning a new transit through channels of sense and symbol. And no matter how figured it becomes, by spells and song, or metadata and scholarship, the tooth remains exposed to the elements. In this fraught landscape of intermingled effects lies a further twist in the tupilaq’s tale, an intertidal eddy: these charismatic ivory carvings are a product of the mid-20th century. 
Before the advent of an outside market for Inuit art sparked their manufacture, a tupilaq would have been made of perishable and sometimes unspeakable matter (such as the corpses of children), contrived not to represent the spirit but to call it forth. By contrast these friendly, spirited carvings were done at the request of white traders and travellers, who wished to see what the ‘real’ devils — the ones evoked by the dirty, crumbling effigies — actually looked like. These devils were summoned into being by a collision of cultures. Indeed, the tupilaq have long been popular items in the Inuit art trade, where they fetch their makers a tidy price — commerce weaving another veil of secrecy, behind which the tupilaq themselves dance and hide. I want to understand how things come to take their place — especially in museums and collections — as embodiments of knowledge, artefacts out of time and nature, provoking curiosity and wonder. How they become objectified. The French philosopher Michel Foucault understood the natural history museum as a kind of republic of objects fixed and ordered in their relations. Of course, those relations change with changing science; yesterday’s taxonomic specimens become today’s harbingers of climate change. This is not to say that the specimens are not friendly to science, that they cannot help us to tell stories about the world. But I want a museum with the modesty to realise that the objects of its interest do not take their sole, true, or final form beneath its gaze. As seen by science, objects withdraw their auras — burning coronas that connect sense and experience to the deep past — and when the galleries and museums are in ruins, they will expose new banners to time’s unfolding. The tupilaq are players in a luminous, long-durée ecology — one in which paintings and pelts, sculptures and scarab beetles, clay pots and crania change states and meanings; negotiate mingled dimensions of nature and culture; and become consumed, even as they consume our attention. While at college, I landed a work-study lab-assistant position at the Field Museum of Natural History in Chicago. I found this an immensely exciting prospect: the Field Museum (in particular its dinosaur skeletons and its collection of totem poles) had loomed large in my childhood understanding of science, as did natural history itself, as a way of organising experiences of the world. Of course, it was live animals I wanted to get close to — the more charismatic the better — but, unlike the lives they seemed to lead in classic nature documentaries (the kind of close-up ecology depicted in the pages of National Geographic magazine), animals in nature were thinly distributed, cunningly hidden in the fields and mud-smeared woods near my childhood home. I had relished visits to the Field Museum: it seemed a place of dense variety, in contrast to which the wider world — at least the part of it I found growing up in the patterned fields of Central Illinois — seemed diluted, intent on hiding tiny quantities of insight and wonder amid swirling, monotonous tides of grey and green. My job at the Field Museum was in the Mammals Division, working in the specimen-prep lab. The job took me deep into the backstage world of the curatorial departments, a naphtha-scented realm of dioramas and musty cabinets, entered by way of a cunningly hidden elevator behind the colonnade in the museum’s enclosed façade. 
Stepping out from the elevator, one emerged into a long, dim corridor where museological tools such as disused vitrines and dusty signs were deposited. The specimen-prep lab, a warren of bright rooms off the corridor, hosted an array of familiar laboratory gear — a fume hood, a steel dissecting table, lab benches cluttered with Pyrex glassware, pipettes, and sturdy boxes of smooth-textured cardboard. Dusky Seaside Sparrow specimens at the Field Museum, Chicago. Declared extinct in 1990. By 1980, six remaining individuals, all males, had been captured to establish a captive breeding programme. No females were ever found. The males lived out their lives in a Walt Disney World nature reserve called Discovery Island. The last died in June 1987. Photo by Marc Schlossman/Panos. There were distinctive furnishings as well: in particular, the maceration tank, a giant stainless-steel pot on a pedestal, a huge pressure cooker used to boil large specimens down to bones. And, behind an airlock-like set of self-sealing doors, the dermestid room — named for the swarms of beetle grubs that seethed over small skeletons, picking them clean. Outfitted with variously sized glass tanks full of grubs, this room was a secure space, with blowers supplying negative air pressure, and seals around the doors, to ensure no beetles or larvae could escape. Upon leaving the dermestid room, you had to stand in the airlock and brush down your clothes. There was an aroma of putrefaction in the room, but it was faint — you got used to it. The sound, however, was oppressive. The place hummed with a static song of tens of thousands of beetle grubs, hairy and grey, all chewing at sinew and dried muscle. Our task in the specimen-prep lab was to transform dead animals into data. The products of our work were not the taxidermied simulacra that posed behind glass in the galleries, but study skins and skeletons for the research collection. These were stored out of public view in open-topped archival boxes, which fitted closely together into broad, shallow trays that rested in rank upon rank of shelving, forming a library of the dead. Although to call the specimens dead does not sound quite right. For the specimens had transcended or exceeded death, had passed beyond its dominion by means of a process that arrested, ostensibly in perpetuity, their participation in the carbon cycle, the wheel of disarticulation and recombination, that is life on earth. The collection was not comprised of equals. Enjoying pride of place among the trays were the holotypes, singled out as exemplars of their species. Set off by their yellow tags, type specimens are often much older than their preserved confreres. In most cases, they document the discovery of a species — although of course they’re rarely discoveries in the strict sense of the term. Instead, they’re symbols of a species’ scientific acknowledgement, of the moment when a local variant achieves a Latin binomial and a place in a refereed journal. The holotype is a heady, almost absurd designation: an animal sacrificed to represent a life form in its entirety, its desiccated skin and loose, lacklustre fur or feathers standing as avatar for the flashing, teeming, endlessly various individuals of the species.
However, in dialogue with the singularity of the holotype, it is the specimens in aggregate that give voice and scheme to variety. Any given tray of specimens both expresses and effaces vital relations among discrete creatures; a tray of voles might all have been collected in the same forest or glacial basin, where they comprised a community of bodies jostling, mating, and competing with one another. Other specimens lying nestled together in a case, by contrast, might never have run across one another in life. Now pristine, beyond birth and death, predation and putrefaction, they offer themselves up as information, an apposition of time and place with diameter of nostril, length of genital vent, and body weight in grammes. All of these data gather to sketch the shapes those vital relations take, the specimens in death comprising a kind of rump parliament of the abstract. We in the lab, too, were specimens of a type, denominated ‘preparators’, the inflection of the neo-Latin -ator conferring our place, substantial albeit subordinate, in a guild hierarchy. However, it’s the ‘prep’ that captures my attention now for its reformulation of this mortuary processing, this penultimation, as something prefatory or prior. Through our craft, we preparators erased all traces of the troubling carnal processes by which furtive or formidable animals had been reduced to things. By cutting and pulling, stretching and cleaning, we set the clock back to zero. I want to recall the procedures, the precise craft methods, by which we made such data from the dead. Many of the specimens we received were roadkill, swept up by highway work crews and game wardens. Often in the morning, we were greeted by a fresh delivery of dead birds and bats — a donation from McCormick Place, the vast, glass-clad conference centre on Chicago’s lakeshore, whose dusky windows met windblown migrants with unforgiving solidity. We placed these winged specimens in individually dated bags and tossed them into a freezer for later processing. I was especially taken with road-killed fishers, mustelid carnivores of the northern Midwest. Not found in the southerly woods where I grew up, these dog-sized weasels seemed breathtakingly wild to me — even in their frozen morbidity, their ragged costume of matted fur, blood-filled nostrils, and opalescent eyes. The work of preparing a study skin consisted of several discrete steps. Take a kangaroo rat recently arrived from the Brookfield Zoo: now carefully thawed, still cool from the refrigerator, it lies on the steel table in a loose, parenthetical curl. After first making a series of measurements — length of body, tail, and foot, weight in grammes — I would write out a tiny tag with the relevant geographical coordinates, date of collection, and all-important accession number, all inscribed with an indelible rapidograph pen. Before arriving in the museum, specimens collected by scientists in the field had already been subjected to a great deal of informational dissection: external parasites identified and censused, blood and perhaps other tissue samples collected, and finally, the subject itself euthanised and frozen. But in any case, it was my act of inscription in the prep lab that marked the crucial divide, serving as a scientific sacrament by which the transformation into data was signified and enacted.
The dead animal was no longer roadkill, no more the victim of snare, trap, or suffocation; it had been accessioned, its death subsumed in the act of collection. The transformation, however, was far from complete. Next, I would make an incision from the breastbone to the genitals, taking care not to cut through the abdominal wall. Slipping gloved fingers between skin and fascia, I began tediously to work the intact inner body free from the pelt and out the ventral slit I had made. The limbs I would unroll from their sleeves of skin and fur down to the joints of the feet; then some cunning work to liberate the delicate leaf of the digital bones from its external glove of paw. A great deal of care needed to be taken as well around the facial features, detaching them gently from the skull, trying to avoid leaving a fringe of hair attached near the brows, or a tear in the soft nasal tissues. Invariably, there were lost pieces of information: tiny shreds of pad or fascia that stayed adhered to bone as the rest of the animal’s exterior was torn loose from its sinew-clad skeleton. Often, the extremities would be cut away at the distal ends of the long bones, the paws left intact to hang heavily from the hollowed-out skins. The tail, too, needed tedious unworking from its sleeve of hide — a slow peeling back, bone by bone, to free the pink-sheathed caudal vertebrae, without turning the skin inside-out. A loose sock, a sketch, a gesture vaguely allusive to animality, the skin would now be filled with cotton-wool and pinned out to dry. Organs plucked free, the sticky, flayed rack of bones would then go into the dermestid room to commune with the beetles. For all its systematic nature, our work bore a family resemblance to that of the shaman fashioning his tupilaq: in the prep room, we transformed ephemeral animal matter into storytelling objects. Accessioned, tagged, and arrayed in their boxes, the skins and skeletons were meant to fix and identify connections that bespoke genetic and geographical variation; they were artefacts of phenotype extended in space and time. Likewise, the tupilaq carving is never complete and self-sufficient, but exists as a node, a kind of handhold in a route tying together predation and consumption, spells and songs, the pulses of beauty and value that push it through markets and museums. The prepared specimen, too, acts as a kind of node, articulating its position by way of triangulated theory and data, drifting along tidal shifts of paradigm and intellectual purpose. Of course, the texture of that tale of variation shifts through time. Natural history museums have never been monolithic, but have changed with the social and institutional make-up of science and its audience, from early modern cabinets of curiosity, to disreputable dime museums, to neoclassical forums for public edification. The Field Museum is of this last type. Founded in the wake of Chicago’s 1893 Columbian Exposition, it expressed the epistemology of natural selection in its post-Victorian incarnation: variation unfolding in stately, steady grandeur, an ordered autotelos reified in row upon row of mothballed specimens. The advent and acceptance of Mendelian genetics; the growing influence of biogeography; the discovery of DNA; the rise of systematics, genomics, and computational biology — all of these are so many subtle and shifting re-readings of the collection, stories wrung from desiccated data, spirits evoked by these effigies of natural history.
In its ordered cabinets, the specimen collection superimposed and coordinated two different kinds of space. On the one hand there was the hierarchical logic of the classification scheme: specimens disposed throughout in boxes, sliding shelves, and jars according to the taxonomy, from kingdom to class to specific epithet. Intersecting this paradigmatic plane was a geographical dimension evoked virtually, via metadata, with each specimen’s place of collection tagged and noted. Ideally, these two planes interacted in the museum like a multidimensional slide rule for natural history — one calibrated and operated by teams of expert operators, from lab techs to curators to field scientists. As museums emerged as research institutions during the 18th and 19th centuries, Georges Cuvier championed the epistemic leverage of these centrally coordinated spaces of natural history. From his seat as professor of animal anatomy at the Muséum national d’Histoire naturelle in Paris in the late 1790s, Cuvier argued that the observations of the naturalist in the field ‘are broken and fleeting’, compared with the powers of the sedentary savant: The sedentary naturalist, it is true, only knows living beings from distant countries through reported information subject to greater or lesser degrees of error, and through samples which have suffered greater or lesser degrees of damage. The great scenery of nature cannot be experienced by him with the same vivid intensity as it can by those who witness it at first hand … If the sedentary naturalist does not see nature in action, he can yet survey all her products spread before him. He can compare them with each other as often as is necessary to reach reliable conclusions … The traveller can only travel one road; it is only really in one’s study (cabinet) that one can roam freely through the universe …The objects disposed in the cabinets of natural history are varied in their qualities and their uses: not only study skins and skeletons, but whole creatures preserved in jars of alcohol, germ plasm samples in cryogenic tubes, and sundry other accessions. For purposes of natural science, specimens represent ‘raw’ data. Indeed, the term ‘preparation’ serves to ‘re-raw’, so to speak — for, like the tupilaq, the dead animals are already thoroughly ‘cooked’. And like the tupilaq, they are doubly mysterious. All the alien, zooic qualia of an animal’s life — each living creature’s unique contexture of sensations and responses, their breeding successes and failures, their urges bred from fear and separation and prey-drive — all these have been irredeemably eradicated in the natural history collection, left behind at the snap of a snare, rinsed down the steely drain of the prep lab’s necropsy table. Via the same fraught and tedious techniques, they are reinscribed, transformed into esoteric texts that are unreadable without peculiar stores of training, tacit knowledge, and craft practice. The stories they tell, the truths they were meant to exhibit and enact, are nowhere self-evident. An act of predation subsumes and reincorporates phenomenal animal affordances; the scientific sacraments of collecting and accessioning, by contrast, call forth abstract and motive truths, just as the expertise of the shaman reveals and directs the powers of the tupilaq spirits. 
Recall that in making the tupilaq, the shaman submits to a special code of secrecy: if the effigy is discovered or its influence identified, only his immediate public disclosure of the transgression will keep the spirit from turning back on the shaman and his client. The scientist, perhaps, assumes a similar burden in the cycle of knowledge production and publication. The promise of these acts and the artefacts they produce — monographs, journal articles, research bulletins — is the burden of responsibility laid on the museum’s scientific community by the collection. Unavoidably, or so it seems, these activities demand further acts of collecting and accessioning, which serve to keep the cycle in motion, the spirits restlessly on the move. In other kinds of museums, objects on display stand for the dark matter of the collection in storage. They are tokens of riches withheld — the vast stored collection not on display but catalogued and preserved as semiotic insurance in the vault. Natural-history collections (in particular those devoted to ecology and evolution, the paleontological and zoological collections) consist almost entirely of objects categorically unsuitable for display — objects that, like the tupilaq, derive their value and agency from the esoteric and out-of-sight dynamics calling them into being. Bewitched with thoughts of specimens and effigies, I recall yet another object that serially catches my fancy: the ‘first computer bug’, an archival curiosity collected during the Mark II computer-science research project led by Howard Aiken at Harvard in the mid-1940s. The Mark series was produced in a research programme driven in large part by the brilliance of the American programming pioneer Grace Hopper; the Mark II was one of its early electromechanical computers, instantiating computational logic in a vast, greasy array of switches, shafts, and clutches. Hopper and her colleagues kept a log of their programmatic experimentation, ordering by date and time a journal of the instructions they fed into the machines and the results returned, with terse notes detailing breakdowns and other machine behaviours. Typical lines read ‘0830, Started machine’ and ‘6th degree polynomial Registration trouble most of morning’. In effect, the logbook, which resides in the Smithsonian’s National Museum of American History in Washington DC, is a record of early computer bugs rendered in precise, empirical terms. And in those records, one bug stands out: on page 92, at 1545 on 9/9 1947, an actual moth was fixed to the page with a piece of tape. ‘Relay #70 Panel F/(moth) in relay,’ reads the journal; below, in what is likely Hopper’s hand, the line ‘First actual case of bug being found.’ In record photographs of the journal book, the moth is desiccated, the tape yellow and wrinkled. Looking for guidance on the nature of this specimen, I wrote to the evolutionary biologist E O Wilson, whose seat as professor at Harvard’s Museum of Comparative Zoology offers an ideal vantage from which to survey the status of this first computer bug. ‘I’m looking at an object in the Smithsonian that purports to document the first computer “bug”,’ I wrote to him in an email.
‘I’m interested in the moth as a kind of type specimen … More broadly, I wonder if the “bug” might make an interesting object to muse upon tensions between technology and the natural world — and I’m sure your reflections on such a connection would be invaluable.’ A couple of weeks later, Wilson sent me a reply. A quarter century or more ago, he wrote, he had asked Hopper herself about the fate of this ‘national treasure’, the first computer bug; she had reported that it had gone to the Naval Museum in Washington. Wilson encouraged me to track it down wherever I might find it, and to get an entomologist to identify the species of moth. And, ‘if in the process you can euchre it out of whoever has it,’ he concluded, ‘and bring it to Harvard’s Museum of Comparative Zoology, you will be a local hero.’ I doubt I’ll be able to pry the bug loose from its current home, the Smithsonian — and in any case, I’m not sure it would be curatorially fitting to do so. For in fact, Hopper’s moth is not the first ‘computer bug’, nor does it furnish us with the origin of the term. Already in the late 19th century, technicians in Thomas Edison’s lab were using the word ‘bug’ to describe thorny technical problems. From the syntax of Hopper’s notes in the journal, it’s clear that engineers working on the Mark II were familiar with this usage; the ‘actual’ bug was entered into the log as a cheeky aside, a bit of lab humour. Bug is an ancient word, and its use in specific reference to creeping arthropods dates only from the 17th century, according to the Oxford English Dictionary. Prior to that, the word named the nameless: bugbears, monsters and creatures of mystery and shadow. ‘First actual case of bug being found’. In 1947, engineers working on the Mark II computer at Harvard University found a moth stuck in one of the components and pasted it into the operational logbook. Photo courtesy Naval Surface Warfare Center/Smithsonian. The moth that blundered into the uncanny machinery of the Mark II, following who knows what tracery of heat and light, was of no particular natural-historical significance. Furtive in its shabby grey fluttering, it would have roused no more than a raised eyebrow from Hopper and her busy colleagues servicing the Mark II — a momentary pause, a dropping of cigarette ash on the console. Only later, upon its post-mortem discovery, was this dead creature turned into data. Now roughly preserved and enshrined in the Smithsonian, the dead insect serves as holotype for the computer bug. Like the tupilaq, computer bugs are ungovernable spirits evoked by a kind of transubstantiation. As the uncanny architecture of the computer unfolded itself in Harvard’s labs, the bug found its way not only into the machine’s works but into a new role as an object in our midst — a role that took its place among the object’s other histories and meanings, its penumbra of qualities. This patterned assemblage of purposes, roles, and given characteristics, this accidental and ephemeral fate, I want to call by the name habit. An effigy, an insect, an animal’s measured, pinned-out pelt — we have our ways of domesticating these objects, of bringing them to ground, fixing them in amber or in print. The precise practices vary with what habits we bring to bear (from science to shamanism) and the collections they inhabit.
And here is a clue — for dwelling in the word ‘inhabit’ is ‘habit’ itself. What if the habits in question are not ours, but those of the objects themselves? A habit is not only a way of acting, but also a costume of a kind. Some objects — books, dice, celery stalks, lens caps — have deeply ingrained habits, while others — seashells and stars, perhaps, but also bottlecaps, icicles, and plastic six-pack yokes twirling in the mid-ocean gyre — wear their habits more lightly. And some objects take on the habit of naphtha and indelible ink, of cotton wool and alum, of cabinet drawer and taxonomic order. The word ‘habit’ catches for me a sense of the shoddy assortment of qualities that knits an object into the fabric of things, weaving into one whole its social roles, the cultural codes it keys, and its whence-and-whither entanglements with deep time. In its particular death, the Mark II moth took up its own habit of data, clothing itself in the raiment of anecdote and explanation. But had it seen to its selective mission — depositing eggs in the folds of Hopper’s wool coat hanging from a hook in a Harvard lab, only then to gently expire in a dusty corner — had it escaped to be snatched by a bat, one like those who smashed into the windows of Chicago’s McCormick Place with heedless abandon — in these or any of myriad alternative cases, its habit would have taken different form and colour. Yet the story of any such habits will in the long run prove as ephemeral as the moth’s wings, trapped like shattered kites beneath the cellophane. Grace Hopper’s moth became the holotype for the computer bug in an instance of creative misunderstanding, betokening our need for clues, artefacts, and documentation. It’s tempting to see the whales’ teeth in a similar light — the blue whale, after all, in its vast placidity and inscrutability, offers a tempting totem, a synecdoche of oceanic feeling. And perhaps the moth in the machine furnishes a way to re-nature the computer, to redeem life’s agency in the midst of computation. Once, at the Field Museum, a specimen managed to doff the peculiar mortal habit of data altogether. In the prep lab one morning, I opened the freezer and was greeted by an earsplitting call: a kind of wet stridulation at the far end of hearing, its frequency so high it seemed to bore into the jaw. It was the cry of something alive. I began to dig through the piled-up paper bags of frozen specimens, spilling them in heaps on the tiled floor. Finally isolating the bag from which the plaintive alarm issued, I found a little brown bat — a winged walnut pulsing rapidly with breath, alive but too cold to fly. It had arrived with a shipment of window-broken birds from McCormick Place the previous afternoon, some 12 hours before. It’s telling that I struggled with the question of what to do — after all, the bat still was a specimen. Or was it? It had been collected; it was on the threshold of preparation. And yet its vital determination exceeded the bounds of the relevant rituals. Eventually, my prep-lab colleagues and I reached a suitably scientific-seeming conclusion: a bat that survived the night in our freezer was a bat that ought to have another chance at the gene pool.
And so we found a window overlooking Lake Shore Drive, cracked loose its long-disused latch, and held the bag open in the wind. After a long moment, the bat fled in a blur, disappearing into Chicago’s booming late-autumn breeze. It disappeared into the invisible cabinet of its unmeasured curiosity, its habit secreted in the wind. | Matthew Battles | https://aeon.co//essays/a-museum-s-cabinet-of-curiosities-is-also-a-chamber-of-secrets | |
Knowledge | I have books on chopping wood, baking bread and butchery. But what do I really know if I never set foot in the forest? | Anyone serious about their firewood will tell you that they get theirs early in the spring. They will also tell you that the best way to dry your wood is to chop it into short logs and arrange them in loose airy stacks somewhere that the wind can get at them. By weight, dry wood of all kinds releases roughly the same amount of energy when burnt, so the experienced hand knows to look for dry and dense wood and is not easily seduced by volume. I did not know these things until recently, but now that I do, this knowledge seems essential. It makes me a better man. Most of what I know about firewood comes from a Norwegian book entitled Hel Ved (2011), which roughly translates as ‘solid wood’. It was written by the novelist Lars Mytting and contains detailed discussions of every aspect of sourcing, chopping, drying, storing and burning wood. There are tables in the back listing the drying rates and percentage of ash you can expect from different species of tree. There are references to research conducted by something called the Norwegian Institute of Wood Technology. Mytting is serious about his firewood. This is the kind of book I knew I had to read before I had even seen a copy. I grew up in Norway, but now live in the UK. Fortunately, my parents could be convinced to send one to me, and I read it from cover to cover the weekend it arrived in the post. I have since studied parts of it as though preparing for an exam. This is not out of character. The book sits comfortably on my shelf next to a number of other, similar titles. One discusses, in exhaustive detail, the virtues of various flours when baking bread; another contains detailed instructions for constructing a back garden wood-burning oven; another still provides butchery instructions for every bird and mammal found on the British Isles, as well as some found only elsewhere. Collectively, they form a kind of eccentric survivalist reference library that takes up almost an entire Ikea Billy bookshelf. There are some individuals for whom such a reference library makes undoubted practical sense. If you live in an extremely rural community, some mastery of this knowledge is likely essential both for comfort and survival. I, on the other hand, live in central London. The two pigeons nesting in our chimney suggest that it is a long time since the disabled fireplace in our converted Victorian terrace last saw any action, and the local council would put an end to any aspirations I might have for a wood-burning oven in what can only with considerable generosity be called our back garden. There is, in short, perhaps some absurdity in my taste for survivalist literature. And yet I am not alone. I belong to a growing community of woodsmen, butchers and craftsmen of various kinds who ply their trade from the armchair. Hel Ved has sold more than 150,000 copies in a country with a population of just under five million, and has spent a year on Norway’s nonfiction bestsellers list. Following its success, Norway’s national broadcaster, which has a public service mandate similar to that of the BBC, aired a 12-hour television programme about firewood; one in five Norwegians watched at least part of it. An international audience also seems to have been captivated, with popular stories following on the BBC and in The New York Times.
Evidently the audience for this kind of book extends well beyond the ranks of those to whom it might offer genuine practical value. A book such as Hel Ved is, in any case, not the portable and durable field-guide that the serious practitioner would require. These are weighty, fully illustrated tomes, designed for a coffee table. They are the reason the height of Billy bookshelves can be adjusted. What’s more, it will come as no surprise that much of this audience of armchair woodsmen is, like me, male. I belong to a demographic of young men who are increasingly hungry for the kind of knowledge these books supply, and who proudly display our survivalist libraries as a mark of status, rather in the way that young aristocrats once displayed their Grand Tour portraits. In addition to revealing much about firewood and butchery, my growing library therefore suggests something important about contemporary ideas of what it is to be a man. And as a philosopher, I find myself pondering the true nature of those ideas. ‘Hel ved’ means ‘solid wood’, but it is also a Norwegian expression denoting someone of a sound and reliable character. This translation gets much closer to explaining why Mytting’s book and others like it command my attention. They are as much about cultivating a certain kind of character as they are about their particular subject matter. In moments of reflection, I often find myself worrying that a real man knows some things that I do not. He knows how to go out into the woods and return with usable firewood. He knows how to butcher a pheasant or a squirrel. He knows not just the ideals of nose-to-tail cooking, but the grisly mechanics as well. One reason I might want to know how to butcher a pig or to stack firewood is that I might be required to do so. Another is that I want to be a better man. Perhaps I am in the grips of the old, outmoded ideal of man as the provider, the hunter-gatherer. This is the vision of manhood embodied in Ernest Hemingway’s solitary heroes, forged through battle with noble adversaries in the deep ocean or on the plains of Africa, or, at the very least, in the bullring. Or perhaps I have been romanced by the same sirens that caught Henry David Thoreau’s ear and drove him away from the city and back to nature at Walden Pond. The particular ideal I have been preparing for, however, strikes me as both new and different from these predecessors, though each still survives in various forms. In the older cases, the knowledge the man requires plays what philosophers call an instrumental role. Both heroes have independent aims and aspirations, and knowledge is of value only to the extent that it can be enlisted in service of these independent aims. The knowledge that I am pursuing, by contrast, appears to be a case of knowledge for its own sake. It will not make me any better at flourishing in my actual environment, or in an environment to which I secretly long to escape. It has almost no practical application in my life, but I desire it nevertheless. Though I do not mean to endorse it over others, or to exclude women, mine is an emerging masculine ideal that simply requires a man to possess knowledge of a particular kind. There are some things a man of this kind simply ought to know. And this new ideal has evolved, I think, fairly naturally from those that came before it.
As a child I was taught how to whittle a bow and arrow from the soft pliable branches of early spring. My grandfather encouraged my cousin and me to make a flagpole from a recently felled tree. He taught us to carry a small knife at all times. I can still gut a fish or shell a crab with ease. These lessons were no doubt imparted because of their presumed instrumental value, and with the hope that they would serve me as they had served my teachers. Yet as the world I live in diverged from theirs, this instrumental value diminished and, in some cases, disappeared entirely. I could have abandoned this knowledge altogether. Not wanting to go this far, many of us have opted instead to perform a kind of alchemy, transforming what was once of instrumental value into knowledge valued for its own sake. But what sort of knowledge are we left with? A few months after the end of the Second World War, during which he had served as an intelligence officer, the Oxford philosopher Gilbert Ryle made his Presidential Address to the Aristotelian Society in London. The paper he delivered was one of the first to draw a formal distinction between two different kinds of knowledge. Despite being a philosopher, Ryle appears to have been an eminently practical man. A former colleague of his once told me that his standard example of a good, virtuous activity was digging. It is perhaps not surprising, then, that Ryle saw himself as correcting an excessive intellectualism, one that he traced at least as far back as Plato’s tripartite division of the soul. In this view, there is a part of us that does the thinking, a part of us that does the doing, and what Ryle considered a mysterious ‘Janus-faced’ intermediary that connects the two, having its foot in both camps but resting in neither. Ryle found much to dislike about this picture. He noted, for example, that much of what we do — such as chopping wood — can be done intelligently or skilfully, as well as stupidly. According to the Platonic theory which he opposes, any intelligence exhibited in activity must be traced to the part of us that thinks. Doing something, like chopping wood, cannot itself be regarded as an exercise in intelligence. In the Platonic construction, chopping wood intelligently must be to chop wood while simultaneously performing a separate act of thought or theorising in which certain facts are contemplated in a way that guides the activity. This picture, argued Ryle, is a mistake. The graceful dancer does not do two things at once, but rather one thing in a certain way. The same is true of the woodsman who chops wood intelligently. The intelligence is part of the performance and not separate from it. According to Ryle, intelligence can be exercised directly in certain kinds of practical performances. Not all intelligence, or skill, or cunning, is expressed through thought. Some is expressed directly in action. The same argument can be made by noticing that thinking and reasoning are themselves things that are done, and that they too can be done both cleverly and foolishly. In ‘What the Tortoise Said to Achilles’ (1895), Lewis Carroll engages his characters in discussion of a simple argument consisting of two premises and a conclusion. The Tortoise challenges Achilles to show him how logic could ever force him to accept the conclusion of an argument.
Achilles responds by stating that if the Tortoise accepts his two premises, then he must also accept the conclusion. The Tortoise understands, but asks Achilles to make this step in his argument clear by writing it down and adding it as a third premise to his argument. ‘Very well,’ says Achilles, proceeding to restate his argument. If the Tortoise accepts these three premises, then he must also accept the conclusion. Again the Tortoise understands, but again he asks Achilles if he should not add this statement as a fourth premise in his argument, since the Tortoise must accept the conclusion only if he accepts this further claim. At this point, Achilles sees that there is no end in sight. He can keep supplying premises indefinitely and the Tortoise can keep asking why. What went wrong, says Ryle, is that Achilles assumed that knowing how to reason consisted in the knowledge of some proposition that can be written down and placed alongside the other facts of his argument. This, as the Tortoise made plain, is not so. If those who follow Ryle are right, teaching someone how to reason does not consist in teaching some additional fact or premise. There are two kinds of intelligence at issue, each corresponding to the exercise of a different kind of knowledge. Ryle calls these two different forms of knowledge ‘knowing-that’ and ‘knowing-how’. The former is knowledge of propositions or facts: for example, knowing that the square root of 81 is nine, or that the army of the Ottoman Empire once reached the gates of Vienna. It can be written down and communicated through books, which might be why it comes most naturally to mind when we think of knowledge. Knowing-how, by contrast, is better thought of as a kind of skill or ability. It is the knowledge we exercise when we demonstrate that we know how to reason, or ride a bike, or swim. As the Tortoise purports to make clear, it cannot be reduced to knowledge of the first kind. Knowing how to reason or ride a bike does not consist in a series of facts that can be written down and presented to the Tortoise. It is knowledge of a different kind entirely. This is why, while there are many excellent books about bikes and cycling, there are few books that will teach you how to ride a bike. If Ryle is right (and there are those who deny it), the relation between knowledge and my masculine ideal is more complex than it first appears. To say that there are some things that a man must know is not yet to say what kind of knowledge they involve, nor yet how that knowledge might best be obtained. If knowledge-that is all that is needed, then Hel Ved and books like it offer a viable route to becoming the kind of man I want to be. If, on the other hand, the knowledge I value takes the form of know-how, then there are limits to how successfully I can pursue it from the armchair. Given the origins of my ideal, it is most likely a combination of the two. Books can lead me part of the way, but they will leave both me and the Tortoise unsatisfied. If I truly aspire to knowing some of the things that were once essential to men like me, I must acknowledge that some of them — some of the knowledge I now value for its own sake — takes the form of know-how. Books will never capture this part of the ideal. Like men before me, I must at some point leave the armchair and head out into the woods. I have read, at least, that early spring is the time to go searching. | Michael Gibb | https://aeon.co//essays/do-real-men-get-their-know-how-from-books | |
Stories and literature | City life is a constant, maddening hum. Only in a place like the Sahara can we hear the nothingness that revives | By the twisted, paradoxical nature of things, a 4x4 with a throbbing engine — the 4.2 litre non-turbo diesel in a Nissan Patrol or Toyota Landcruiser — is the best way to experience the single most entrancing characteristic of the desert, the one that sucks you in and leads you to all the others you had only glimpsed or imagined before: that is, the silence. The rattly, combustion-wheezing, air-filling noise of the car stops with a shudder and a grunt, and the silence rushes in to fill the vacuum. You can feel it sucking away from the ear not air but something finer, some granular constituent of the ether, maybe the secret ingredient of dark matter … Whatever it is, it gets sucked away, leaving your hearing more acute but with less to listen to. There is the gentle click of the bonnet metal contracting. The slamming of the door by the last person getting out. The sound of bare feet squashing out sand in a few exploratory steps. And then, nothing. If you walk (and after the engine stops, there is little fun in staying in the car), you find that answers appear to all the forlorn questions you had long ago given up expecting an answer to. The answer comes! But it might not be in words. That’s the craziest thing: to ask a question in words and receive a satisfactory answer that cannot be converted back into words without losing the essential ingredient. From my earliest youth, I have always believed that the Sahara is the big one, the Everest of deserts — super-arid and super-empty. You can’t really walk out into it. All towns, all villages kind of bleed into it through an ugly hinterland of square irrigated patches and ditches whose banks are white with salt, and odd lobotomised palms and clusters of acacia with a donkey or goats hiding in the speckled shade, making a noise. So you take a car. It whizzes you far out, though in fact three or four kilometres is enough by day. (You might need to go further at night. I’ve seen city lights glowing under the horizon 60km into the desert on a moonless night. It can spoil things, but not always.) Then the engine stops. The adventure begins. Every journey made for the first time is an adventure. If it’s windy, you still get it, the silence. But only at first. Then you notice the wind on clothes, or the rustling of a tarpaulin. If the wind is really high, it’ll be picking up sand and shooting it like mist over the ground — swirls, not cloudlike but dreamlike, silky, low-down patterns of the universe, all lit up. The desert is windy at certain times of the day, sometimes all day. You never go days without wind, but there are always periods of calm. Strangely, they often coincide with that moment of getting out of the car, with its big tough tyres and hot exhaust pipe and ticking, contracting bonnet. You start listening to the silence. You start listening for imperfections, proofs against its existence. Maybe the ticking is reassuring, but it grows less frequent, fainter. People look for proof of their beliefs when they are young, when they are charged with hope. Many give up at what seems a very early age. They prefer the comfort of denial, of nothing with a small ‘n’, a rubbish nothing, easily shouldered aside by music, appetites, money, entertainment and controversy culture, stuff. Cars.
The car that got you here, which you can now leave behind. There is an Arab saying that the donkey that brought you to the palace must be dismounted before you can enter. So you listen for proof of noise. It’s like picking up a celestial phone: instead of an answering voice, instead of heavy breathing, you hear nothing. You listen to disprove the existence of silence. It is your first act, your natural act as a believer. You don’t believe. Belief is no use here. You listen and hear … nothing. I had been visiting Egypt regularly since 1993. My wife is Egyptian. And yet, until 2004, I had never been in the desert. I’d seen it, driven along its edge in an air-conditioned car, but I’d never experienced it. For me, the whole Egyptian experience was about the craziness of Cairo. I loved the endless labyrinths of the bazaars behind Khan el-Khalili, night-time drinking on the rooftop bar of the Odeon Palace, listening to music at the El Sawy Culturewheel, battling the six-lane traffic chaos of the Corniche, taking a battered Lada taxi home at dawn through the City of the Dead. But even then, in what would be the quiet times in any other city, Cairo would be humming, buzzing, invading my eardrums. A film director once told me that shooting exteriors in Cairo is a nightmare. Often they fake it, using Tunisian locations instead. The reason is the sound: the hum, they call it. You get it even if you shoot at 3am on Zamalek island, the wealthy garden district in the middle of the Nile. It’s the aural equivalent of smog; hardly noticeable at first, not a problem for many, but insidious, worming its way inside you, rattling you, shaking you up like a cornflake packet. Your contents never settle. Someone told me a story about a man who bought adulterated cocaine. A flake of aluminium sulphate lodged in his sinus and burned a hole right through his skull and into his brain. I pictured Cairo’s hum as a slow acid eating its way through the fragile bones of the ear, into the cortex. The smog was definitely getting worse. When I first started visiting the city, it could be bad in the centre some days but there wasn’t a problem further out. By 2004, if you went up to Wadi Degla, the nearest desert to the city, you would see a grey pall hanging over Cairo like a giant mouthful of infected phlegm. And even at Wadi Degla, though the air was cleaner, you could hear the trucks changing gear on the motorway. If you stopped and remained still, you could always sense some inner tremble as your body resonated with the deep hum, the pulse of Africa’s biggest city. It took a while for me to notice that there was never any gap between loud noises, never any moment of relative quiet. Even deep in the night, I would wake and the whole house would be vibrating in the heat, the churning air-conditioners resonating between buildings. Far off came the sounds of shunting trains, dogs barking, the continuous rumble of the ring road … And then there was the call to prayer. Speakers were fixed to our balcony, and in our first week of residence I thought I would end up cutting the cables or going mad. After a week, I barely noticed it. I blanked it out of consciousness. When I did happen to pay attention, I liked it. It was a cosy reminder that our world had some structure to it, unlike the humming, murmuring, continuous wall of sound outside.
That was what you could never get away from, and things you can never escape feel like an invasion of the soul. They burrow into you. I could feel the noise, the hum, damaging my inner organs. I didn’t know how, exactly; I just knew it would happen, sooner or later. I spent two days at the Red Sea resort of Ain Sukhna. It was… quiet. That’s when I knew that my city days were numbered. Then I discovered the desert: not only quieter, but quietest. I didn’t yet have a car, which dictated the plan for my first real desert excursion. I had read about people using trolleys to tow water through the Australian outback; one group even used an old ice cream cart. I drew up a design on a scrap of paper and for £50 had it built in a backstreet Cairo metal shop. Four moped wheels mounted on a plywood base with sturdy steel axles. No steering. You pulled the trolley with rope and pushed from behind, often at the same time, especially in deep sand. In lieu of steering you just tugged more on one side than the other and it kind of skidded onto a new course. It worked brilliantly. And it was quiet. The trolley did look insane, however. People laughed at it, even in front of me, which was a little disconcerting. Still, emboldened by the fact that I was travelling with my friend the ex-US navy sailor Steve Mann (a long way from the sea), we braved the ridicule and actually made a pretty decent journey — more than 150km into the Sahara. We piled the trolley high with 72 litres of water, which was enough for about a week. We drank it all in five and a half days. It’s thirsty work hauling a trolley through the desert, even in winter when it was frosty at night and under 30°C (86°F) by day. I learnt a lot on that first trip. I learnt that the silence was there, but that you had to pay for it in heat and discomfort. I learnt, too, that humans are track-following creatures. It’s our default setting. You’re crossing this lovely expanse of nothingness, pure unadulterated sand as far as the eye can see. Then you notice some tracks. And you start to follow them. If you see tracks and you don’t know which way to go, you follow the tracks. There is little logic in it but everyone is the same, until they deliberately break the habit. You might extend this rule to everyday life. Most of us are in such a deep track we dignify it with a new name: a rut. And wasn’t I following the same rut as everyone else, sucking up the noise for the supposed benefits of living in a big bad city? Later, I bought a car: a 16-year-old, short-wheelbase Toyota Landcruiser, petrol-driven with strengthened axles. It was old and rattly, but this car was the business for getting out into the desert. And because it was so noisy, the silence was even more intense when it stopped. The leaf suspension had been upgraded so that I could carry more than a ton of fuel and water in the back. I had a removable 600 litre tank that fitted inside the car, and 10 jerry cans, old ones left over from the 1967 war but still perfectly sound. I loved taking visitors out for their first time in the Sahara. There was a huge drop-off you hit after half an hour of driving over dead flat gravel. The ground just fell away down a series of escarpments, revealing the entire Fayyum depression and Lake Qarun in the distance.
I used to park right on the edge of the cliff and people would get out and hear the silence, feel their ears emptying as they looked out at the incredible view. It would leave them speechless, reverent. But cars were too easy. The real way to see the desert was the Bedouin way: camels. I made several journeys with my friend Richard Mohun and a group of Bedouin from Dakhla Oasis in southern Egypt. Camels are always snuffling and spitting and (though less often than mules) farting. If you’re downwind of them in camp, it is never completely quiet. But you can always find silence when you wander off looking for stone tools or petroglyphs. The camels were a good way to get to the silent places. Richard taught business courses in the UK. He said we should start a company that would take us into the desert regularly. We considered exporting Bedouin clothes or wood-burning stoves, or offering mountain bike tours. In the end we settled on selling silence. We employed three Bedouin and nine camels and took western tourists — from the UK, Sweden and Germany — far out into the desert. We set a price: £1,550 ($2,400) for 14 days. There was some detailed reasoning behind this figure but we still only just managed to break even. And for those who could pay the price of silence in hard work and preparation rather than money, there was the alternative of going it alone, and I always encouraged such efforts. One little earner we happened upon was to sell silence to the California-based computer company Oracle. They sent executives — typically in groups of 10 or 12 — to experience a mind-changing desert sojourn. We took them on a long hike and set up a Bedouin camp in the dunes, but the most significant adventure for many of them was when they were told to go off and find a spot where they could see and hear no one else. They trudged off into the distance in a kind of expanding star formation until each was invisible to the others. They then sat and listened to the nothingness around them. Of course, not everyone was affected in the same way. Some were scared. Others couldn’t get enough of it. One woman claimed she was developing an allergy to sand. A man told us his haemorrhoids were playing up and apologised for not being as appreciative as the others. Our biggest journey into the heart of silence was in 2010, following the route of the German explorer Friedrich Gerhard Rohlfs, who in 1874 covered the 900km (560 miles) northwest from Dakhla Oasis to Siwa Oasis, a lonely spot known as the site of an ancient oracle consulted by Alexander the Great. We were six Europeans, four Bedouins, and nine camels. Rohlfs took 20 camels and lost half. We almost lost one but it revived miraculously. Camels will attack dune after dune but if you aren’t careful they can suddenly get overtired. Then they just refuse to get up, staying put until they die. The route traversed the edge regions of the Great Sand Sea — at more than 114,400km sq (44,170 miles sq), that’s a sand dune roughly the size of England or the state of Mississippi. It took 27 days to cross. In that time we saw no people and only three vehicles in the far distance: probably Libyan smugglers. We didn’t even hear them, only saw the sun glint off their windscreens five kilometres away as they drove on past. Twenty-seven days of silence. Well, as silent as you can get with 10 people, nine camels and a strong breeze blowing from the north. 
In other words, you only really noticed the silence when you were away from the group and the wind had died down, which was often enough. And of course there was no hum, which I have come to believe is the real killer, the sleep destroyer, the nerve wracker, the gut churner, the chest crusher. As you get older you value silence more. Your nerves get jangled more easily. Loud music becomes less and less attractive. Instead of wanting to rev up, you seek ways to calm down. But I suspect the search for real silence goes deeper than just a desire to relax. It’s no accident that many religious orders have vows of silence. Only in silence can the soul unburden itself and then listen out for subtler signs, information from the unknown inner regions. How much silence does a person need? You can get greedy for it, addicted to it. I know people who spend half their time in the desert and the other half working out how to get back to it. They are running away from life, some say; they are certainly running away from noise. Recent research suggests that long-term exposure to noise doesn’t just damage hearing (and the average decibel level in Cairo is 85, often getting to 95 and higher, which is only slightly quieter than standing next to a jackhammer); it damages your heart. Continuous noise causes chronic stress. Stress hormones become your constant companion, circulating day and night, wearing out your heart. That must be why the first few days in the desert seem so wonderfully rejuvenating. I’ve seen an elderly man — a retired heart surgeon, coincidentally — go from doddering around the camp to springing along the edge of dunes and rocky cliffs. That’s the power of silence. You know you’re cured when you relish the sound of loud pop music again. Crowded clubs hold no fear; the pumping bass seems like a familiar friend, not a message from the Antichrist. You can ‘take it’. Modern life is ‘OK’. You’ve detoxed and the result is that you seem more youthful. Young people haven’t filled themselves up with noise (yet), so they actively seek it out. For those who have had too much, then emptied it out, the glad return to a noisy world is invigorating. How long does the immunity last? About two weeks, if you’re lucky. | Robert Twigger | https://aeon.co//essays/how-the-sound-of-silence-rejuvenates-the-soul | |
Economics | Governments now answer to business, not voters. Mainstream parties grow ever harder to distinguish. Is democracy dead? | Last September, Il Partito Democratico, the Italian Democratic Party, asked me to talk about politics and the internet at its summer school in Cortona. Political summer schools are usually pleasant — Cortona is a medieval Tuscan hill town with excellent restaurants — and unexciting. Academics and public intellectuals give talks organised loosely around a theme; in this case, the challenges of ‘communication and democracy’. Young party activists politely listen to our speeches while they wait to do the real business of politics, between sessions and at the evening meals. This year was different. The Italian Democratic Party, which dominates the country’s left-of-centre politics, knew that it was in trouble. A flamboyant blogger and former comedian named Beppe Grillo had turned his celebrity into an online political force, Il Movimento 5 Stelle (the Five Star Movement), which promised to do well in the national elections. The new party didn’t have any coherent plan beyond sweeping out Old Corruption, but that was enough to bring out the crowds. The Five Star Movement was particularly good at attracting young idealists, the kind of voters who might have been Democrats a decade before. Worries about this threat spilt over into the summer school. The relationship between communication and democracy suddenly had urgent political implications. The Democratic Party had spent two decades suffering under the former prime minister Silvio Berlusconi’s stranglehold on traditional media. Now it found itself challenged on the left too, by internet-fuelled populists who seemed to be sucking attention and energy away from it. The keynote speaker at the summer school, the Democratic Party leader and prospective prime minister Pier Luigi Bersani, was in a particularly awkward position. Matteo Renzi, the ‘reformist’ mayor of Florence, had recently challenged Bersani’s leadership, promising the kind of dynamism that would appeal to younger voters. If Bersani wanted to stay on as party leader, he had to win an open primary. The summer school gave him a chance to speak to the activists in training, and try to show that he was still relevant. I was one of two speakers warming up the crowd for Bersani. The party members and reporters endured us patiently enough as they waited for the real event. However, when Bersani started talking, he gave a speech that came strikingly close to a counsel of despair. He told his audience that representative democracy, European representative democracy in particular, was in crisis. Once, it had offered the world a model for reconciling economy and society. Now it could no longer provide the concrete benefits — jobs, rights, and environmental protection — that people wanted. In Italy, Berlusconi and his allies had systematically delegitimised government and undermined public life. The relationship between politics and society was broken. Bersani knew what he didn’t want — radical political change. Any reforms would have to be rooted in traditional solidarities. But he didn’t know what he did want either, or if he did, he wasn’t able to describe it. His speech was an attack, swathed in the usual billowing abstractions of Italian political rhetoric, on the purported radicalism of both his internal party opponent and the Five Star Movement.
He didn’t really have a programme of his own. He could promise his party nothing except hard challenges and uncertain outcomes. Why do social democrats such as Bersani find it so hard to figure out what to do? It isn’t just the Italians who are in trouble. Social democrats in other countries are also in retreat. In France, François Hollande’s government has offered many things: a slight softening of austerity (France’s deficit this year will be somewhat higher than the European Commission would like); occasional outbursts of anti-business rhetoric (usually swiftly contradicted by follow-up statements); higher taxes on the very rich (to be rolled back as soon as possible). What it has not offered is anything approaching a coherent programme for change. Germany’s Social Democrats are suffering, too. The Christian Democrat-led government can get away with austerity measures as long as it convinces voters that it will do a better job of keeping their money safe from the Spaniards, Italians and Greeks. And the Social Democratic Party’s candidate for Chancellor, Peer Steinbrück, is not well placed to object. In 2009 he helped introduce a constitutional measure to limit government spending, hoping that this would make his party look more responsible. He now appears like a weaker, less resolute version of his opponent, Chancellor Angela Merkel, and has 32 per cent job approval. Greece’s mainstream socialist party, Pasok, won only 12.3 per cent of the vote in the election in June last year. Spain’s social democrats are perhaps in even greater disarray than the conservative government. Ireland’s Labour Party, a junior party in the current government, saw its vote collapse from 21 per cent to 4.6 per cent in a by-election in March. Where they are in opposition, European social democrats don’t know what to offer voters. Where they are in power, they don’t know how to use it. Even in the United States, which has never had a social democratic party with national appeal, the Democrats have gradually changed from a party that belonged ambiguously to the left to one that spans the limited gamut between the ever-so-slightly-left-of-centre and the centre-right. It, too, has had enormous difficulty in spelling out a new agenda, because of internal divisions as well as entrenched hostility from the Republican Party. This isn’t what was supposed to happen. In the 1990s and the 2000s, right-wing parties were the enthusiasts of the market, pushing for the deregulation of banks, the privatisation of core state functions and the whittling away of social protections. All of these now look to have been very bad ideas. The economic crisis should really have discredited the right, not the left. So why is it the left that is paralysed? Colin Crouch’s disquieting little book, Post-Democracy (2005), provides one plausible answer. Crouch is a British academic who spent several years teaching at the European University Institute in Florence, where he was my academic supervisor. His book has been well read in the UK, but in continental Europe its impact has been much more remarkable. Though he was not at the Cortona summer school in person, his ideas were omnipresent. Speaker after speaker grappled with the challenge that his book threw down. The fear that he was right, that there was no palatable exit from our situation, hung over the conference like a dusty pall. Crouch sees the history of democracy as an arc. In the beginning, ordinary people were excluded from decision-making.
During the 20th century, they became increasingly able to determine their collective fate through the electoral process, building mass parties that could represent their interests in government. Prosperity and the contentment of working people went hand in hand. Business recognised limits to its power and answered to democratically legitimated government. Markets were subordinate to politics, not the other way around. At some point shortly after the end of the Second World War, democracy reached its apex in countries such as Britain and the US. According to Crouch, it has been declining ever since. Places such as Italy had more ambiguous histories of rise and decline, while others still, including Spain, Portugal and Greece, began the ascent much later, having only emerged from dictatorship in the 1970s. Nevertheless, all of these countries have reached the downward slope of the arc. The formal structures of democracy remain intact. People still vote. Political parties vie with each other in elections, and circulate in and out of government. Yet these acts of apparent choice have had their meaning hollowed out. The real decisions are taken elsewhere. We have become squatters in the ruins of the great democratic societies of the past. Crouch lays some blame for this at the feet of the usual suspects. As markets globalise, businesses grow more powerful (they can relocate their activities, or threaten to relocate) and governments are weakened. Yet the real lessons of his book are about more particular forms of disconnection. Neo-liberalism, which was supposed to replace grubby politics with efficient, market-based competition, has led not to the triumph of the free market but to the birth of new and horrid chimeras. The traditional firm, based on stable relations between employer, workers and customers, has spun itself out into a complicated and ever-shifting network of supply relationships and contractual forms. The owners remain the same but their relationship to their employees and customers is very different. For one thing, they cannot easily be held to account. As the American labour lawyer Thomas Geoghegan and others have shown, US firms have systematically divested themselves of inconvenient pension obligations to their employees, by farming them out to subsidiaries and spin-offs. Walmart has used hands-off subcontracting relationships to take advantage of unsafe working conditions in the developing world, while actively blocking efforts to improve industry safety standards until 112 garment workers died in a Bangladesh factory fire in November last year. Amazon uses subcontractors to employ warehouse workers in what can be unsafe and miserable working conditions, while minimising damage to its own brand. Instead of clamping down on such abuses, the state has actually tried to ape these more flexible and apparently more efficient arrangements, either by putting many of its core activities out to private tender through complex contracting arrangements or by requiring its internal units to behave as if they were competing firms. As one looks from business to state and from state to business again, it is increasingly difficult to say which is which. The result is a complex web of relationships that are subject neither to market discipline nor democratic control. Businesses become entangled with the state as both customer and regulator.
States grow increasingly reliant on business, to the point where they no longer know what to do without its advice. Responsibility and accountability evanesce into an endlessly proliferating maze of contracts and subcontracts. As Crouch describes it, government is no more responsible for the delivery of services than Nike is for making the shoes that it brands. The realm of real democracy — political choices that are responsive to voters’ needs — shrinks ever further. Politicians, meanwhile, have floated away, drifting beyond the reach of the parties that nominally chose them and the voters who elected them. They simply don’t need us as much as they used to. These days, it is far easier to ask business for money and expertise in exchange for political favours than to figure out the needs of a voting public that is increasingly fragmented and difficult to understand anyway. Both the traditional right, which always had strong connections to business, and the new left, which has woven new ties in a hurry, now rely on the private sector more than on voters or party activists. As left and right grow ever more disconnected from the public and ever closer to one another, elections become exercises in branding rather than substantive choice. Crouch was writing Post-Democracy 10 years ago, when most people thought that things were going quite well. As long as the economy kept delivering jobs and growth, voters didn’t seem to mind about the hollowing out of democracy. Left-of-centre parties weren’t worried either: they responded to the new incentives by trying to articulate a ‘Third Way’ of market-like initiatives that could deliver broad social benefits. Crouch’s lessons have only really come home in the wake of the economic crisis. The problem that the centre-left now faces is not that it wants to make difficult or unpopular choices. It is that no real choices remain. It is lost in the maze, able neither to reach out to its traditional bases of support (which are largely dying or alienated from it anyway) nor to propose any grand new initiatives, the state no longer having the tools to implement them. When the important decisions are all made outside of democratic politics, the centre-left can only keep going through the ritualistic motions of democracy, all the while praying for intercession. Most left-wing parties face some version of these dilemmas. Cronyism is less a problem than an institution in the US, where decision-makers relentlessly circulate between Wall Street, K Street, and the Senate and Congress. Yet Europe has some particular bugbears of its own. Even if national political systems were by some miracle to regain their old responsiveness, the power of decision has moved to the European Union, which is dominated by a toxic combination of economic realpolitik and bureaucratic self-interest. Rich northern states are unwilling to help their southern neighbours more than is absolutely necessary; instead they press for greater austerity. The European Central Bank, which was deliberately designed to be free of democratic oversight, is becoming ever more important, and ever more political. Social democrats once looked to the EU as a bulwark against globalisation — perhaps even a model for how the international economy might be subjected to democratic control. Instead, it is turning out to be a vector of corrosion, demanding that weaker member states implement drastic economic reforms without even a pretence of consultation. 
Let’s return to Italy, the laboratory of post-democracy’s most grotesque manifestations. Forza Italia, Silvio Berlusconi’s elaborate simulacrum of a political party, is a perfect exemplar of Crouch’s thesis: a thin shell of branding and mass mobilisation, with a dense core of business and political elites floating free in the vacuum within. After the Cortona summer school, Bersani won his fight with Renzi in November last year and led his party into the general election. His coalition lost 3.5 million votes but still won the lower house in February, because the Italian electoral system gives a massive bonus to the biggest winner. It fell far short of a majority in the upper house and is doing its hapless best to form a government. Grillo’s Five Star Movement, on the other hand, did far better than anyone expected, winning a quarter of the votes. Grillo has made it clear that his party will not support the Democratic Party. Renzi has tried to advance himself again as a compromise leader who might be more acceptable to Grillo, so far without success. In all likelihood there will be a second general election in a few months. The Italian Democratic Party is caught on one tine of the post-democratic dilemma. It is trying to work within the system as it is, in the implausible hope that it can produce real change within a framework that almost seems designed to prevent such a thing. As the party has courted Grillo, it has started making noises about refusing to accept austerity politics and introducing major institutional reforms. It is unclear whether senior Democratic figures believe their new rhetoric; certainly no one else does. If the party does somehow come to power, the most it will do is tinker with the system. The Five Star Movement has impaled itself on the other tine, as have the Indignados in Spain, Occupy in the US and UK, and the tent movement in Israel. All have gained mass support because of the problems of post-democracy. The divide between ordinary people and politicians has grown ever wider, and Italian politicians are often corrupt as well as remote. The Five Star Movement wants to reform Italy’s institutions to make them truly democratic. Yet it, too, is trapped by the system. As Grillo told the Financial Times in October: ‘We die if a movement becomes a party. Our problem is to remain a movement in parliament, which is a structure for parties. We have to keep a foot outside.’ The truth is, if the Five Star Movement wants to get its proposals for radical change through the complex Italian political system, it will need to compromise, just as other parties do. Grillo’s unwillingness even to entertain discussions with other parties that share his agenda is creating fissures within his movement. Grillo is holding out for a more radical transformation, in which Italian politics would be replaced by new forms of internet-based ‘collective intelligence’, allowing people to come together to solve problems without ugly partisan bargaining. In order to save democracy, the Five Star Movement would like to leave politics behind. It won’t work. The problems of the Italian left are mirrored in other countries. The British Labour Party finds itself in difficulty, wavering between a Blairite Third Wayism that offers no clear alternative to the present government, and a more full-blooded social democracy that it cannot readily define.
The French left has mired itself in scandal and confusion. The Greek left is divided between a social democratic party that is more profoundly compromised than its Italian equivalent and a loose coalition of radicals that wants to do anything and everything except find itself in power and be forced to take decisions. All are embroiled, in different ways, in the perplexities of post-democracy. None has any very good way out. Ever since France’s president François Mitterrand tried to pursue an expansive social democratic agenda in the early 1980s and was brutally punished by international markets, it has been clear that social democracy will require either a partial withdrawal from the international economy, with all the costs that this entails, or a radical transformation of how the international economy works. It is striking that the right is not hampered to nearly the same extent. Many mainstream conservatives are committed to democracy for pragmatic rather than idealistic reasons. They are quite content to see it watered down so long as markets work and social stability is maintained. Those on the further reaches of the right, such as Greece’s Golden Dawn, find it much easier than the Five Star Movement or Syriza, the Greek radical-left coalition, to think about alternatives. After all, they aren’t particularly interested in reforming moribund democratic institutions to make them better and more responsive; they just want to replace them with some version of militaristic fascism. Even if these factions are unlikely to succeed, they can still pull their countries in less democratic directions, by excluding weaker groups from political protection. The next 10 years are unlikely to be comfortable for immigrants in southern Europe. Post-democracy is strangling the old parties of the left. They have run out of options. Perhaps all that traditional social democracy can do, to adapt a grim joke made by Crouch in a different context, is to serve as a pall-bearer at its own funeral. In contrast, a new group of actors — the Five Star Movement and other confederations of the angry, young and dispossessed — have seized a chance to win mass support. The problem is, they seem unable to turn mass frustration into the power to change things, to create a path for escape. Perhaps, over time, they will figure out how to engage with the mundane task of slow drilling through hard boards that is everyday politics. Perhaps, too, the systems of unrule governing the world economy, gravely weakened as they are, will fail and collapse of their own accord, opening the space for a new and very different dispensation. Great changes seem unlikely until they happen; only in retrospect do they look inevitable. Yet if some reversal in the order of things is waiting to unfold, it is not apparent to us now. Post-democracy has trapped the left between two worlds, one dead, the other powerless to be born. We may be here for some time. | Henry Farrell | https://aeon.co//essays/the-left-is-now-too-weak-for-democracy-to-survive | |
Love and friendship | I love her and it’s a secret. I love her so much it kills me, and you bet I’d sooner die than tell her | My mom runs fast for a 65-year-old. She’s small — 5 ft even — and clocks in at just over 100 lbs. Her compact frame slays in the juniors section of American department stores. I see her sprinting toward me as I stand on the corner of Austin’s busiest intersection, on its busiest fortnight — the two weeks it plays host to South by Southwest, the annual multimedia conference. It’s just after 11pm and traffic is an absolute shitshow. My mom’s always been sporty but since she stopped dyeing her hair she looks her age. As she gets closer, I worry that her brittle avian skeleton is going to crumple atop the hood of a swerving SUV. Being picked up by my parents is an experience I thought I’d grown out of entirely. After all, I am 33 years old, live in New York and am here on business. But they live just an hour outside of town, and I pulled the trigger on hotels late enough that I’m staying with them. They’ve been stuck in traffic for two hours coming to get me. I was on the phone with my dad, both of us barking over the imperious GPS voice — him in a road rage and me in a full-body eye-roll — when my mom bolted from the car to run ahead, figuring I’d be easier to peg on foot. I’m watching her beam and wave big, while running hard and yelling my full name in English, just like that: first name; last name. My parents both do this as though it’s for my benefit. Like, calling a child by their full government name is super-casual. Like, it’s not a dead giveaway as the weirdest, most ESL affectation in the world. I’m waiting with a 24-year-old colleague that I hired straight from college who idolises me and I’m worried that my mom will hurt herself and that people will see. The whole thing infuriates me. I refuse to eat the snacks that she’s tin-foiled from home. I love my mother a not-normal amount. It’s all twisty because she tried to kill me when I was young. Just kidding. My mom is an excellent mom. She knows I am irascible, prickly and antisocial. She knows that most human interaction makes me tired and that I either scare people away with precise invectives or trot out the fakest, nicest skinjob of myself because it requires zero effort. She nails me on all of it, asking one billion follow-up questions until I get behind my eyeballs and engage. She forces me to call distant relatives, dialling the phone and pressing it into my cheek while my eyes get hot and watery. She pulls rank all the time and once judo-flipped me onto my back in a grocery store to remind me where things stood. She is my favorite and it makes me crazy. You can tell that she was popular in school, but I am a fundamentally more popular person. I care more and I’m great at rules. I’ve known it since the first grade. When I was small I thought I was just cooler than my mom because of how foreign she is. She’s really foreign. You’d think it would kill her to get store-bought snacks, she’s that foreign. She grew up in a Korea filled with Koreans, married a Korean and then moved to Hong Kong in her mid-30s. I was 11 months and my brother was two years old. This was back when Hong Kong was a British Crown colony, which meant we were living in Asia with heaps of Australians and bronzed Europeans who dated Filipino women. It was all very James Clavell and linen shirts.
In any case, I speak four languages and am a ruthless assimilation ninja. I will renounce all kin in the name of camouflage because everything is a contest and I am a disgusting sell-out. It’s the twin moon to my being popular in any context provided I put my mind to it. I’m sure there’s a field of corn withering somewhere in my soul that fuels this despicable talent, but everyone’s got to die of cancer some time, right? My mother, on the other hand, speaks English poorly with a screwy, poncy Korean British accent, as if she learned it from watching one 1960s Merchant Ivory movie on repeat. She’s also ridiculously formal, deeply private and not a joiner. She transitions poorly. The move to Hong Kong with two wee kids and an absentee partner was rough. My father had elected to set up a shipping company. He was out of the country for eight months of the year, and sometime around my tenth birthday I discovered that he spoke conversational Russian for reasons that remain murky. All this is to say that he wasn’t around a lot. When I was five, I compound-fractured my arm, pulverising my elbow. I was on a play date at my mom’s friend’s house and so naturally blamed my mother. I actually remember lying on the floor, howling accusations of neglect at her while she frantically summoned an ambulance that arrived with a squad car and a firetruck in tow. I was already having a tough time adjusting at school, and it looked like I would miss weeks of class. I found speaking in English disorienting because we spoke only Korean at home. I even preferred Cantonese to English since we’d attended a local Chinese school for a week while waiting on test scores to admit us into a British private school. Forced to wear a massive cast during my fifth month of British school, I began referring to myself in the third person — my English name — announcing, daily, that ‘Mary would not be going to school.’ School was awful. I had to leave during the middle of the day for physical therapy that involved swimming and returning to class with inexplicably wet hair. Lunch sucked. My mom would pack the dumbest garbage. She once smeared bits of raw garlic left over from making kimchi onto white sandwich bread, thinking that’s how the garlic bread advertised at Pizza Hut was born. I waited until she got off work that night and yelled at her with rank breath. I’d eaten most of the seemingly innocent square, elated that a sandwich had turned up at all in a lunch box that usually contained punishment food that sometimes had eyes. The stress of navigating school as a teeny-tiny uncomfortable person with an enormous gimp wing was taking a toll. One lunch, I was dragging myself around the playground when I saw my mom standing by the fence, waving big and calling my name. I wanted so badly to ignore her. She was supposed to be at work and I didn’t have physical therapy that day so I was immediately suspicious. As confusing as her presence was, my curiosity did not outweigh my desire to be left alone. Especially by her. I began to back away so she started shouting loud enough to be heard over the playground din. I shuffled towards her with every intention to roundhouse-bludgeon her with my plastered arm. She held out a paper box. It was a McDonald’s happy meal: a cheeseburger one, which was my favorite. The offering was so out of character that I considered it a bribe. I wondered if my parents were getting a divorce since that was huge at my school at the time. I asked her what was going on. 
She mentioned something about how she wanted me to have a lunch that I liked. I then did what any normal kid would do and yelled and yelled about how embarrassing it was to have her at school with me during lunch of all times. She presented me with a sack of cheeseburgers that I could give out to my friends. I refused the damp bag and screeched about how it was so cheap that she didn’t spring for bright red boxes with toys for them as well. I made her take the burgers back with her. If I were an actress and had to think of something sad to make me cry in a scene, I would think about this moment. This and the time I was 13 when I kicked my mom across a room and ran away for two days because she tried to ground me — for breaking curfew after my friend Jacinta stole money from her dying grandmother so we could rent out a nightclub and write the names of those blackballed on the sign outside. For the record: I don’t know why people have kids. The summer before I turned 14, my mom, brother and I moved to Texas. We’d always known that some day before Hong Kong returned to Chinese rule, we’d join my mom’s side of the family in the US. While our Green Cards were being approved, my father bought a house in suburban San Antonio despite our extended family living 1,400 miles away in LA. After 13 years of sardine life at high-rise altitudes, he liked the idea of spreading out. The prospect of opening all our dresser drawers without hitting bed frames or doors sold him on Texas-sized everything. My father split his time between running a business in Asia and visiting us. When I arrived in Texas, it was mid-June and 104 degrees in the shade. I was fresh off a forced breakup with my Hong Kong boyfriend, a dishy 17-year-old rugby player. Between the heat and the heartbreak, the move was not my favorite. Trapped in the suburbs, I began to notice that the mother I’d largely ignored in Hong Kong was interesting — so long as she was talking about me. My mom was the only one of us with a driver’s licence. Some time in mid-July, I started speaking to her again on car rides and we became friends. She told me stories about how when I was two I would dangle out of my parents’ window on the 18th floor to play in the tiled flower box. She told me about the time I wandered off with another family in a park, which I totally remember because they had empirically superior toys. She said that when I was four, I stole hundreds of dollars from her and bribed my bus driver to drop me off last and to make a pitstop at the deli so I could buy candy on my way home. I’d stuffed the change in my shallow pinafore pockets and when my mother frantically berated me for stealing the money and trying to get myself kidnapped, I told her I loved money more than I loved her. I found all of this fascinating. This is going to sound absurd but my first year in Texas was the year that I first cared about being smart. I’d always prided myself on being popular. My older brother was the one with good grades and I was the one who dated burnouts from the year above him. There was something in the complete reboot of Texas, the comparative stillness of heavy skies and quiet nights that made me read a lot. I read a new book every other day and aced exams. Even as a sophomore, I easily slid in with the popular seniors. But by the time they graduated, I couldn’t be bothered to imprint on the next guard. I kept to myself and took a slew of Advanced Placement college classes.
School was easy for me but those years were tough on my mom. In Hong Kong she’d had tons of friends. She was active at church and there was a sizable Korean community. In Texas she didn’t have anyone but me and my brother. Every morning when the bus would come to pick us up while it was still dark out, I could see her slight backlit frame outlined in our blinds as she watched us drive away. A senior on the bus once asked if my mom knew that we could all totally see her. I told that kid to go fuck himself and to quit looking at my mom. To this day, I still can’t watch her watch us leave. It’s a blessing that life is riddled with diversions. I work a lot. I’ve never had the weeks between Christmas and New Year’s off, but these days I don’t love money how I used to. My mom though, I’m crazy about. I think about her all the time and can’t stand it. When she rings during a meal I get indigestion if I don’t call her back immediately. There’s a roiling shame spiral wherein I become resentful that she called at all and punish us both by prolonging the wait. I have no idea when my perception of my mother became the calculated crush of my life but it has. I don’t go home for birthdays or holidays, and on the occasions I do visit, I express my affection in strange ways. I wait for her to fall asleep and peer over her body and imagine what it’d be like if she died. I just stand there, hot silent tears coursing down my face. We’re not a demonstrative family, and such maudlin, psycho behavior is fair grounds for riotous derision. I love my mom and it’s a secret. I love her so much it kills me, and you bet I’d sooner die than tell her. I kinda want her to know though. Maybe someone could tell her for me. Someone who isn’t my dad. Because that would be weird. | Mary H K Choi | https://aeon.co//essays/i-love-my-mom-a-not-normal-amount-and-it-makes-me-crazy | |
Astronomy | Dark matter is the commonest, most elusive stuff there is. Can we grasp this great unsolved problem in physics? | I’m sitting at my desk at the University of Washington trying to conserve energy. It isn’t me who’s losing it; it’s my computer simulations. Actually, colleagues down the hall might say I was losing it as well. When I tell people I’m working on speculative theories about dark matter, they start to speculate about me. I don’t think everyone who works in the building even believes in it. In presentations, I point out how many cosmological puzzles it helps to solve. Occam’s Razor is my silver bullet: the fact that just one posit can explain so much. Then I talk about the things that standard dark matter doesn’t fix. There don’t seem to be enough satellite galaxies around our Milky Way. The inner shapes of small galaxies are inconsistent. I invoke Occam’s Razor again and argue that you can resolve these issues by adding a weak self-interaction to standard dark matter, a feeble scattering pattern when its particles collide. Then someone will ask me if I really believe in all this stuff. Tough question. The world we see is an illusion, albeit a highly persistent one. We have gradually got used to the idea that nature’s true reality is one of uncertain quantum fields; that what we see is not necessarily what is. Dark matter is a profound extension of this concept. It appears that the majority of matter in the universe has been hidden from us. That puts physicists and the general public alike in an uneasy place. Physicists worry that they can’t point to an unequivocal confirmed prediction or a positive detection of the stuff itself. The wider audience finds it hard to accept something that is necessarily so shadowy and elusive. The situation, in fact, bears an ominous resemblance to the aether controversy of more than a century ago. In the late-1800s, scientists were puzzled at how electromagnetic waves (for instance, light) could pass through vacuums. Just as the most familiar sort of waves are constrained to water — it’s the water that does the waving — it seemed obvious that there had to be some medium in which electromagnetic waves were ripples. Hence the notion of ‘aether’, an imperceptible field that was thought to permeate all of space. The American scientists Albert Michelson and Edward Morley carried out the most famous experiment to probe the existence of aether in 1887. If light needed a medium to propagate, they reasoned, then the Earth ought to be moving through this same medium. They set up an ingenious apparatus to test the idea: a rigid optics table floating on a cushioning vat of liquid mercury such that the table could rotate in any direction. The plan was to compare the wavelengths of light beams travelling in different relative directions, as the apparatus rotated or as the Earth swung around the sun. As our planet travelled along its orbit in an opposite direction to the background aether, light beams should be impeded, compressing their wavelength. Six months later, the direction of the impedance should reverse and the wavelength would expand. But to the surprise of many, the wavelengths were the same no matter what direction the beams travelled in. There was no sign of the expected medium. Aether appeared to be a mistake. This didn’t rule out its existence in every physicist’s opinion. Disagreement about the question rumbled on until at least some of the aether proponents died. Morley himself didn’t believe his own results. 
Only with perfect hindsight is the Michelson-Morley experiment seen as evidence for the absence of aether and, as it turned out, confirmation of Albert Einstein’s more radical theory of relativity. Dark matter, dark energy, dark money, dark markets, dark biomass, dark lexicon, dark genome: scientists seem to add dark to any influential phenomenon that is poorly understood and somehow obscured from direct perception. The darkness, in other words, is metaphorical. At first, however, it was intended quite literally. In the 1930s, the Swiss astronomer Fritz Zwicky observed a cluster of galaxies, all gravitationally bound to each other and orbiting one another much too fast. Only the gravitational pull of a very large, unseen mass seemed capable of explaining why they did not simply spin apart. Zwicky postulated the presence of some kind of ‘dark’ matter in the most casual sense possible: he just thought there was something he couldn’t see. But astronomers have continued to find the signature of unseen mass throughout the cosmos. For example, the stars of galaxies also rotate too fast. In fact, it looks as if dark matter is the commonest form of matter in our universe. It is also the most elusive. It does not interact strongly with itself or with the regular matter found in stars, planets or us. Its presence is inferred purely through its gravitational effects, and gravity, vexingly, is the weakest of the fundamental forces. But gravity is the only significant long-range force, which is why dark matter dominates the universe’s architecture at the largest scales. In the past half-century, we have developed a standard model of cosmology that describes our observed universe quite well. In the beginning, a hot Big Bang caused a rapid expansion of space and sowed the seeds for fluctuations in the density of matter throughout the universe. Over the next 13.7 billion years, those density patterns were scaled up thanks to the relentless force of gravity, ultimately forming the cosmic scaffolding of dark matter whose gravitational pull suspends the luminous galaxies we can see. This standard model of cosmology is supported by a lot of data, including the pervasive radiation field of the universe, the distribution of galaxies in the sky, and colliding clusters of galaxies. These robust observations combine expertise and independent analysis from many fields of astronomy. All are in strong agreement with a cosmological model that includes dark matter. Astrophysicists who try to trifle with the fundamentals of dark matter tend to find themselves cut off from the mainstream. It isn’t that anybody thinks it makes for an especially beautiful theory; it’s just that no other consistent, predictively successful alternative exists. But none of this explains what dark matter actually is. That really is a great, unsolved problem in physics. So the hunt is on. Particle accelerators sift through data, detectors wait patiently underground, and telescopes strain upwards. The current generation of experiments has already placed strong constraints on viable theories. Optimistically, the nature of dark matter could be understood within a few decades. Pessimistically, it might never be understood. We are in an era of discovery. A body of well-confirmed theory governs the assortment of fundamental particles that we have already observed.
The same theory allows the existence of other, hitherto undetected particles. A few decades ago, theorists realised that a so-called Weakly Interacting Massive Particle (WIMP) might exist. This generic particle would have all the right characteristics to be dark matter, and it would be able to hide right under our noses. If dark matter is indeed a WIMP, it would interact so feebly with regular matter that we would have been able to detect it only with the generation of dark matter experiments that are just now coming on stream. The most promising might be the Large Underground Xenon (LUX) experiment in South Dakota, the biggest dark matter detector in the world. The facility opened in a former gold mine this February and is receptive to the most elusive of subatomic particles. And yet, despite LUX’s exquisite sensitivity, the hunt for dark matter itself has been something of a waiting game. So far, the only particles to turn up in the detector’s trap are bits of cosmic noise: nothing more than a nuisance. The past success of standard paradigms in theoretical physics leads us to hunt for a single generic dark matter particle — the dark matter. Arguably, though, we have little justification for supposing that there is anything to be found at all; as the English physicist John D Barrow said in 1994: ‘There is no reason that the universe should be designed for our convenience.’ With that caveat in mind, it appears the possibilities are as follows. Either dark matter exists or it doesn’t. If it exists, then either we can detect it or we can’t. If it doesn’t exist, either we can show that it doesn’t exist or we can’t. The observations that led astronomers to posit dark matter in the first place seem too robust to dismiss, so the most common argument for non-existence is to say there must be something wrong with our understanding of gravity – that it must not behave as Einstein predicted. That would be a drastic change in our understanding of physics, so not many people want to go there. On the other hand, if dark matter exists and we can’t detect it, that would put us in a very inconvenient position indeed. But we are living through a golden age of cosmology. In the past two decades, we have discovered so much: we have measured variations in the relic radiation of the Big Bang, learnt that the universe’s expansion is accelerating, glimpsed black holes and spotted the brightest explosions ever in the universe. In the next decades, we are likely to observe the first stars in the universe, map nearly the entire distribution of matter, and hear the cataclysmic merging of black holes through gravitational waves. Even among these riches, dark matter offers a uniquely inviting prospect, sitting at a confluence of new observations, theory, technology and (we hope) new funding. The various proposals to get its measure tend to fall into one of three categories: artificial creation (in a particle accelerator), indirect detection, and direct detection. The last, in which researchers attempt to catch WIMPs in the wild, is where the excitement is. The underground LUX detector is one of the first in a new generation of ultra-sensitive experiments. It counts on the WIMP interacting with the nucleus of a regular atom.
These experiments generally consist of a very pure detector target, such as pristine elemental Germanium or Xenon, cooled to extremely low temperatures and shielded from outside particles. The problem is that stray particles tend to sneak in anyway. Interloper interactions are carefully monitored. Noise reduction, shielding and careful statistics are the only way to distinguish real dark-matter interaction events from false alarms. Theorists have considered a lot of possibilities for how the real thing might work with the standard WIMP. Actually, the first generation of experiments has already ruled out the so-called Z-boson scattering interaction. What is left is Higgs boson-mediated scattering, which would involve the same particle that the Large Hadron Collider discovered in Geneva in July last year. That implies a very weak interaction, but it would be perfectly matched to the current sensitivity threshold of the new generation of experiments. Then again, science is less about saying what is than what is not, and non-detections have placed relatively interesting constraints on dark matter. They have also, in a development that is strikingly reminiscent of the aether controversy, thrown out some anomalies that need to be cleared up. Using a different detector target to LUX, the Italian DAMA (short for ‘DArk MAtter’) experiment claims to have found an annual modulation of their dark matter signal. Detractors dispute whether they really have any signal at all. Just like with the aether, we expected to see this kind of yearly variation, as the Earth orbits the Sun, sometimes moving with the larger galactic rotation and sometimes against it. The DAMA collaboration measured such an annual modulation. Other competing projects (XENON, CDMS, Edelweiss and ZEPLIN, for example) didn’t, but these experiments cannot be compared directly, so we should probably reserve judgment. Nature can be cruel. Physicists could take non-detection as a hint to give up, but there is always the teasing possibility that we just need a better experiment. Or perhaps dark matter will reveal itself to be almost as complex as regular matter. Previous experiments imposed quite strict limitations on just how much complexity we can expect — there’s no prospect of dark-matter people, or even dark-matter chemistry, really — but it could still come in multiple varieties. We might find a kind of particle that explains only a fraction of the expected total mass of dark matter. In a sense, this has already occurred. Neutrinos are elusive but widespread (60 billion of them pass through an area the size of your pinky every second). They hardly ever interact with regular matter, and until 1998 we thought they were entirely massless. In fact, neutrinos make up a tiny fraction of the mass budget of the universe, and they do act like an odd kind of dark matter. They aren’t ‘the’ dark matter, but perhaps there is no single type of dark matter to find. To say that we are in an era of discovery is really just to say that we are in an era of intense interest. Physicists say we would have achieved something if we determined that dark matter is not a WIMP. Would that not be a discovery? At the same time, the field is burgeoning with ideas and rival theories. Some are exploring the idea that dark matter has interactions, but we will never be privy to them. In this scenario, dark matter would have an interaction at the smallest of scales which would leave standard cosmology unchanged.
It might even have an exotic universe of its own: a dark sector. This possibility is at once terrifying and entrancing to physicists. We could posit an intricate dark matter realm that will always escape our scrutiny, save for its interaction with our own world through gravity. The dark sector would be akin to a parallel universe. It is rather easy to tinker with the basic idea of dark matter when you make all of your modifications very feeble. And so this is what all dark matter theorists are doing. I have run with the idea that dark matter might have self-interactions and worked that into supercomputer simulations of galaxies. On the largest scales, where cosmology has made firm predictions, this modification does nothing, but on small scales, where the theory of dark matter shows signs of faltering, it helps with several issues. The simulations are pretty to look at and they make acceptable predictions. There are too many free parameters, though — what scientists call fine-tuning — such that the results can seem tailored to fit the observations. That’s why I reserve judgement, and you would be well advised to do the same. We will probably never know for certain whether dark matter has self-interactions. At best, we might put an upper limit on how strong such interactions could be. So, when people ask me if I think self-interacting dark matter is the correct theory, I say no. I am constraining what is possible, not asserting what is. But this is kind of disappointing, isn’t it? Surely cosmology should hold some deep truth that we can hope to grasp. One day, perhaps, LUX or one of its competitors might discover just what they are looking for. Or maybe on some unassuming supercomputer, I will uncover a hidden truth about dark matter. Regardless, such a discovery will feel removed from us, mediated as it will be through several layers of ghosts in machines. The dark matter universe is part of our universe, but it will never feel like our universe. Nature plays an epistemological trick on us all. The things we observe each have one kind of existence, but the things we cannot observe could have limitless kinds of existence. A good theory should be just complex enough. Dark matter is the simplest solution to a complicated problem, not a complicated solution to a simple problem. Yet there is no guarantee that it will ever be illuminated. And whether or not astrophysicists find it in a conceptual sense, we will never grasp it in our hands. It will remain out of touch. To live in a universe that is largely inaccessible is to live in a realm of endless possibilities, for better or worse. | Alexander B Fry | https://aeon.co//essays/will-cosmologists-ever-illuminate-us-about-dark-matter | |
Travel | We were a group of secular Buddhists visiting the cave temples of India. What would we learn on the journey? | A rooster crowed at 4am in downtown Mumbai. Where could it be coming from in this network of tightly woven buildings and streets and shops and people and cars? In the morning we saw the proud specimen patrolling in front of our hotel. This was the start of our journey where ancient and modern, nature and traffic, would meet again and again. We were a group of 30 questioning, secular practitioners on a different kind of Buddhist pilgrimage, to a series of ancient rock-cut temples and other early Buddhist sites. We wished to re-inspirit these places, which were now mainly archaeological sites: we would walk in the footsteps of those who had lived and practised the way of the Buddha from the first century BC to the 10th century AD. We planned to meditate, listen to talks, and discuss the Buddha’s teaching each day. Towards the end, we would visit two famous caves, Ajanta and Ellora, but all the other cave temples we planned to see were relatively unknown and rarely visited by Western Buddhists. We began in Maharashtra, in western India: the home both of ancient Buddhist caves and also a very modern, distinctive form of Buddhism. As I tried to find a path among the throngs of people in the streets of Mumbai, I was struck by all things ‘ambedkarite’ — posters, T-shirts, scarves, amulets — and couldn’t resist buying a plastic kaleidoscopic portrait of Ambedkar himself. The Indian jurist and political reformer Dr B R Ambedkar was born in 1891 to an ‘untouchable’ or Dalit family who served the British army. He was the first Dalit to graduate from university and, as the first law minister of independent India, one of the main architects of the Indian constitution. Two months before he died in 1956, he converted to Buddhism because he believed it was the way for Dalits to escape the Hindu caste system. He urged mass conversions and himself held a conversion of 400,000 people in the state capital of Nagpur. Throughout our journey, we would meet families of Buddhist converts, former ‘untouchables’, and the first thing they would mention was Ambedkar, their hero, who helped them improve their condition and develop their confidence. The spirit of early Buddhism was alive here in more ways than one. After the Buddha’s death c.404–484 BC, Buddhism was mainly situated in northern India. By 232 BC and the death of King Ashoka — one of the religion’s foremost supporters — Buddhism had spread all over India. Over the centuries, however, Buddhism declined because of its strong emphasis on monasticism on the one hand and the strengthening of Hinduism and the caste system on the other. Repeated invasion by central Asian Muslim nomads from the 11th century onwards was the final straw. Buddhism more or less disappeared in India or was amalgamated with Hinduism. Many wooden and brick monasteries were destroyed but the rock-cut temples of Maharashtra endured, albeit emptied out. Our first visit was to the 109 caves of Kanheri in the deeply forested Sanjay Gandhi National Park less than an hour from Mumbai. The first thing we were told was to keep our packed lunches safe from marauding monkeys. We even had to be protected from them by a guard as we meditated in a 2,000-year-old hall. Kanheri became dear to my heart, a haven of caves with few icons, its simplicity deeply evocative of early monastic life. 
There were many small carved stone huts: monastic cells with one or two stone beds and light coming from small windows punched into the walls. Outside each cell was a small cistern for water, which was fed by an intricate system of underground run-off. An elegant stone bench to the side completed the set-up — for the view, as a place to rest in the heat, for interviews, or for all three: who knows? It was easy to imagine myself as one of the monks or nuns meditating, chanting or studying, or as a layperson visiting them for inspiration and encouragement. Here one did not feel a convert to a strange exotic religion but a practitioner of a path, a culture of awakening. Kanheri was one of the oldest sites we would visit. As we went on to many different types of caves — large halls, small dwellings, grand chapels — I was struck by the difference in style between the early and the late carvings. The simple geometrical shapes of the first-century BC stupas evoked modern sculptures, while the later elaborate statues of the Buddha and different Bodhisattvas were a reminder that we were in prolific, abundant India, where Buddhism and Hinduism had overlapped for hundreds of years and influenced one another. Many of these later temples were in fact built and decorated under the patronage of local Hindu rulers. Some ancient Buddhist cave temples had been adopted by villagers for their Hindu worship. The Karla cave, two hours from Mumbai, was now associated with the goddess Ekveera, who is worshipped by fishermen. While climbing up from the Indrayani valley, one passes a gauntlet of shops selling flower garlands, coconuts and other such offerings to her. The goddess herself resides in her own small temple outside the main cave of the complex, which is fortunately still intact. The Karla caves were carved between 60 BC and the fourth century AD. They were created as the offerings of princes, merchants, monks, nuns, and lay devotees. Some of the pillars contained the ashes of the patrons, held in a special hole carved in the middle. The main cave was the chaitya — a prayer hall containing the most intriguing stupa we would see on this pilgrimage. The hall itself was the biggest rock chaitya in India — 45 metres long, 14 metres wide, and 14 metres high, all carved out of the hillside. It seemed to us that the hall had two main functions, as a gathering place for chanting together but also as a place of circumambulation. All of us spent time walking meditatively in silence either around the stupa itself or behind the pillars. Stupas originated as funerary mounds, and the earliest Buddhist stupas were built to contain relics of the Buddha himself. But over time they were elaborated far beyond those simple forms, into the wats of south-east Asia, the pagodas of Japan and China, and the chortens of Tibet: beautiful and complex architectural forms with many layers of symbolic meaning. In Karla we were brought back to the beginnings of the tradition: the stupa here was formed of two drums surmounted by a dome, on top of which was a cube, on top of which was a seven-stepped inverted pyramid which was pierced by a wooden shaft from above and covered by a wooden canopy. Apart from the wooden pole and top, it had all been carved out of the rock where it stood. These early stupas were completely unadorned, pre-iconic.
To see such spare beauty and exquisite geometric forms, representing the Buddha’s practice and attainment, was incredibly moving. We were reminded that ‘less is more’. One of our promised treats was the Bedsa Caves, which were built between the first century BC and the second century AD. We climbed a hundred steps in silence to be greeted by a jewel of a place with an amazing panorama of the surrounding countryside and comfortable living quarters where we could sit and spread out. We had come for a day of silence and practice. The theme of our meditation was listening, listening mindfully without commenting, neither grasping nor rejecting, observing the arising and passing away of various sounds. This creative listening meditation was our mainstay as we travelled and communicated with each other over two weeks. We did not have much time to spend in each site as we had to travel back and forth. But that hour of daily practice and reflections on dharma (or cosmic law) had a profound effect on all of us. Being in situ, connecting to previous practitioners within the simplicity and beauty of these sites, had an organic impact on us, and also on the careful, attentive way in which we related to each other. Retreats are generally held in silence with a lot of sitting down in meditation: here we were on the move constantly, exchanging, packing, exploring: a travelling retreat. So sitting in meditation for half an hour in the caves was deeply grounding and restorative. I became undistracted, and peace came very quickly: I felt grounded and expansive, even amid the tumult and bustle of India. My husband Stephen, also a Buddhist teacher, reminded us of mindfulness in daily life, with the reading: A monk is one who acts in full awareness when going forward and returning; who acts in full awareness when flexing and extending his limbs; who acts in full awareness when eating, drinking, consuming and tasting; who acts in full awareness when defecating and urinating… As we travelled, one of our challenges in such a crowded country was to find a secluded place to pee mindfully. While we drove through desert terrain, our mantra became ‘pee while you can’, and our coolest open-air lavatory was a banana plantation. The stupa hall in Bedsa was so small and exquisite that we could not resist doing some chanting. But, being secular Buddhists, not many of us knew any chants or, if we did, not the same ones, so we aborted this tentative sign of devotion. But the space suddenly echoed and resonated with the deep two-toned chanting of one of our members and we just stood in the resonating silence. The stupa itself was a mystery, reminiscent of Karla, but what was that at the top? Was it a lotus flower or a bodhi tree? In any case, it was an enigma we did not solve. Not all of the caves had a radical simplicity to them: our next stop was the Pandulena site. Built between the second century BC and the seventh century AD, it is a cornucopia of 24 caves, overlooking the industrial city of Nashik (population: 1.5 million), which was a major trading centre in ancient times. The cave complex had not only been a place of worship, but a lodging house for traders, and a sacred place for worshippers from other traditions such as the Jains. Kings, princes and wealthy merchants vied with each other for posterity, leaving a wealth of carvings and caves as their legacy. There was a profusion of statues, and we encountered our first lying Buddha, representing his death or nirvana. 
There was lovers’ graffiti on some carvings — the modern world catching up – and smiling cats in one small Jain chapel. Our next stop was the most splendid of rock-cut temples: the masterpieces of Ellora and Ajanta. We were apprehensive, knowing that the caves would be full of tourists, most of whom would not share our desire for travelling meditation and study. We focussed again on our listening meditation, letting the surrounding sounds pass through us without grasping at them. We found a large cave, somewhat out of the way, where we were surrounded by large Buddhas set among pillars, and I felt a sense of awe to be practising in such a place. One of the astonishing qualities of Ellora is the way in which the artisans and artists used the different lava flows to help with the carving. There were many volcanic eruptions over the millennia, and many different types of lava flow, with varied composition. Fine-grained rock was preferred for sculpting, and builders used the verticality and horizontality of the different strands of lava to make hewing easier, as well as taking advantage of the fact that basalt is softer when first excavated and hardens when exposed to the elements. We were all secular Buddhists, which means that we practise in a way that is relevant to this time and world. We do not see ancient Buddhist ideas and practices as sacred and unquestionable. At the same time, we could be deeply touched by what was sublime in these cave temples — the majesty, the artistry, the dedication. I do not believe in the sacredness of traditional Buddhist lineages, which are created to assert the authenticity of different Buddhist traditions. But here in these temples, be they from the second century BC or the eighth century AD, I felt part of a faith and practice lineage that transcended any specific school of thought or personal allegiance. At Pitalkhora, where we had the prospect of spending a day in quiet meditation, we were touched by the neglect, as much as by the splendour. We knew that this small ensemble of caves was said to be in a fairly bad state of repair but it was also deep in a small valley with a brook, which made it an enticing proposition for a day among nature, water bubbling, greenery and silence. And so we climbed down into the valley but none of the above came to pass — there was a drought. Here was impermanence in action. Moreover, to attract tourists, some official had decided to have the caves rebuilt, so masons were in the process of cutting stones with a power saw. That day the villagers were also using the path running through the caves to go to a religious festival further down the valley. We meditated, studied and walked in silence, trying to welcome the cacophony as much as we would have welcomed the solitude and peace we’d hoped to experience. In the midst of the destruction and rebuilding we saw a rare sight, some beautiful paintings. Because the front of the stupa hall had vanished due to the poor quality of the rock, the paintings were exposed to the light, enabling us to admire them and take pictures. There were some intriguing portraits of female assistants with intricate hairdos more like ancient Roman coiffures than the classic Indian styles.
Our final goal was to visit the great stupa of Sanchi, a 10- to 13-hour journey from Bhopal, a popular holiday spot now better known for the gas leak disaster at the Union Carbide pesticide plant, which killed thousands of people in December 1984. The magnitude of the disaster was partly due to compromised safety standards at the plant, and officials who had realised what would happen that night and fled the city to save themselves without telling anyone else. We paid our respects at the abandoned plant where children now played cricket, and I found myself wondering about humanity’s strengths and weaknesses. It was hard not to fall in love with Sanchi, especially in the evening light, which makes all the sculptures and carvings appear golden. We meditated under a large tree overlooking the main stupa, built by King Ashoka to enshrine some of the relics of the Buddha. It served as the centrepoint for the construction of a large complex of temples, stupas, halls and monastic institutions, which were built around it between the third century BC and the 12th century AD. The main stupa itself was enlarged and embellished with decorated balustrades, staircases and gateways. It is too complex to describe; it must be experienced. It was lavishly carved and I was astonished by its pristine condition. For someone who loves designs, it offered a cornucopia of leaves, flowers, animals and many other motifs: it would take a week to see all the details. We only had a few hours and it was an enchantment. On the long journey to Bhopal, feeling apprehensive about the travel, I had reminded myself of a quote from the Roman poet Lucretius from On the Nature of Things:

Behold the pure blue of heavens, and all that they possess,
The roving stars, the moon, the sun’s light, brilliant and sublime –
Imagine if these were shown to men now for the first time,
Suddenly and with no warning. What could be declared
More wondrous than these miracles no one before had dared
Believe could even exist? Nothing. Nothing could be quite
As remarkable as this, so wonderful would be the sight.
Now, however, people hardly bother to lift their eyes
To the glittering heavens, they are so accustomed to the skies.

Here, in Sanchi, I realised that the essence of a pilgrimage was to experience in one’s flesh and with one’s own immediate senses the traces of the ancients, to be humbled and uplifted to be in close contact with such beauty. To be a pilgrim is to become unaccustomed to the skies, seeing them in a new light, with a newly open heart. | Martine Batchelor | https://aeon.co//essays/my-secular-pilgrimage-to-indias-ancient-buddhist-temples | |
Beauty and aesthetics | The Gothic is more than vampires and flying buttresses, burgundy lips and black lace: it is the thrill of transgression | As I walked the green miles of the Undercliff where the French Lieutenant’s Woman met her lover, there came a change of air. The dense undergrowth was obscenely verdant — bees worrying at pink rhododendron, peacock butterflies crossing my path — and now and then I’d burst out and find I stood at the cliff’s edge overlooking the sands of Golden Cap. It was impossible to imagine any other human setting foot where I’d set mine. When the path sank into a darker place and I found myself among the ruins of a great house, I shivered as if I’d grown cold. A high, pale-stoned wall with windows pointed at the upper edge put a black shadow at my feet, and fragments of its foundations were scattered about like broken teeth. A little further on I could see the wet black lip of a well. There was a thick silence. All that day I’d seen seabirds wheeling overhead. One or two chaffinches with their peach breasts blinked at me from the hedgerow. In the ruins, nothing but a magpie pecked aimlessly at the dust. Brambles put out creepers that caught my ankle as I passed, and scraps of cloud passing the empty windows had the appearance of blind faces mouthing at me. What had been a day of brightness and beauty altered all at once; I felt inexplicably anxious, as if all those broken stones were conveying to me the memory of something dreadful they’d once witnessed. I stood there a while. What I felt was not quite fear, but a disquieting thrill. Then I moved on. The path grew brighter. I forgot my unease. Years later, I sat among the ruins of the Roman Forum, in the shadows of the Temple of Castor and Pollux. I inclined my head, convinced that, if I only tried hard enough, I’d hear the bare feet of vestal virgins padding through the marble halls. I sat because I could not stand: my mind could not comprehend the sublime magnitude of what I saw, and my body had given up in sympathy. It was to be a long time before I understood that what I’d encountered there in the Undercliff in Dorset, and again in the shadow of a Roman cypress, was rooted in a tradition as pervasive as incense smoke, and every bit as hard to grasp. I’d lay good money that you’ve used the word ‘Gothic’ in the past month or so. A swift survey of popular culture reveals that the Twilight novels of Stephenie Meyer are Gothic, as is the style (both in song and dress) of the indie rock band Florence and the Machine. UK Vogue magazine reported on 2012’s catwalk trends as ‘high on Gothic glamour’, and there were corresponding articles in style magazines on how to perfect a Gothic maquillage of pale cheeks, smoky eyes and blackberry lips. Vasari the tastemaker purses his lips and makes a terse little note on its failings: it is an example of that most deplorable of styles Tim Burton’s films, as we all know, are Gothic; so are the red-carpet frocks of his wife, the actress Helena Bonham Carter. The British novelist Cathi Unsworth’s crime-noir Weirdo (2012) is billed as ‘a retro-Gothic thriller’. Pallid teenagers in riveted leather coats are Gothic. Sarah Waters’s second novel, Affinity (1999), is neo-Victorian-Gothic. The new album from Nick Cave and the Bad Seeds, Push the Sky Away (2013), is, they say, ‘Goth-blues’. A new paperback edition of Thomas Hardy’s Tess of the D’Urbervilles (1891) shows on its cover a bitten scarlet strawberry against a black background. This, apparently, is also Gothic. 
What draws together a wine-dark lipstick stain, a glittering boy-vampire, a tale of murder in a Norfolk seaside town? What do I mean when I say ‘Gothic’? For that matter, what do you mean? We can use Google’s Ngram service to track the word’s published popularity from 1500 to 2000, plotting it against other expressions in use over the same period. With ‘Gothic’ in the ascendancy, so too were ‘villains’, ‘ruins’, ‘madness’ and ‘terror’; together, these surged in use from 1750 and peaked towards the end of that century. After 1850, more nebulous qualities began to creep up on the villain in his castle: words such as ‘eerie’ and ‘uncanny’ grew in currency. Intriguingly, at this point the Gothic begins a decline — and then, as if required for new purposes, it rallies again. The Ngram exercise is diverting, but ultimately touches only at the periphery: it is like trying to define a planet by its moons. To get at the heart of the matter, it is necessary to cross half a millennium and stand beside the Italian art historian Giorgio Vasari in the shadow of Reims cathedral in 1535. Exquisite in his velvet coat, Vasari appraises the high ribbed vault and the flying buttresses, the spitting gargoyles and the glittering rose window, the saint-pierced façade and the towers that would have pleased Babel’s townsfolk. In its scope and beauty, it is surely the epitome of mankind’s achievement, bought at incalculable fiscal and human cost. But Vasari the tastemaker, the ultimate Renaissance man, purses his lips and makes a terse little note on its failings: it is an example of that most deplorable of styles — the Gothic. To Vasari, who prized Classical principles in architecture as in all other modes of art and thought, the cathedral — from its deep labyrinth to its almost incomprehensible height — was a piece of barbarous savagery. It struck the visitor dumb with awe, provoking not clarity of reason or a Platonic appreciation of beauty, but a kind of insensate thrill. In raising the spectre of the Goth, Vasari was recalling — with all the terror of inherited folk memory — the Sack of Rome in 410, when Germanic tribes dismantled the city that was the centre of the world. The ecclesiastical architecture of the Middle Ages represented as much a threat to Renaissance ideals as the desecrating Visigoths did to that ailing Empire more than a thousand years before. So much for origins, buried beneath the stones of the Forum and carved in the lintels at Reims. Does Vasari’s criticism have anything to do with villainous monks with erotic habits, black lace dresses in the pages of Vogue, or talc-faced teenagers sporting their gilt crosses down by Whitby Bay, the home of Dracula? To understand the potency of the Gothic, it is useful to think of the word as having ‘gone viral’, in the truest sense. Like a virus, it shifts and mutates across the centuries, adapting readily to context, exploiting the weaknesses of its host. Its symptoms are elusive, and often seem contradictory. Vasari’s notion of the Gothic assumed that those desecrating Germanic tribes were culturally coarse, with none of the Classical refinements in philosophy, society or art. Move on to the Enlightenment, and the symptoms of the Gothic have subtly altered. To liberal Whig ideology, the Goth was not a symbol of barbarism and destruction, but of a libertarian and democratic political lineage reaching back from the Glorious Revolution of 1688 through the Saxon Witenagemot to the Sack of Rome. 
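The Ngram comparison sketched earlier in this essay can be reproduced programmatically. What follows is a minimal sketch in Python, assuming the unofficial JSON endpoint that the public Ngram Viewer page itself queries; the URL, parameter names and corpus label are assumptions rather than a documented API, and they may change or disappear without notice.

```python
# A minimal sketch of the Ngram comparison described above.
# Assumption: the undocumented JSON endpoint used by the Ngram Viewer web page;
# Google neither documents nor guarantees this interface.
import requests

NGRAM_URL = "https://books.google.com/ngrams/json"


def ngram_series(phrases, year_start=1500, year_end=2000, corpus="en-2019"):
    """Return {phrase: [yearly relative frequency, ...]} from Google Books."""
    response = requests.get(
        NGRAM_URL,
        params={
            "content": ",".join(phrases),
            "year_start": year_start,
            "year_end": year_end,
            "corpus": corpus,
            "smoothing": 3,
        },
        timeout=30,
    )
    response.raise_for_status()
    return {item["ngram"]: item["timeseries"] for item in response.json()}


if __name__ == "__main__":
    start = 1500
    series = ngram_series(["Gothic", "villain", "ruins", "eerie", "uncanny"], year_start=start)
    for phrase, frequencies in series.items():
        if frequencies:
            peak_year = start + frequencies.index(max(frequencies))
            print(f"{phrase!r} peaks around {peak_year}")
```

Plotting the returned series against one another gives the kind of side-by-side comparison of ‘Gothic’ and its companion words described here.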
Say ‘Goth’ to a Whig and his heart would quicken, not shrink; it’s no insult, but a rallying cry. Edmund Burke, contemplating the distinction between the sublime and the beautiful, could have illuminated Vasari’s moment of repulsed awe. His Philosophical Enquiry of 1757 might not have dealt expressly with the Gothic, but by linking aesthetics to metaphysics it had a profound effect on the development of the literary Gothic. To Burke, the beautiful and the sublime were mutually exclusive. The appreciation of a beautiful object inspires ‘sentiments of tenderness and affection’; it is a reasoned aesthetic response, characterised by ‘joy and pleasure’. Conversely, to encounter the sublime is to lose all reason, and all joy. It is instead to experience ‘astonishment — that state of the soul in which the mind is so entirely filled with its object that it cannot entertain any other’. An aesthetic response is effectively impossible, since the sublime is a source of such obliterating light or darkness that the object itself is removed, and we are moved instead to awed astonishment. Burke alludes to the obscurity and gloom of heathen temples, affirming that link between a dark and massy built environment and an encounter with the Gothic. Had Vasari only encountered Burke on the cathedral steps, or if I had met him as I walked dumbfounded through the Forum ruins, he might have offered us both a shrewd diagnosis. And so to 1762 and the study of Richard Hurd — priest, scholar and fellow of Emmanuel College, Cambridge. He will live long, achieve a bishopric, and serve as tutor to the Prince of Wales; but at present he is preoccupied with a vanished age of courtly love and high romantic ideals. His Letters on Chivalry and Romance examines medieval chivalry through the rosy lens of epic poetry, and begins with a startling reversal in the meaning of the Gothic: ‘The ages, we call barbarous, present us with many a subject for curious speculation. What, for instance, is more remarkable than the Gothic CHIVALRY? or than the spirit of ROMANCE, which took its rise from that singular institution?’ Hurd saw the Gothic neither as threatening barbarism nor as a liberal political ideal, but as a vanished (and almost wholly fictitious) epoch of ladies dispensing silk favours to their knights-at-arms. With this one meditation he laid a foundation stone of the Romantic age. But if ever there was a man to succumb to this most seductive of ailments — a chimera of the barbarous, the sublime, the chivalrous, and the liberal — it was Horace Walpole. He was an 18th-century Whig politician, art historian and belle-lettrist, troubled in matters of faith and of the heart. With Strawberry Hill in Twickenham, his sugar-coated replica of the ‘gloomth’ to be found in the cloisters of a medieval cathedral, he prefigured the Gothic Revival in architecture. Unhappily homosexual, deeply romantic and profoundly committed to liberal ideals, when he wrote The Castle of Otranto (1764) and gave it the subtitle ‘A Gothic Story’, he was claiming the Gothic inheritance in all its guises for the world of literature. Walpole’s novel bears no resemblance to the edifying realism of its contemporaries: it is all villainy, threat, flight, obscurity, and madness. It evokes the sublime and the fantastic. Its cod-Medieval setting was something Hurd would have recognised; it is gleefully suggestive of erotic transgression. 
It is this quality of drawing on our own secret urges that makes the Gothic so irresistible Naturally enough, critics objected to its preposterous narrative and marvellous events. Walpole might have failed by most familiar criteria, but he achieved something altogether more troubling. The defining feature of the literary Gothic is that it exploits the reader’s own devices and desires: it cannot function unless the reader is affected by events as profoundly as any of the characters. Those first readers of Otranto were unlikely to be suffering Count Manfred’s incestuous desires, or cowering like Isabella in crypt and cave. Nevertheless, like Walpole — and like us all — they’d have brought their own secret anxieties and longings to the text. That first encounter with Gothic fiction tested the boundaries of civilised society, hinting at dark places where vices might be explored. Walpole published his novel at the passing of the Enlightenment age. There can be no doubt that the seductiveness of the sublime — whether encountered in nature, as Burke largely supposed, or within the pages of a Gothic novel such as Walpole’s — thrives most in the aftermath of periods of clarity and reason. The Enlightenment did not dispense with faith but illuminated it, affirming that since a rational God had ordered the universe, order must be found in its workings. The terror of sublimity was a recourse for those who still hungered after strangeness, and after numinous forces that could not be accounted for by the stern light of reason. In the wake of Otranto, the Gothic spread with remarkable speed, always bringing with it a delightful terror and a roguish testing of accepted behaviour. In The Monk (1796), Matthew Lewis simultaneously revolts and seduces. In Melmoth the Wanderer (1820), the impoverished Irish cleric Charles Maturin invents Gothic horror and satirises the established church. Ann Radcliffe’s tales are sublimely affecting but always shy away from outright supernaturalism. Bram Stoker’s Dracula (1897) plays on post-Darwinian anxieties: where does the man end, and the animal begin? Sigmund Freud’s essay ‘The Uncanny’ (1919) sits alongside Burke’s Philosophical Enquiry as an essential clue to the meaning of the Gothic. It has never been an aesthetic so much as a state of mind. Freud conceived of the uncanny – an approximation of the untranslatable German unheimlich – as a state of indescribable unease and terror, drawn not from encounters with ghouls and beasts but from something horribly familiar. To be heimlich is to be homely, of the domestic sphere, but also concealed, secret, hidden. What is unheimlich is therefore simultaneously unhomely and revelatory: it spells an unwelcome encounter with our primitive desires and social taboos, an encounter all the more horrible because it is with ourselves. It is this quality of drawing on our own secret urges that makes the Gothic so irresistible, and accounts for its limitless adaptability. It is defined not by an adherence to a series of defining features, but by our response to it — I am deliciously uneasy, repulsively thrilled, sublimely afraid. 
It gives licence to sensations that we feel but cannot admit, and at precisely the same time cloaks those sensations in such strangeness that it is possible to say: ‘It is only an absurd tale of vampires and shadows; it has nothing whatever to do with me.’ The Gothic provides a hiding-place and a place of consecration for those seeking what lies beyond the boundary of society and reason, a sublime contagion to which we are never quite immune. | Sarah Perry | https://aeon.co//essays/gothic-the-ancient-roots-of-a-dark-thrill | |
Politics and government | Colombia’s FARC guerrillas still resist the coming peace. Is it drug money or the romance of revolution that’s to blame? | In October 2012, the Colombian government sat down to peace talks with the Revolutionary Armed Forces of Colombia (FARC) — among the last of Latin America’s left-wing guerrilla armies. Hopes were high that an end to a conflict that has raged for close to 50 years might be at hand. Colombians, weary of decades of violence, are growing impatient for results. Following the end of the Cuban Revolution in 1959, every country in South America saw at least one rebel army spring up within its borders. But when the USSR collapsed in 1991, Soviet funding for the world’s revolutionary socialists dried up, and most of Latin America’s guerrilla armies had to return to civilian life. Not the Colombians. Until recently, the FARC were the biggest guerrilla army in the world, and controlled around a third of Colombia’s land area. As recently as 2001 they came close to overrunning the capital, Bogotá. So why have the FARC lasted as long as they have? To most outsiders, Colombia is the epitome of festering, futile conflict. Until recently, Colombia had more internally displaced civilians, printed more fake dollars, exported more prostitutes, armed more children, had more landmine victims, and processed more cocaine than any other country in the Americas (and in some of those categories, in the world). While it might be hard to spot the idealism in all this, after 13 years of musing on Colombia, I’m convinced that the inability to square rhetoric with reality is at the root of the longest running internal conflict in the world. Rhetoric — the art of using speech to persuade, influence or please — is highly prized in Colombia. It is a country where university students sell their poems on buses, a country that has more singer-songwriters than any other I have ever been to. Its newspapers are remarkable for their scarcity of reporters — most of whom would probably get shot if they did their jobs properly — and their preponderance of commentators, pundits and experts. Such is the respect paid to rhetoricians in Colombia that anyone with a university education and an opinion is deemed worthy of the title ‘doctor’. This love of rhetoric means that Colombia is a country with a plethora of academics, spokesmen and politicians, all trying to outdo one another with their high-mindedness. The bookshops of Bogotá stock hundreds of books analysing the armed conflict. Although these books are studiously ignored by all the parties to the fighting, they lend an air of authority to the country’s hidebound education system. It’s hard to believe that the FARC started out as modest, peace-loving smallholders who just wanted to be left alone to farm their plots Lawyers, another breed of rhetorician, are also thick on the ground in Colombia, notwithstanding the fact that the legal system is notoriously slow, ineffectual and corrupt. As a result, Colombian prisons are full of people who are still waiting to go to trial. Meanwhile most crimes, however heinous, go unpunished. I have talked to many victims of the conflict since I started travelling to Colombia in 1999. In a country that combines verbosity with indifference to the suffering of others, it should be no surprise that so many of them called for una vida digna, a dignified life. The FARC are past masters at rhetorical posturing. 
Here’s what they posted on their website after the death of one of their senior comandantes in 2010: It is with profound remorse, clenched fists and chests heavy with feeling that we inform the people of Colombia and our brothers in Latin America that Commander Jorge Briceño, our brave, proud hero of a thousand battles, commander since the glorious days of the foundation of the FARC, has fallen at his post, at his men’s side, while fulfilling his revolutionary duties, following a cowardly bombardment akin to the Nazi blitzkrieg. Reading such hyperbole, it’s hard to believe that the FARC started out as modest, peace-loving smallholders who just wanted to be left alone to farm their plots. With origins as the military arm of the Colombian Communist Party, the FARC see themselves as the representatives of Colombia’s poor agricultural labouring classes, the campesinos. Bands of poor, usually landless campesino labourers had been draining swamps and clearing jungle since the days when Colombia was ruled from Madrid. In the early days of the FARC, after the extreme violence of the 1950s and early ‘60s, large numbers of campesinos found themselves on virgin land once again, living in tin shacks miles from anywhere, where there had never been schools or hospitals, much less police stations or army barracks. The guerrillas organised road-building crews, parcelled out land, and provided much needed law enforcement. Anyone hoping that the government and guerrillas will sign a conclusive peace treaty this year would do well to listen to the songs of FARC troubadours such as Julián Conrado. They are testament to the alliance forged by guerrillas and campesinos in those early days. Battered as it is, this alliance endures, at least in some far-flung corners of the countryside, to this day. To some listeners, Conrado’s voice sounds defiant, proud, and resolute. Others, such as the thousands of Colombians who marched against the FARC in the 2008 protests, only hear piety, self-pity and bluster. By the time the latest round of peace talks between the FARC and the Colombian government got underway in 2012, Venezuela, Ecuador and Brazil each had a democratically elected, left-leaning government. When even the late Hugo Chávez, the most radical Latin American president of recent times, tells the FARC that the armed struggle is over, it’s hard to escape the feeling that its brand of revolutionary politics is an anachronism. But if the FARC are indeed ‘walking ghosts’, as the American journalist Steven Dudley calls them, that’s not how they see things. Rather than relics of a discredited tradition of vanguardist radicalism, they regard themselves as akin to the early Christian martyrs, driven into the wilderness by the Romans in Bogotá. Deluded they might well be, but the guerrillas have made a virtue of their isolation — and, thanks to the cocaine trade, they’ve also made a business out of it. The romance of revolution isn’t dead yet. La Macarena is a small jungle settlement, about 170 miles due south of Bogotá. Until the army took control of the town in 2005, it was controlled by the FARC. In the town’s church was a mural, painted over soon after the guerrillas were forced out, that depicted Jesus sitting with his 12 disciples. Or at least, that’s who they looked to be at first glance; on closer inspection, they turned out to be members of the FARC high command. Their expressions — devout and pained — were fitting enough, though the combat fatigues gave them away. 
David Hutchinson, a British banker who spent the best part of a year as a captive of the FARC, told me that he often heard his young guards saying their prayers after lights out. One read: ‘Dear Che in heaven, who is watching over me, how I love you Che, may I follow your example.’ For many campesinos, the FARC’s communism is of a kind with that of Jesus Christ, another friend of the poor who endured all that the rich and powerful could throw at him. The iconography of the revolutionary left in Latin America often verges on a mystical reverence for Che-Jesus, with the splendour and mystery of the continent’s mountains and jungles standing in for paradise. ‘They live together in a camp, and they play practical jokes and go fishing and sing, and they have a very active sex life. And by the age of 25 they’re all dead’ Critics of the FARC say that the growing proportion of teenagers in their ranks is evidence of their desperation, as well as their callous disregard for children’s rights. The FARC counter that their youngest recruits are often orphans, or have been abused. Without the guerrillas, they argue, many would drift into gangs, drug trafficking, or the remnants of the paramilitaries. What neither side likes to admit is that teenagers are drawn to the FARC’s rhetoric because it is based on familiar Christian tenets of deliverance from evil, redemption from a life of sin, and the promise of a return to a state of innocence. Something similar can be seen in Mexico, where the cult of Santa Muerte — Holy Death — draws on the rituals of Catholicism to adorn a religion centred not on the virtuous life, but on the power of fate. Santa Muerte is popular wherever the pull of criminal circles is strong. In Colombia too, the poverty, danger and general precariousness of daily life has only strengthened popular religiosity. Most of the FARC’s young recruits learn how to handle a gun before they learn how to read or write; their religion, like their politics, has to be straightforward, simple and purgative. What Christianity and communism have in common, at least in their Colombian forms, is a glorious, transcendental end. Keep that in mind, and the young recruit can explain away all manner of venal means, which is how one man’s narco-terrorist can be another’s liberator of the people. I asked David Hutchinson why he thought young people joined the FARC. ‘They get given a uniform, cigarettes, food and a rifle,’ he told me. ‘And companionship. It’s all rather boy-scoutish. They live together in a camp, and they play practical jokes and go fishing and sing, and they have a very active sex life. And by the age of 25 they’re usually all dead. It’s pretty pathetic.’ When the first representatives of the Cali cartel came down the river Caquetá in 1988, offering coca seeds to any campesino who would take them, and promising to pay good prices come harvest time, the FARC were intensely wary. Isolated as they were, even the comandantes knew about the harm crack cocaine was doing to Americans. But the campesinos, whose guardians the FARC claim to be, were broke and the cocaine business was lucrative. So the guerrillas, ever pragmatic, came to an accommodation with the new arrivals. They would tax the growers, just like they taxed the smugglers and the legitimate businesses operating in areas under their control. They were confident that their ideological purity would resist any threat of contamination by the drugs business. 
When I was in Colombia in 2011, I spoke to two former guerrillas about the impact the business had had on the FARC. Nicolas had joined up in 1991, just as the rest of the world was celebrating the collapse of communism. He’d spent 13 years in Sumapaz, the huge highland moors to the south west of Bogotá, where he had worked on a land reform programme that the guerrillas had forced upon one of the area’s big landowners. Over time, Nicolas told me, the FARC’s cautious coexistence with the cocaine trade turned into something more like symbiosis. As the Americans ramped up their war on drugs in the mid-’90s, Colombia’s traffickers needed better protection for their laboratories and smuggling corridors. At the same time, the FARC were more convinced than ever that the détente between Colombia’s drugs traffickers, the army and the paramilitaries made a peaceful road to socialism impossible. The guerrillas would have to take power by force of arms. They needed money for weapons; the cocaine business was awash with money. ‘With the drugs came a lot more money,’ Nicolas told me, ‘but that created a lot of false needs. When I first joined up, the FARC was a guerrilla army for the people, but not everyone was in uniform and not everyone was armed. But as the local commanders got richer, they were just thinking about how to get the next piece of sophisticated weaponry. In time, the organisation became completely militarised.’ ‘The comandantes used to be happy to have a watch,’ Nicholas continued. ‘Now they wanted one with 150 memories, and a calendar for I don’t know what. The idea of taking their programme to the people — organising them to take collective action in defence of their interests, or building the roads and schools that they needed — all that went out of the window. Capitalism infiltrated socialism.’ These days, the FARC’s enemies say that they are no more than criminals, competing with former paramilitaries for control of the cocaine business. As well as drug running, the guerrillas are accused of the violent displacement of uncooperative communities, the widespread use of landmines, and the kidnap of thousands of innocent people for ransom. Latin American leftists believed that all they needed to overturn the established order was a mad voluntarism — the idea that with a spirited challenge to the oligarchy, a handful of men in the hills might sweep to power In the process of amassing power, the ‘army of the people’ have lost much of their former popularity. Perhaps the older comandantes, veterans of the long struggle to build a popular political alternative in the Colombian countryside, still have faith in the FARC’s revolutionary project. But most of the rank and file are united only by a desire to escape poverty, or avenge the death of a relative killed by the army. To the extent that they have an ideological commitment, it is to defend an inherited cause, mouthed without conviction by those afraid to challenge their superiors. But even that retains a power that is hard for outsiders to understand. Most commentators blame the longevity of the FARC on the drugs trade, arguing that without the revenue from producing and trafficking cocaine, the guerrillas would have had to sue for peace long ago. Others point out that Colombia is still one of the most unequal countries on the planet. More than half the land (52.2 per cent) is owned by just 1.15 per cent of the population, according to a UN Development Program report from 2011. More than 40 per cent of Colombians live in absolute poverty. 
Some blame the FARC’s existence on the brutality of the Colombian army and its erstwhile paramilitary allies — with or without the cocaine business, they seem to suggest, Colombia is so riddled with iniquity that an armed insurgency is practically inevitable. This is all true, but there is more to it than practicality and economics. In 2012, shortly before his death, the British Marxist historian Eric Hobsbawm reflected on the failure of the revolutionary left to seize power in Latin America. He pointed out that after Fidel Castro had defeated the Cuban army with relative ease, generations of Latin American leftists believed that all they needed to overturn the established order was a mad voluntarism — the idea that with a spirited challenge to the oligarchy, a handful of men in the hills might sweep to power. This was as seductive as it was mistaken. The mad voluntarism of Latin America’s guerrilla armies goes to the heart of the conflict in Colombia. Not that such idealistic wilfulness is confined to leftists: Colombia has long been yanked this way and that by ideologues from the right, as well as the left. Shortly before his death in 1829, Simón Bolívar warned that without an external enemy to unite the country, Colombians were doomed to perpetual civil war. He was proved right: civil wars consumed the passions of rich and poor Colombians alike for the best part of the 19th century. These wars had multiple drivers, but soaring idealism, base intolerance of other people’s ideals, and a readiness to take up arms were recurring themes. To defend the Catholic Church’s monopoly over their children’s education, Conservatives were prepared to slaughter the menfolk of entire villages. In the name of free trade and federalism, Liberals burned down farms and killed policemen by the score. At the University of Bogotá, riots even broke out over the utilitarian, anti-clerical teachings of the philosopher Jeremy Bentham, who Catholics blamed for an earthquake that struck the city in 1826. So it was that in the name of various, little-understood foreigners (God being one of them), a tiny elite of semi-educated landowners mobilised thousands of their impoverished farmhands, draining the Treasury of what little money it had and laying waste to some of the best farmland in the Americas. The death toll of La Violencia, the period between 1948 and 1958 during which the Liberals and the Communists fought the Conservatives (and each other) in a bloody civil war, was around 300,000, mostly peasants. In other Latin American countries riven by sectarian conflicts in the years after independence, one side or other eventually won out. But Colombia’s Liberals and Conservatives were mostly evenly matched in terms of money and manpower. With victory seemingly always at hand, peace was only ever a chance to rest and rearm before the inevitable return to the fray. Violence became a way of life, ingrained into employment practises and land ownership, and sustained by the church, the press and the universities. If the ideals of Liberalism and Conservatism lent dignity to the struggle, the reality of perpetual war stripped it away. Little wonder that Colombia is full of empty symbolism, and young men in the poverty-stricken countryside can still be stirred by FARC recruiters. The national anthem blares out on TV, interspersed with endless advertisements celebrating the valour of the armed forces in their daily struggle with the terrorists. 
Of the thousands of murders recorded in the capital in the past three years, only a third of cases had a defendant brought to trial, let alone prosecuted. Despite such awful police work, the flag outside the national police academy on the Avenida El Dorado must be the size of half a football field. Formally, the constitution and the laws it frames address many of the problems that drive the conflict. But the formal face of Colombia is like a doll’s house — a pristine replica of a much larger building that has long been abandoned to the elements. The truth is that, behind the façade of legalism, Colombia’s laws carry little weight on the ground. The rhetoric of the war on terror taps into a long tradition of highfalutin bluster, but talk is cheap. Building institutions and enforcing laws, on the other hand, is going to be expensive. Both sides in the Colombian conflict carry a great deal of suspicion, bitterness and resentment. As Nelson Mandela once said: resentment is like drinking poison and then hoping it will kill your enemies. If Colombia’s guerrillas are to renounce violence for good, both they and the Colombian government have to drop the rhetoric of war, with its insistence on total domination and enforced forgetting of past crimes. Only then can some dignity be restored to the political life of Colombia. It is not only arms that must be laid down, but the grand and tattered words of revolution. | Tom Feiling | https://aeon.co//essays/the-toxic-romance-of-revolution-keeps-guerilla-warfare-alive | |
Computing and artificial intelligence | What to eat, when to meditate and whether to call your parents: can self-monitoring tools make a difference? | Last Sunday, my mother telephoned. We had a good chat. Afterwards, instead of switching off my iPhone, I opened an app called Lift and, beside the words ‘Call mom/dad,’ I awarded myself a big green tick. I noted this was the ninth time I had spoken to my parents in the last month — higher than in the period immediately before, but not as high as I’d hoped. I might never have started logging calls with my parents if I hadn’t met Kevin Warwick, professor of cybernetics at Reading University. Warwick achieved global fame 15 years ago by inserting a microchip into his arm to open doors. Subsequently, he has carried out many additional experiments, using his own body as a laboratory. Meeting him in 2011 at his university lab, for a story in The Sunday Times, I learned that many of his students have done the same with their own bodies. ‘We have one who put an electric current into his tongue as part of an experiment,’ Warwick told me. ‘The next day when I saw him he couldn’t speak properly. He’d got the voltage wrong. But he’s OK now, because the swelling has gone down.’ It’s in the nature of experiments that some will not work out happily. But ‘self-hacking’ — using data collected about yourself to spot patterns — has a respectable history. The Australian gastroenterologist Barry Marshall believed that stomach ulcers were caused by bacteria rather than stress. To prove it, in 1982, he swallowed a Petri dish of H. pylori and immediately developed severe colitis. In 2005, Marshall was awarded a Nobel prize. Inspired by Warwick and his students to investigate further, I discovered that many thousands of people are carrying out experiments on themselves — a good number of them outside academic institutions — as part of the Quantified Self (QS) movement. The movement was founded in the US in 2007 by Gary Wolf and Kevin Kelly, editors at Wired magazine. Wolf and Kelly felt that the explosion of personal tracking technologies presented a kind of mirror in which we might see ourselves. Today, something like 18,500 QS members belong to more than 100 groups, in 31 countries around the world. They represent perhaps the most technologically advanced adepts of the fashion for self-improvement which has established itself amid declining religious practice and increased social acceptance of talking cures of one kind or another. The spiritual home of the QS movement is a website: QuantifiedSelf.com. One of the leading exponents in the UK is former banker Adriana Lukas, who has been running the London QS group since its inception in July 2010. The group meets for regular ‘show and tell’ sessions. At one, a man named Jon Cousins described how he launched a study of his own mood swings to demonstrate to doctors that he is bipolar. His results led the Institute of Psychiatry in London to put money into further research on a platform to help others. ‘It was the first research investment of its kind to have been instigated by a patient,’ Lukas told me. Until now, she said, the history of medicine has been the history of doctors, whose priestly wisdom has been delivered to clueless, passive recipients as if it was gospel. Cousins’s story reversed that. Psychiatrists had asked him to map his moods for three months, so he developed a simple means to do it online every day and share his mood map with friends. They provided a supportive network, a bit like Weight Watchers. 
His moods immediately started to improve and became more stable: the act of monitoring had itself produced an effect. It felt like I’d moved into a tiny apartment with my inner statistician, who didn’t appear to be very talented, and wasn’t enormously fun to be with In researching the process of change for my book How To Change the World (2012), I had compared the broad, sweeping theories of political scientists such as Gene Sharp, the American advocate of non-violent struggle, with the insights of the self-help industry. In both cases, change begins with observation — noticing what needs to change — followed by a clear declaration of that observation. Listening to Lukas talk about Cousins confirmed this pattern. I was exhilarated by what QS might teach us, not only in relation to one man’s mental health, but about how we can collectively effect change. What should I measure, and how? I happened to mention the QS project to my accountant, a woman who derives enormous satisfaction from statistics, and who would (if allowed) send me charts showing every dip in my earnings and peak in expenses. She’d never heard of the QS movement, but had unwittingly been a part of it for some time: she was using an iPhone app, Symple, to track certain medical symptoms and their possible causes and cures. I downloaded Symple, which allowed me to list as many symptoms as I liked, and score them each day (on a range from none to severe), as well as what Symple calls ‘tags’ (things that cause or alleviate the symptoms). I hoped to remedy tiredness, an eczema-like skin condition, stiffness, and irritability. Over two weeks, it became clear that tiredness was the thing that bothered me most. I was delighted to have even this highly subjective data to give me a sense of what really mattered. I felt like a man who, on being given a map, finally realises that he was lost all along. Having noticed my tiredness problem, then what? The remedy seemed too obvious: go to bed earlier. I mentioned my findings to my wife, who remained unfascinated. I was learning that the QS experiments of one person might not be at all interesting even to that person’s nearest and dearest. Over the following days, I observed that she made no obvious effort to go to bed earlier; I was at liberty, she told me, to go to bed earlier myself. My other problems were more complex. If you have a simple pair of variables (to bed early/to bed late) it’s relatively easy to choose between them. But most of life is full of variables. For instance: how am I supposed to list, let alone measure, what makes me irritable? Lacking a better idea, I decided to track my coffee intake (negligible), cups of green tea (too many) and glasses of water (nil, shamefully, some days). I downloaded another app, Trakr, to count it all. After a few days I concluded that because Trakr does not create graphs, showing whether my intake had risen or fallen, it was useless. By then I had logged, among much else, 220 minutes of walking, 20 slices of bread, five Facebook posts, and 62 cups of green tea. I’d learnt nothing. And it felt like I’d moved into a tiny apartment with my inner statistician, who didn’t appear to be very talented, and wasn’t enormously fun to be with. Seeing others getting satisfaction from their QS experiments only added to my frustration. One friend told me he’d run a half-marathon with a monitor that told him his average speed over every mile and how many people he’d overtaken or been overtaken by. 
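The kind of pattern-spotting described above — scoring a symptom each day and tagging possible causes or remedies — can be sketched in a few lines of code. The example below is hypothetical: the field names and sample entries are invented for illustration and are not taken from Symple or any other app. It simply compares the average severity of one symptom on days with and without each tag.

```python
# A hypothetical sketch of symptom/tag self-tracking, in the spirit of the
# logging described above. Field names and sample data are invented.
from statistics import mean

# One entry per day: a 0 (none) to 3 (severe) score for a symptom, plus tags
# for things that might cause or alleviate it.
daily_log = [
    {"tiredness": 3, "tags": {"late_night", "green_tea"}},
    {"tiredness": 1, "tags": {"early_night", "walked_30_min"}},
    {"tiredness": 2, "tags": {"late_night"}},
    {"tiredness": 0, "tags": {"early_night", "green_tea"}},
    {"tiredness": 3, "tags": {"late_night", "walked_30_min"}},
]


def tag_effect(entries, symptom, tag):
    """Average symptom score on days with the tag minus days without it."""
    with_tag = [e[symptom] for e in entries if tag in e["tags"]]
    without_tag = [e[symptom] for e in entries if tag not in e["tags"]]
    if not with_tag or not without_tag:
        return None  # not enough data to compare
    return mean(with_tag) - mean(without_tag)


all_tags = set().union(*(e["tags"] for e in daily_log))
for tag in sorted(all_tags):
    difference = tag_effect(daily_log, "tiredness", tag)
    if difference is not None:
        print(f"{tag}: tiredness {difference:+.1f} on days with this tag")
```

With only a handful of subjective scores the differences prove nothing by themselves; the value lies less in the statistics than in being made to notice what is worth logging at all.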
He used the information to pick up his speed, and produced a personal best time. Undefeated, I loaded a more complex app, TracknShare. This was a bit like Symple, but with a broader scope than health. Within each category, TracknShare provided variables I might wish to monitor, which explains how I found myself entering data on the number of portions of meat/beans I had eaten in any given day. It didn’t take many days before I made a discovery: I was bored. Meat/bean intake might be utterly absorbing for some, but not for me. And I certainly had no intention of inflicting that information on my Facebook friends and Twitter followers, as TracknShare would have liked me to do. ‘The unexamined life is not worth living,’ Socrates said. Ditto the over-examined life, as QS sceptics like to point out. In seeking to quantify myself towards self-improvement, I was embracing and quickly abandoning various apps (not all of them free) but discovering nothing significant. I was lucky enough not to be very interested in my health, but I hadn’t found the things that mattered, much less identified ways to understand them through statistics. I was failing badly. Why should we use statistics at all? Socrates didn’t. Neither did he insert gadgets into his body. Granted, many insights can be generated by the ingenious presentation and interpretation of statistics, but even enthusiasts find some data impenetrable. Eric Boyd, a leading QSer from Canada, is due to tell the European QS conference in May about data collected by his wearable Nike Fuelband, which monitors his daily biometrics. Some of the spikes in the graphs remain utterly mysterious to him. I wish Boyd luck resolving that mystery, but I don’t have the time or interest to pore over graphs that refuse to function as any kind of mirror. Julia Cameron, in her classic self-help book The Artist’s Way (1991), which makes no references whatever to apps, microchips or the internet even in its revised 2002 edition, encourages us to effect significant personal change by writing daily ‘morning pages’ — essentially automatic writing that can be about anything and is not designed to be published, shared or even re-read by the writer. Noticing what comes up in our morning pages is a good way to find out what really matters to us. While running my QS experiments I did some of Cameron’s low-tech self-improvement work too, and found it to be at least as helpful as anything on my iPhone. Life coaches use this kind of journaling with their clients: ‘Make a note of the number of times you do this,’ they might say. The very act of noting it makes us stop doing the things that are unhelpful, and do more of the things that do help. Until last summer I had never had the experience of being coached. I had low expectations, but I knew and trusted the woman coaching me. In the event, I was blown away by the effect of being allowed, encouraged and even cajoled to talk about the things that really mattered to me and the things I wanted to do. My sessions led directly to a number of significant achievements, and to a slightly different sense of myself. I was so impressed, I started training as a coach, to give others a space to share what most of us usually keep locked up. When I discovered the Lift app, I saw that it offered some of the same benefits. Unlike other apps I’d used, it’s intrinsically social, but without necessarily sharing what you do on Facebook or Twitter. 
Like TracknShare, Lift offers suggestions of habits to consolidate, many of them already being actively pursued by other Lift users globally. I selected habits from a list of popular ones: drink more water (50,000+ participants). Easy ones: take multivitamin (19,000+ participants). And I made up some of my own, only to discover that others were doing these too: morning pages. At the end of each day I ticked the habits I’d done and sometimes added a note giving more detail. My screen might refresh with details of another person, sometimes on the other side of the world, who had just ticked the same item, and if it felt right I might give them a ‘prop’ (like a Facebook ‘like’). Interestingly, I found that propping others felt just as good as when, occasionally, others propped me. Now, I don’t know who the people are who give me the occasional prop, but the very fact that they’re able to see what I’m doing makes me feel more committed than I was with the other apps. This quality of relationship, of sharing, is hugely important. A recent survey of New York’s QS Meetup group found that only 49 per cent of them share specific data with others — for reasons related to privacy that we’ll come back to. What is telling, however, is that they come together in a meetup, to be together and to share something. QS, it seems, is fundamentally social. Like any other form of self-improvement, QS is not about securing an ultimate fix. It’s about noticing how you live now, in fine detail On Lift, the very act of signing up to a particular habit made it more difficult to overlook the step that I wanted to happen. For instance, after giving myself a tick for calling my parents, I noticed the next item on my daily list: meditation. Having noticed it, I immediately slipped off my chair and sat in the lotus position on the kitchen floor, shut my eyes and did a spot of zazen. Afterwards, I gave myself a big green tick, and then my wife appeared, and asked me to help her with something I’d not been looking forward to. Surprisingly, I found I was able to help without the slightest flicker of irritation, and without once looking at my watch to see when the torture might end. In short: I was happy. Was this really because of the meditation? I think so. Self-hacking can be about self-awareness in the moment, rather than always trying to move towards a significant longer-term goal — a way of creating beneficial everyday habits. After all, cleanliness is a state to aspire to, but having achieved it, even the most self-satisfied people accept that they will at some point in the near future need to bathe or shower again. I have no difficulty remembering to shower, but if meditation is helpful, and I want to do it more, it can help to log it on my iPhone. I take my inspiration in this from Nancy Dougherty, a QSer in the US who wanted to be more mindful. She started tracking her smiles, by means of a couple of electrodes attached to her temples. Every real smile, causing the skin to crinkle around her eyes, lit up a set of Christmas tree lights that she wore on her head throughout the day. By this means she noticed that everyday office interactions were not merely task-oriented, but were also opportunities to ‘express joy together’. When my phone reminds me to meditate, I shall remember Dougherty, lighting up among her co-workers, and causing them to light up too. 
QS doyens suggest that the next generation of gadgets and devices will proactively track users and analyse data, suggesting ways to alter routines in order to hit the metrics we set. For instance, if wearable devices notice that you are sitting down for too long, they might tell you to get up and stretch your legs. I’m not convinced this will work. Every day at 10pm, the Symple app on my iPhone continues to remind me to input data though I stopped using Symple weeks ago. The mere fact of having programmed a device at some point in the past to make certain suggestions to us in the present does not mean we will pay any attention. For similar reasons, QS enthusiasts say it’s better to monitor data manually than have devices that do all the work. When it’s automatic, they report, the significance of what is recorded often escapes them. What this implies is that QS is a kind of secular ritual. To be meaningful, it can’t be carried out on our behalf by gadgets. Additionally, QS can be like a kind of prayer that teaches us whether we really care or not. Tracking myself on Lift for several weeks, I’ve realised that I’d failed to give myself even one tick for gardening on my allotment. In the past, I’ve enjoyed pottering about there. Only by logging it have I forced myself to ask whether going to the allotment remains a pleasure, or if it’s turned into a dreary duty. The monitoring created a measure of mildly painful cognitive dissonance — the distressing mental state of finding yourself doing and feeling things that don’t fit with what you know, or think you know, about yourself. What makes this more distressing is that it’s public. The way we engage on social media leaves a ‘digital fingerprint’, or ‘data exhaust’, which could possibly be valuable to us and to society at large. Indeed, sometimes the public nature of the digital fingerprint is the whole point. An app called DidThis encourages the spread of beneficial actions — you do something and, if other users like it and do it too, that action creeps up the ranking. This is about more than individual self-improvement. It’s about deliberately trying to effect beneficial change at a social level. But our digital fingerprint might also create trouble, as Andrew Keen argues in his book Digital Vertigo (2012). We are fools to sacrifice our privacy by putting everything on Twitter, Facebook and blogs, Keen says. He compares us to Jeremy Bentham, the 19th-century philosopher who wanted to be stuffed after his death and now sits on view in his glass case at University College London, for all to see, forever. Others similarly worry about the privacy risk within the QS movement, including some academics who attended last year’s QS conference in Palo Alto. Self-trackers, they think, are locking themselves into panoptical prisons through a convergence of narcissism, consumerist gadget love and conformist obedience to corporate monitoring. But what information have I shared in the big data cloud? Well, I’ve made public my lack of commitment to my allotment. I’ve counted the number of times I’ve spoken to my parents. And on 18 March I told other users of Lift (under the habit ‘Be grateful for something’) that I’m grateful for ‘Google hangouts’, believe it or not. How might current or prospective employers behave differently if depression was aggregated into some kind of assessment of a person’s mental health? These little bits of seemingly innocuous data from individuals can be aggregated across entire populations, with effects that are good and bad. 
I recently discovered a texting app that monitors the language we use and identifies changing patterns that might indicate conditions such as Alzheimer’s. Is that something you would welcome as useful, or would it make you paranoid? When individuals are seen only through the prism of statistics, they’re liable to feel misunderstood and mistreated. This is a grave matter. ‘In the aggregate, QSers generate data that makes large institutional data-collectors salivate,’ the ethnographers Dawn Nafus and Jamie Sherman of Intel Labs say in their draft paper on the quantified self movement, ‘This One Does Not Go Up to 11’. Every granule of health-related data contributes towards a bigger picture of society as a whole. That big picture might produce real benefits, helping to pinpoint causes and even cures, but what might change for each one of us, individually, if insurers got their hands on this information? How might current or prospective employers behave differently if things such as mood swings or depression were aggregated into some kind of assessment of a person’s mental health? What would ensue, if my Lift data was aggregated with other people’s? The government might start selling off (even more) allotments, under the mistaken impression that people have gone off growing their own food, when actually the problem is a shortage of time. Google might attain some nefarious advantage over me and other users of Hangouts. And my parents might start to notice patterns in my phone usage. They might complain if I fall short, or, just possibly, complain if I call them too often. Like any other form of self-improvement, QS is not about securing an ultimate fix. It’s about noticing how you live now, in fine detail, and moving playfully towards new ways of being — always fully aware that there will be more changes to make when you get there. Ironically, it might be this endless, rapid process of change that protects us from the dark forces of big data. ‘Evolving one’s tracking practices frustrates would-be big-data collectors,’ say Nafus and Sherman in their paper. ‘Creating an aggregate coherence from fragments of three-week tracking stints is far more difficult to do than from steadily collected longitudinal baselines.’ Rather than beat myself up for migrating, restlessly, from one app to another, I shall give myself a pat on the back each time I do it. In fact, more than that, I shall try somehow to consolidate it — make it a regular part of my life — as a very healthy habit indeed. | John-Paul Flintoff | https://aeon.co//essays/the-quantified-self-is-a-spirituality-for-our-times | |
Cosmology | Living in space was meant to be our next evolutionary step. What happened to the dream of the final frontier? | Back in the 1970s, my brother and I shared a cabin aboard a space cruiser. Dominated by a sturdy bunk bed, it was roughly four by four metres, with a porthole at one end and an airlock at the other. Our little cabin was wonderfully hermetic: it contained all necessary life support systems — a plastic bottle to pee in, rubberised garden gloves for spacewalks — and even advanced communications technology in the form of a chunky transistor radio, whose terrestrial signals we tried our best to ignore. Occasionally our cruiser morphed into a planetary outpost, its precise location varying according to star date. In winter, we gazed upon the snow-blown wastelands of the ice planet Hoth. In the heat of midsummer, our air-conditioned outpost sat coolly on the scorched plains of Tatooine, or a high, wind-blasted ridge on Vulcan. To be a child of the 1970s was to fantasise not merely about travelling in space but also about living there, permanently. This was the era of Salyut and Skylab, humanity’s first orbiting residences. In the space of a decade, humankind had progressed from the occasional, furtive dash beyond the blue to more extended stays in orbit. By the end of the 1970s, both Mir and the International Space Station (ISS) existed in prototype. Meanwhile, space agencies were busy planning more ambitious cosmic habitats, such as inflatable lunar cities and modular Martian towns. Millions of dollars were poured into planning and testing these and other futuristic space abodes. Forty years on, we find ourselves at a crossroads when it comes to living in space. Right now, there are four space stations in orbit: the ISS, whose six-person crew occupies the habitable volume of a five-bedroom family home; a tiny but growing Chinese space lab called Tiangong-1; and two un-crewed inflatable stations — each the size of a small caravan — owned and operated by the Nevada-based Bigelow Aerospace Corporation. Another Bigelow module might soon link up temporarily to the ISS — a small expansion of our habitable real estate in the vacuum. The business of getting people to and from orbital space is now largely routine, thanks in no small part to the retirement of the accident-prone Space Shuttle and a greater reliance on sturdier rocket-and-capsule technology. New entrepreneurial companies such as California’s SpaceX and XCOR are also bringing costs down by introducing market efficiencies to an industry historically driven by quasi-governmental sinecures. Many happy billions of dollars are there to be made in the human spaceflight business. Space tourism, driven by ‘astropreneurs’ such as Virgin Galactic’s Richard Branson, will soon add hundreds of wealthy people to the astronaut ranks, but only for brief sojourns: they’ll reach suborbital space for a few quick minutes before returning to the atmosphere. Eventually, those high-paying tourists might want to stay awhile; Bigelow Aerospace has made no secret that its inflatables would be ideal for such a purpose, and the ISS has already hosted several tourists. Study upon study has indicated that many happy billions of dollars are there to be made in the human spaceflight business, which includes not just space labs, stations and hotels but also outposts on the moon and beyond. 
Space futurists — many of whom I count as friends — can finally, and with some measure of reality, lay claim to the idea that we are on the verge of fulfilling the philosophical promise of the Space Age and becoming what the SpaceX founder Elon Musk describes as ‘a multi-planet species’. Certainly, it has taken longer than they’d hoped: the pace of the Apollo years was unsustainable, being largely fuelled by the geopolitics of the Cold War, and space bureaucracies have been slow to take advantage of entrepreneurial efficiencies. Space futurists argue that things are changing. They insist that a new Space Age is dawning. But what if the signs they see are only the last wispy auroras of the first one? Whether launched for profit or pride, the ISS, Bigelow Aerospace and Chinese space stations are artifacts of a particular cultural moment, when living in space was thought to be the next step in humankind’s evolution. Space had become more than an ocean to traverse, pace Kennedy. It had become, in that iconic Star Trek phrase, ‘the final frontier’. I am as big a Star Trek fan as anyone, but I fear the frontier analogy misses the mark. On the frontier of old, one expected to find a better version of the world left behind: more land, more resources, more possibility. But the more we learn about ‘space’, the more we understand that living there would mean being forever enswathed in a portable bubble of Earth, with the goal being merely to survive. Even in the heady 1970s, my brother and I were keenly aware of this imperative: we spent most of our time aboard the space cruiser checking its equipment, its stocks of tubed food-paste and canisters of fuel, and of course air. Always and forever, air. The dream of living in space was more about Utopia than utility. With our dependence on sat-nav, mobile phones, satellite-based weather prediction and other essentials of 21st-century life, one might conclude that our lives are more entangled with space than ever. But this dependency is rather different from the dream of living in space, which was more about Utopia than utility. While I would still happily squeeze myself into an orbiting tin can (or inflatable habitat), I remain in a minority whose size and demographic have not changed meaningfully since my boyhood. The bulk of humankind doesn’t see its future as inexorably linked to leaving the planet. Outside of hardcore space enthusiasts, we as a species feel more Earthbound than at any point since the earliest days of the Space Age. So what happened? Where did our collective childhood dreams of a life spent exploring the universe go? The imaginary cosmic outpost my brother and I shared was informed by very specific design constructs. We had a clear idea of its basic configuration: airlock, porthole, and a dazzling array of controls for weapons, robotics, and in-flight operations, which were made of Lego bricks, cardboard, and the odd spare coat button taped to a desk. There was also an understood aesthetic: it had to be tidy, nothing out of place, everything precisely to hand. Our interplanetary voyages were perhaps the only occasions on which we voluntarily cleaned our room. In space, no one wants to see your clutter. Haute cuisine on the ISS: a packet of apricot juice, a can of lamb with vegetables, shrink-wrapped lasagna, bread and dried fruit. Photo courtesy NASA. Most of these ideas about the architecture and aesthetics of space habitats were drip-fed via a symbiotic relationship between fiction and reality. They are still with us. 
Space habitats, whether free-floating or on planetary surfaces, are still portrayed as ascetic environments crammed with technology, their interior surfaces a landscape of pre-moulded plastic or metal. They are not spaces in which to luxuriate or play, but are functional to the point of spartan severity. This was not always the case. Writing in the early 20th century, the Russian scientist and space futurist Konstantin Tsiolkovsky described free-floating ‘space islands’ that were lush, self-sustaining communities not unlike a utopian collective farm. In the 1970s, the late American physicist Gerard O’Neill elaborated on this idea, proposing massive cylindrical colonies located at a gravitationally stable point between the Earth and the moon. Twenty miles long and five miles wide, O’Neill’s habitats would house thousands of people; constant rotation of the cylinders would provide Earth-like gravity. ‘With an abundance of food and clean electrical energy, controlled climates and temperate weather, living conditions in the colonies should be much more pleasant than in most places on Earth,’ O’Neill wrote in 1974. And yet, even in 1974, O’Neill was fighting a popular tide that had begun to turn against such spacefaring optimism. Beyond the steady churn of futuristic imaginings, whether from O’Neill or NASA or science fiction writers such as Arthur C Clarke, the edifying promise of human space travel was quickly fading. The ‘space race’ might have been the most benign aspect of the Cold War, but the exploits of Yuri Gagarin, Neil Armstrong et al hardly resolved the conflict. Nor had space technology solved other Earthly challenges, as many had been led to believe it might (not least by NASA and its supporters). If anything, by the early 1970s, space-driven futurism and its quick-fix buoyancy seemed to magnify, even refract, the real world: its wars, its poverties, its growing environmental challenges. Within the course of a generation, the perception of living in space had swung wildly from Utopia to dystopia, and nowhere was this more evident than at the movies. The 1970s and early ’80s were arguably the golden era of science fiction filmmaking, and many of these films solidified in the popular consciousness a darker view of life in space, a kind of cultural backlash against the shiny optimism of those earlier Space Age ideas. The Edenic space station in the 1972 film Silent Running isn’t an O’Neillian paradise but rather a grim ecological ark holding the last remnants of Earthly plant and animal life, the rest having been destroyed by human-made ecological carelessness. Sci-fi films of that era also changed our aesthetic perception of space living. Alien (1979) and Blade Runner (1982) popularised a visual idea that the filmmaker James Cameron calls ‘used future’, in which the interiors of spacecraft and outposts are dank, greasy, careworn and, unlike my bedroom cruiser, rather messy. Perceptions of human relationships in space changed, too: even before the acid-blooded aliens came along, it was a dull, bickering life aboard Alien’s spacecraft Nostromo, which looked less like a spaceship than an outsized auto-repair shop. The 1984 Peter Hyams film 2010, the follow-up to Stanley Kubrick’s gleamingly iconic 2001: A Space Odyssey (1968), was likewise gloomier in its visual and emotional palette. 
Far from enjoying Kubrick’s sleek Pan Am space cruisers and waltz-powered space wheels, Hyams’s travellers (those who’d survived: several had died in cryogenic suspension) inhabited a ship so dimly lit that viewers struggle to see what’s going on. There is also a pervading sense of loneliness in the film, and more focus on inter-crew conflict; 2010 traffics in far muddier notions about human nature and its place in the cosmic order than its predecessor. Ironically, our actual experiments in space living have largely reinforced this stark perspective. Real life in space is often cramped, unpleasant and even pointless. Some years back, I visited Star City near Moscow, the training centre for cosmonauts since Gagarin, where I had a chance to clamber inside a full-scale training mock-up of the Mir space station. The experience was more like residing inside a computer terminal than one of O’Neill’s cylindrical islands, so proximate and abundant were tubes, wires, levers, buttons and unnameable gadgets. More disorienting was the placement of controls and conveniences: because space was limited, these were distributed throughout the station without reference to Earthly gravity, thus making use of ‘ceilings’ as sleeping quarters, walls for toilet cubicles and virtually any other surface for any other activity. One could get used to such things (and you’d have to be a true cynic to tire of the view outside your window). But it’s a far, far cry from strolling the wide corridors of the Starship Enterprise. As with Mir before it, occupants of the International Space Station must undergo a battery of psychological tests to ensure they can get along without incident, given that they are crammed together for months on end in an isolated house they can’t leave (not unlike the TV contestants on Big Brother, surely an unsung living-in-space spin-off). The ISS is easily the roomiest extra-terrestrial dwelling yet built, but it offers a bare minimum of privacy: the NASA astronaut Sunita Williams described her sleep/work quarters on the ISS as being ‘like a little phone booth’. Most of what we have learned about living in space is that we should not live in space. We are designed for gravity; without it, strange things happen to both body and mind. For each month spent in space, humans can lose up to two per cent of their bone mass. This means that each day, for hours on end, the ISS becomes the world’s highest-flying gym to keep its occupants fit. But even with such precautions, some returning space travellers require months of rehabilitation to readjust to life on Earth. Others, despite having access to the best facilities and treatments available, experience headaches, sight loss, and undiagnosed physical and psychological frailty for the rest of their lives. But these are mere hardships, not showstoppers, and those who’ve pioneered at the edges of human experience have always managed to endure them. Physiological challenges aside, life aboard the ISS is not unlike life on a submarine or in an Antarctic research station: isolated, cramped, and relentlessly task-focused. ‘But,’ the space futurist will say, ‘who is to say these limitations are permanent?’ After all, we might one day be able to create artificial gravity, which would significantly minimise the damage done to the human body in space. 
We might one day be able to build, launch and populate some version of the floating paradise envisioned by Tsiolkovsky and O’Neill, giving us greenery and companionship in space — and some measure of Earthly elbow room. ‘One day’ is the sustaining trope of today’s astropreneurs, and it is mother’s milk to the clever engineers and researchers at NASA and the European Space Agency, who continue to churn out studies and CGI animations pushing, ever pushing, for a humans-in-space future. One day, anything is possible: science and science fiction, hand in hand, have conspired to make us believe this is true. One day, living in space might be as easy as living on Earth. But will it matter to anyone? That we might be able to live in space does not mean that we still want to, or that the arguments put forward for doing so will still resonate across the cultural landscape. Indeed, a closer look at the four space stations now in orbit reveals that the living-in-space dream is, in fact, in serious trouble. No amount of spin can mask the incredible expense of the International Space Station, which has thus far cost an estimated $150 billion to build and operate. For that price, NASA could build, launch and operate several dozen Mars Curiosity rovers. The station’s scientific value is routinely criticised as being paltry, particularly when compared with other high-end science projects such as the Large Hadron Collider, which was built for about $10 billion, less than a tenth of the price of the ISS. The ISS is routinely promoted as a stellar example of cross-cultural collaboration, but it’s unclear whether the multi-national consortium that runs it will keep it operating past 2020. China’s ultimate aims for its spaceflight programme are the subject of constant speculation. The country has lately pursued an ambitious manned space programme, but only because it understands that manned spaceflight is a status marker among superpowers. How long will it be before other ‘status markers’ of China’s global rise — its mega-cities, its growing military machine, its tourism, the growing ‘soft power’ of corporate tech brands such as Huawei — supersede the importance of human spaceflight? That leaves the entrepreneurs. Along with SpaceX and Virgin Galactic, Bigelow Aerospace is one of the forerunners of the entrepreneurial space sector, upon which off-world enthusiasts pin their hopes. The American hotel magnate Robert Bigelow has so far bankrolled the company with a reported $250 million, and he is on record as committing to another quarter-billion through 2015. At that point, presumably, he hopes the company will begin turning a profit. In 2007, around the time it launched its second unmanned station, Bigelow Aerospace boasted that as many as 800 paying crewmembers could be flying in 10 years’ time. Six years later, the number of Bigelow astronauts remains zero. Bigelow has launched ambitious self-funded enterprises before, but he is also known for being capricious with them. In 2004 he pulled the plug on the National Institute for Discovery Science, which funded paranormal research. Bigelow’s pockets might be deep, but no one can fund a business endlessly without any paying customers. The Space Age dream of extra-terrestrial humanity is at its heart a tautology: we will expand into space because we will — ‘one day’ — be able to expand into space. 
Yet it is easy to find useful analogies to the contrary. In the 1960s and ’70s there were more than a dozen human habitats — occasionally dubbed ‘inner space’ stations — scattered beneath the world’s oceans. The most famous was Tektite II, launched in 1970 into Great Lameshur Bay in the US Virgin Islands, with an all-female crew of scientists led by the American oceanographer Sylvia Earle. Like the drive to move into space, sea colonisation was once seen as an essential step in humanity’s future. The undersea world is a better fit for the classic frontier ideal: untapped resources, an abundance of uninhabited space (uninhabited by humans, at least), and with most required habitation technologies fairly well-established by decades of submariners. But by the 1980s the undersea colonisation movement had largely withered away. The French aquatic explorer Jacques Cousteau, who was instrumental in establishing one of the earliest undersea stations, Conshelf off the coast of Marseilles in 1962, was also among the first to disassociate himself from the colonisation movement, which he said was contrary to the real need for human intervention: conservation. Tektite’s Earle, now explorer-in-residence at National Geographic, travelled much the same trajectory. There are just three operational undersea stations left: two are used for oceanographic research off the coasts of Rhode Island and Florida, while the third is a privately owned underwater hotel in Dubai. The sea/space analogy isn’t perfect. Space, as we understand it, is tabula rasa in its purest form: no life to trample upon, no natives to displace, no border disputes to wrangle over. Taken on more generous terms, our push to colonise the seas or space might be viewed as a natural expression of the human need to expand and explore. But we are now more self-conscious, and less hubristic, about what such expansion might bring. The more we understand our own impact on Earth, the less we seem inclined to inflict it elsewhere. There might be arguments for living in space that resonate more fully with the concerns of our time. The explosion of a meteor over Russia this February has placed new emphasis on the early detection of stellar objects that threaten to collide with Earth: could human-tended stations or bases be part of the solution? It is unfortunate that suborbital civilian spaceflight has thus far been branded with the ‘tourism’ label, which diminishes its potential to embrace a wider audience. As it stands, civilian spaceflight is largely perceived as what wealthy individuals do when they’ve climbed Everest and want a bigger trophy. It’s understandable that companies such as Virgin Galactic and XCOR want to monetise their work, but they also need to demonstrate the potential for suborbital spaceflight to be meaningful, which means opening up a few seats for the hoi polloi. Transitioning human spaceflight from a military elite to a wealthy elite would hardly be progress. There might also be resonance in applying the technological and psychological lessons of living in space to the challenges of living on an increasingly crowded Earth. The potential synergy between the design requirements of close living in space and close living on Earth could use more attention. Space Age buildings such as the Nakagin Capsule Tower in Tokyo — built in 1972, and so retro-futuristic as to appear to be CGI — are more about form than function. Perhaps architects and urban planners should be offered berths aboard the ISS or Bigelow’s inflatable habitats. 
Our drive to live in space has to serve human needs, not human fantasies. Since the days of my childhood space cruiser we have become, by and large, a more self-aware species. Maybe that’s a necessary first step towards a meaningful spacefaring future. Step two will be harder. The next Space Age will require more than humane starships and flashy technologies — more than roomier bunk beds and better rocket fuels. It will take new ideas that are compelling enough to convince us that this wonderful planet of ours isn’t the endpoint of human evolution, but just the beginning. | Greg Klerkx | https://aeon.co//essays/space-habitat-for-humanity-or-backdrop-to-fading-fantasies | |
History | Even in the decade of dissent, Thomas Szasz stood alone when he attacked the idea of madness from the political Right | In 1961, a young psychiatrist initiated a one-man insurgency against his own profession. ‘Psychiatry is conventionally defined as a medical specialty concerned with the diagnosis and treatment of mental diseases,’ he wrote. ‘I submit that this definition, which is still widely accepted, places psychiatry in the company of alchemy and astrology and commits it to the category of pseudoscience. The reason for this is that there is no such thing as “mental illness”.’ Fifty years after his book The Myth of Mental Illness: Foundations of a Theory of Personal Conduct first ventured this uncompromising view, its author Thomas Szasz visited Cornell University in upstate New York. He was there to speak to an audience of students, many of them coerced or bribed by their professors to attend, plus a few local lawyers and psychiatrists. His subject was ‘The Insanity Defence: The Case for Abolition’. The talk started late because a man in a wheelchair was being positioned near the front of the lecture hall. Szasz greeted him enthusiastically; the audience would later learn that he was Ronald Leifer, a psychiatrist who had been denied tenure at the Upstate Medical Center at Syracuse in 1966 for defending Szasz and his iconoclastic ideas against practically the whole of the psychiatric profession. When it finally started, the lecture was heavily anecdotal and lasted barely half an hour. The 91-year-old psychiatrist spoke in a quiet voice and with a thick Hungarian accent. Students shifted in their seats. Then came the Q&A. Although the subject was the insanity defence, the audience was more interested in Szasz’s assertion that there was no such thing as mental illness. ‘What about schizophrenia?’ ‘How can you be a practising psychiatrist if you don’t believe in mental illness?’ One student asked him: ‘Are you trying to say we all have different brains?’ The lecturer seemed unsteady on his feet. ‘Yes,’ he replied, ‘we do.’ Another student put it to him that we might be determined by our neurological make-up. ‘I think you and I have different brains,’ Szasz replied. That got a laugh from the audience. It was clear that being the only one in the room with a brain like his was part of his persona; being contrarian was his way of being right. Throughout his career, even friendly co-optation irked him. When scholars started associating him with the anti-psychiatry movement, he wrote a book entitled Antipsychiatry: Quackery Squared (2009). Szasz liked to present himself as a dissident. And yet, when he began dynamiting the foundations of psychiatry in the 1960s, rebellion was in vogue, and he seemed very much a man of his time. Along with so many other radicals of the decade of dissent who got half of what they wished for, he has largely been forgotten, his troubling declarations defused by decades over which he worked as an academic and a practising psychiatrist. After the talk at Cornell, he confided over a stiff drink that he generally did not give talks anymore. ‘I’m too old,’ he told me. ‘Plus, not many people know I’m still alive.’ Indeed, not long after our conversation, Szasz died last fall. But did his ideas die with him? On the contrary, it might be that the world has only recently come around to his way of thinking. 
Near Szasz’s school in Budapest there stood a statue of Ignaz Semmelweis, a Hungarian obstetrician who found posthumous fame as a 19th-century martyr of science. To Szasz, the sickly and discontented young son of a Jewish businessman, Semmelweis became something of a hero. The late doctor’s claim to fame had been the discovery that it was possible to practically eliminate the often-fatal ‘childbed fever’ common among new mothers in hospitals if doctors simply washed their hands before assisting with childbirth — especially if they had just been performing autopsies. When his findings became more widely known in the 1840s, he expected a revolution in hospital hygiene. It didn’t come, and Semmelweis grew increasingly outspoken and hostile towards doctors who refused to acknowledge his discovery. Vitriolic academic exchanges ensued, and he was eventually lured to a mental hospital where his opponents had arranged for his incarceration. He was beaten severely and put in a straitjacket. He died within two weeks. Echoing Voltaire, Szasz recalled the doctor’s tragic life in an autobiographical sketch in 2004: It taught me, at an early age, the lesson that it can be dangerous to be wrong, but, to be right, when society regards the majority’s falsehood as truth, could be fatal. This principle is especially true with respect to false truths that form an important part of an entire society’s belief system. In the past, such basic false truths were religious in nature. In the modern world, they are medical and political in nature. Szasz was still a teenager when his Jewish family left Hungary, and just preparing for college when they settled in the US in 1938. He later confessed that his knowledge of America prior to his arrival was sketchy, and largely based on reading The Adventures of Tom Sawyer (1876) by Mark Twain. He had heard the ‘usual tales’ about ‘the land of movies, money, and the mistreatment of blacks’. When he enrolled in the University of Cincinnati in the winter of 1939, he discovered that discrimination against Jews, ‘not to mention blacks and women’, was ‘perhaps even more intense’ than it had been in Hungary. Though he earned a degree in medicine, Szasz was much more interested in politics and philosophy. He chose training in psychoanalysis in Chicago, then a centre of the psychoanalytic craze, over a career as a medical doctor. Demonstrating textbook psychoanalytic ambivalence, he was simultaneously attracted and repelled by the prevailing image of psychoanalysts as the elect. In the same autobiographical sketch from 2004, published as part of the collection Szasz Under Fire: The Psychiatric Abolitionist Faces his Critics, edited by Jeffrey Schaler, he recalls: The analysts passionately believed that they were treating real diseases, never voiced objections against psychiatric coercions, and believed that criminals were mentally ill and ought to be treated, not punished. These beliefs were an integral part of their self-perception as members of an avant-garde of scientific, liberal intellectuals. His fellow psychoanalysts, with their ‘Left-liberal “progressive” prejudices’, fanatically denounced Republicans as ‘either fascists or sick or both’. As a practising psychoanalyst, an academic psychiatrist (with tenure) and a Right-wing libertarian, Szasz felt he belonged to an embattled minority, an elect of a different sort. It was the ideal position from which to deliver his dissident strike. 
It came in 1961 with the publication of The Myth of Mental Illness, wherein Szasz asserted that psychiatry, unlike medicine, could demonstrate no physical basis for the ‘diseases’ it identified and ‘treated’. ‘To speak of elevated blood pressure and hypertension,’ he wrote, ‘of sugar in the urine and diabetes, all as “organic symptoms”, and to place them in the same category as hysterical pains and paralyses is a misuse of language; it is nonsensical.’ Masquerading as scientists, psychiatrists abused scientific concepts and deluded their patients. Worse still, they acted as henchmen for society and state. ‘[T]herapeutic interventions have two faces,’ Szasz wrote; ‘one is to heal the sick, the other is to control the wicked’. Yet the standard for wickedness is always subjective and variable, and so the psychiatrist inherited from the Inquisition the task of quarantining society’s dangerous elements. It was not a coincidence that, even decades after the word ‘psychiatrist’ entered English in 1890, practitioners were often called ‘alienists’, derived from the French aliéné, meaning both ‘alienated’ and ‘insane’. First, Szasz wrote, it was ‘God and the priests’ who kept the unruly in check. Then came ‘the totalitarian leader and his apologists’, along with ‘Freud and the psychoanalysts’. The most enthusiastic readers of The Myth of Mental Illness did not share — or even know about — Szasz’s Right-wing leanings, which are not evident in the book. As one critic, R E Kendell, the late president of the Royal College of Psychiatrists, has pointed out, his early devotees were often Left-wing students eager to overthrow established dogma across the board. Another of Szasz’s critics, the Harvard Medical School psychiatrist Thomas Gutheil, called him ‘a ’60s kind of guy’ and ‘an anti-establishment rebel’. Szasz certainly wasn’t alone in seeing a sinister force behind diagnoses of insanity. There seems to have been something in the air in 1961: a few months after his book came out, Ken Kesey’s novel One Flew Over the Cuckoo’s Nest (1962) introduced the popular imagination to the iconic nightmare of Nurse Ratched, a character whose narcotic soft power could transform the socially marginal into the terminally insane and literally lobotomise dissent. ‘Total institutions’ are the theme of the Canadian sociologist Erving Goffman’s essay collection, Asylums: Essays on the Social Situation of Mental Patients and Other Inmates (1961). In Goffman’s analysis, mental hospitals were places in which incarcerated individuals were ‘systematically, if often unintentionally mortified’, generally becoming ‘cooperators’; ‘normal’, ‘programmed’, or ‘built-in’ members. It was in 1961 that the French historian Michel Foucault published Madness and Civilization. Foucault, coming from the Left, concluded in eerie harmony with Szasz that language was behind the partition of the ‘sane’ from the ‘insane’. ‘[T]he language of psychiatry … is a monologue of reason about madness.’ Also in 1961, Frantz Fanon, a psychiatrist working in Algeria during the Franco-Algerian War, wrote The Wretched of the Earth, condemning the psychiatric profession for using the language of medicine to label African resistance to colonialism as a kind of mental illness. 
Within this 1961 consensus, Szasz was conspicuously alone in mounting the barricades from the Right. But he and his new allies were soon to part ways. In 1962, Major General Edwin Walker was charged with ‘inciting, assisting, and engaging in an insurrection against the authority of the United States’ for calling on residents of Mississippi to rise up and oppose the admission of a black student into an all-white college. Walker believed, among other things, that communists had infiltrated the US military (if this sounds familiar, it might be because Walker was a model for General Jack D Ripper in Stanley Kubrick’s 1964 film, Dr Strangelove). Instead of facing a military hearing, Walker was flown for examination to the US Medical Center for Federal Prisoners in Missouri. A government psychiatrist concluded, based on reports of Walker’s behaviour, that he was probably mentally disturbed. Szasz protested the decision, and Walker was allowed to go free. Writing about the Walker case in 2009, Szasz contended that the state’s attempt to pathologise the major general as a ‘racist’ bore comparison with the pathologisation of escaped slaves in the 19th century: Before the Civil War, proslavery physicians in the South diagnosed black slaves who tried to escape to the North as mentally ill, ‘suffering from drapetomania’. In the Walker case, pro-integration psychiatrists in the North diagnosed white segregationists as mentally ill, ‘suffering from racism’. After Walker, Szasz took up the cause of a high-profile Republican. In the run-up to the 1964 presidential elections, Fact magazine published ‘The Unconscious of a Conservative: A Special Issue on the Mind of Barry Goldwater’, which contained the results of an informal survey of psychiatrists on the mental competence of the Republican candidate. More than 1,000 respondents declared him ‘psychologically unfit to be president of the United States’, and several offered a diagnosis of paranoid schizophrenia. Szasz was not among them. In the psychological marginalisation of Walker and Goldwater, he saw a trend towards the pathologisation of the Right in general. The following year he declared that ‘psychiatry is a threat to civil liberties, especially to the liberties of individuals stigmatised as “Right-wingers”.’ If those on the Left focused on how the diagnosis of insanity was being used to marginalise unpopular voices, Szasz insisted the most unpopular voices were to be found, not in the slums or the colonies, but among US conservatives. And yet, when Szasz chronicled the history of ideological quarantine, his own earliest examples tended to feature conservative henchmen. There was the German physician Carl Theodor Groddeck, who in 1849 wrote and published an MD thesis titled De morbo democratico, nova insaniae forma (On the Democratic Disease, A New Form of Insanity). Groddeck’s thesis warned of a democratic epidemic that might destroy all ‘individual self-consciousness’. Szasz also praised the American socialist writer Jack London, whose 1908 novel The Iron Heel raged against the ‘social role of institutional psychiatry’ in segregating and neutralising Leftist opposition. To Szasz, the book was ‘at once perceptive and prophetic’. 
But it prophesied not the later persecution of the Left so much as ‘the tyrannies that were yet to come — in Russia and Germany’: When such bureaucratic and totalitarian principles and methods are applied to mental health planning and organisation — as indeed they are both in England and the United States — the psychiatric physician emerges as a political evangelist, social activist, and medical despot. His role is to protect the state from the troublesome citizen. All means necessary to achieve this are justified by the loftiness of this aim. The situation in Germany under Hitler offers us a picture — horrible or idyllic depending on our values — of the ensuing political tyranny concealed behind an imagery of illness, and justified by a rhetoric of therapy. Such was the bridge Szasz constructed between Jack London’s socialism and his own thinking. Both men occupied an unpopular and embattled opposition, both spoke for the marginalised, and both pointed to a truth concealed by institutional authority. Szasz had no use for the gulf between London’s politics and his own, so he ignored it. Right and Left needn’t bear any relation to right and wrong. Szasz arrived at this separation of politics from morality in part by dismantling the justification for the insanity defence. In 1843, a Scottish radical called Daniel M’Naghten shot Edward Drummond, mistaking the private secretary of Britain’s prime minister Sir Robert Peel for Peel himself. In the course of the trial, M’Naghten was found ‘not guilty by reason of insanity’, and confined to an asylum for the criminally insane. Owing to the high profile of his intended target, this verdict proved unpopular. A select panel of English judges gathered to opine on the legal application of the insanity defence, and their responses were codified as the M’Naghten Rules, which have since set the terms for the insanity plea in judiciaries around the world. Their crux is that, to establish a defence on the ground of insanity, it must be proven that the accused was ‘labouring under such a defect of reason … as not to know the nature and quality of the act he was doing; or, if he did know it, that he did not know he was doing what was wrong’. For Szasz, the M’Naghten Rules failed to acknowledge all the many different circumstances that could impair a person’s capacity to tell right from wrong. They made no distinction between the influences of congenital idiocy, drunkenness and, most importantly for Szasz, ideology. As he wrote in his book Law, Liberty, and Psychiatry (1963): ‘The socioeconomic, political, and ethical implications of deviant behaviour were obscured in favour of its so-called medical causes.’ Thus an act like M’Naghten’s lost all political meaning simply because it was deemed to have been committed by a madman. Given this history of medical persecution, the insanity defence was one area where Szasz and Leftist-progressives of the time could agree on the terms of engagement. Both sides argued it was unfair to call political opponents crazy (although both also did so regularly), and both sides asserted that their own ‘unpopular politics’ had a right to a hearing and a special moral status precisely because they were on the embattled fringe. Yet Szasz’s unease with the insanity label went beyond its propensity to classify political opposition as madness. 
In line with his distinctively conservative perspective, he also feared that it removed responsibility from criminal acts. Unpopular politics should literally have their day in court, and this meant talking about a defendant’s motives (political or otherwise), as well as punishing them for crimes they had committed. This was not a matter of science, but of morality. The same year that The Myth of Mental Illness appeared, the logistical mastermind behind the Holocaust of Hungarian Jewry, Adolf Eichmann, was put on trial in Jerusalem. Eichmann’s case brought the very concept of criminal guilt into question in a new way. In her 1963 book on the trial, the philosopher Hannah Arendt insisted that Eichmann, far from being a monster or a ‘Bluebeard in the dock’, was, in fact, ‘terribly and terrifyingly normal’. She expressed unease with the idea that intent to do wrong is necessary for the commission of a crime. ‘Where this intent is absent,’ she wrote, ‘where, for whatever reasons, even reasons of moral insanity, the ability to distinguish between right and wrong is impaired, we feel no crime has been committed.’ Eichmann was operating in a society that did not merely accept but actively encouraged the killing of Jews, so it was ‘not his fanaticism but his very conscience that prompted Eichmann to adopt his uncompromising attitude’. Nevertheless, Arendt insisted, he was guilty, and his very normality was part of his guilt. The Left raised no objection to Eichmann being considered ‘normal’, because ‘normal’ was just what the decade of dissent despised most. Jean-Paul Sartre, in the preface to Fanon’s The Wretched of the Earth, argued that everyone who lived within the system was guilty of participating in it. Naturally, he used the insanity label to make the case: Fanon reminds us that not so very long ago, a congress of psychiatrists was distressed by the criminal propensities of the native population. ‘Those people kill each other,’ they said, ‘that isn’t normal’ … These learnt men would do well today to follow up their investigations in Europe, and particularly with regard to the French … since our patriots do quite a bit of assassinating of their fellow-countrymen … In other days France was the name of a country. We should take care that in 1961 it does not become the name of a nervous disease. Under the pressure of this two-flanked attack on normality, it’s little wonder that the political spectrum seemed to converge just as the ethical polarity between ‘normal’ and ‘insane’ was reversed. Arendt’s theory of totalitarianism encompassed both extreme ends of the political spectrum; in Hitler’s Germany and Stalin’s USSR alike, she saw a ‘novel form of government’ whose values were ‘radically different from all others’. This made Eichmann a new type of criminal, one who ‘commits his crimes under circumstances that make it well-nigh impossible for him to know or feel that he is doing wrong’. As such, the M’Naghten Rules simply did not apply. All that the insanity label achieved was to excuse the new criminal and quarantine the dissident. Nevertheless, in seeking to discredit the insanity defence in order to preserve morality, perhaps Szasz and Arendt both came unmoored from the traditional political spectrum altogether. This might explain why Szasz’s view of mental illness as a myth was shared by many on the Left. As for Arendt, when in 1972 the American political scientist Hans Morgenthau asked about her politics — ‘What are you? Are you a conservative? 
Are you a liberal?’ — she replied: ‘You know the Left think that I am conservative, and the conservatives sometimes think I am Left or I am a maverick or god knows what. And I must say I couldn’t care less.’ In some ways, the spirit of 1961-style iconoclasm around the insanity label might seem very distant now. Certainly the medicalisation of ‘abnormal’ behaviours continues, to the extent that Szasz’s repeated insistence that ‘ADHD is not a disease’ did nothing to slow the persistent increase in diagnoses. Still, there are signs that we have come to share his moral discomfort with the judicial notion of insanity. On 22 July 2011, Anders Behring Breivik killed 77 people at a government building and Left-wing youth camp near Oslo in Norway. Before long, a morbidly fascinated worldwide audience was scouring the 1,500-page manifesto he had posted online, finding citations from a Leftist literary critic, a Nobel Prize-winning Holocaust survivor, Vlad the Impaler, and the Unabomber, among others. Was Breivik’s deed political or merely mad? This question became central to the legal case against him. An initial psychiatric panel diagnosed him with paranoid schizophrenia. After the results of the first evaluation were announced, the killer, as well as many of his victims and their families, cried foul. In a long letter addressed to the Norwegian media, Breivik — an unapologetic exponent of the extreme Right who greeted the court with a raised fist in fascist salute — wrote: ‘To send a political activist to a mental hospital is more sadistic and more evil than killing him! It is a fate worse than death.’ Meanwhile, 56 of the victims and their families complained that if Breivik were declared insane, it would mean that he was not responsible for his crimes. In that moment, Right and Left converged on the claim that applying the insanity label would amount to a miscarriage of justice. Both sides were satisfied when a second evaluation declared him sane. In the words of Tore Sinding Beddekal, one survivor of the shootings: ‘I am relieved to see this verdict. The temptation for people to fob him off as a madman has gone.’ On 8 September 2012, barely two weeks after Breivik was sentenced to 21 years in prison, the death of Thomas Szasz placed a peculiar bracket on an era of dissent. Though several decades have passed since he first called mental illness a myth, our world is still very much under the influence of his time, when Right and Left sought to eliminate insanity in order to lionise dissent, legitimise the marginal and condemn the new normal. Few other issues show a convergence of Right and Left so far-reaching, while still allowing both sides to adhere to their politics and maintain a sense of total opposition. A hero is born for one side at the same moment that the axe of justice falls for the other, and so it seems that everybody wins. But it might also be that something has been lost. Cosseted by such a firm consensus, could we even recognise true dissent if we saw it? Correction, 14 May 2013: the original version of this article stated that Szasz was a Republican. The more accurate designation is Right-wing libertarian. | Holly Case | https://aeon.co//essays/the-psychiatrist-who-didn-t-believe-in-mental-illness | |
Sleep and dreams | Insomnia brings many gifts — the noises of the night, the twist of narrative, and a stolen march on time | Insomniacs rarely forget when their sleeplessness first began. For some, like the writer Margaret Drabble, it emerges with the birth of a child; for others, it’s an episode of illness or the loss of someone they love. For me, it starts at an uncle’s house in Essex. I am four or five and my uncle is telling me wonderful bedtime tales of adventures in far flung parts, narrative distractions, or perhaps rewards, designed to see off any protests against what his hands are doing under the sheets. Sleep eludes me then not only because I am disconcerted and a little afraid, but also because the stories are too good to give up. Anxious in the dark, I would check for monsters. I sleepwalked. I began to tell stories of my own, and a gradual transformation happened. My stories started to act as guardians against the terrors of the night: so long as I was spinning stories, the dark was dumb and powerless, the monsters had no hold over me. Over the years, narrative became my night-time ally. Sleeplessness and storytelling became woven together in my mind so intricately that today the two are almost inseparable. For me and many other insomniacs sleep is a sort of necessary evil. We are aware that we are treading on delicate ground, that our failure to sleep might any day cross over into a failure to thrive. Insomnia, we know, increases the risk of heart attacks, strokes and mental illness. Still, insomniacs remain too attached to the narrative of the day to want to let it go. Wakefulness is too enchanting, a tableau vivant fabulously embroidered with incident, plot and character. So long as one is awake, the narrative remains open, the threads of life continue to spin in more or less limitless combinations. Being awake at night affords the insomniac the power of reimagining the day. Without the intervention of sleep, we can ignore the intimations of mortality brought nightly as the sun sets. And though it’s true that a lack of sleep can give rise to feelings of frustration and occasionally despair, insomnia is also, in its strange way, a hopeful condition, ripe with the possibility of other endings. Why shut the door on the light when our days are both fragile and finite? The facts, as sleep scientists are increasingly discovering, are that sleep is not a shadow of wakefulness, but an independent state, busy with its own mental and physical events. The brain, we now know, uses as many calories for its processing activities by night as by day, even if those calories are put to different uses. But the events of sleep are ones that exclude our conscious selves, so they can feel as though they do not exist at all. Closing the narrative of the day by sleeping seems to stop time. It is, like sleep itself, a little death. And there lies the rub. Whatever the reality, sleep feels like a state of non-existence. And that is something many insomniacs cannot abide. A friend recently suggested I download a mobile phone app, which purports to track brain patterns in sleep. Insomniacs tend (often unwittingly) to exaggerate their periods of wakefulness, and the friend thought it would be useful for me to know exactly how much sleep I actually get as opposed to how little I think I do. Though I appreciated her concern, the suggestion was one only a good sleeper would ever make. 
Apart from drawing too much attention to sleep, thus making it even more elusive, a sleep map would seem to offer a specious guide to a terra nullius whose topography might well be sketched in by science, but whose more subtle landscapes must forever remain undefined. If we were ever conscious enough to register it, we would not, by definition, be fully asleep. Though sleep is not a story, dreams might be. Rather more often, though, they are hints at storytelling, fragments of narrative that can, even at the time of dreaming, feel random or perhaps simply experimental. As signposts to the otherwise unknowable realm of the unconscious, dreams are instructive and useful to me as a writer but, even so, they hold none of the seductive power of being awake. Even dreams about sex, intensely pleasurable though they can be, rarely hold a candle to the real thing. That said, I’ve had to learn to view my sleeplessness as a gift, and I’m still often unable to experience it that way. Looking back, I can see that my adult life has been ordered around my insomnia and, in some respects, limited by it. My early adult life was dominated by an unwelcome familiarity with sleep clinics and their often vaguely totalitarian-sounding ‘sleep hygiene’ routines, as well as with visits to therapists, meditation rooms and prescription-friendly GPs. The frustrations of failing at something so basic are hardly tempered by the humbling, almost daily confirmation of the limits of my ability to control my own body. Sleep is an elemental human function at which I am, to say the least, inexpert. Being self-employed allows me to escape from the daily alarm call, but it also means that I have had to adapt to living with financial insecurity. Not having children has relieved me from what was — at the time I was considering motherhood — the terrorising prospect of having to deal with someone else’s sleepless nights as well as my own. And so I remain childless. Relationships have sometimes been blighted, at least in their early stages, by the months it takes for me to accustom myself to having someone beside me in the bed. And then there’s the thumping head and the numbing mental fizz of the day following a sleepless night, which so often coincides with a long journey, an important interview or a deadline of some kind or other, and leads to the inevitable sense that one is never at one’s best when it is most required. But for the most part, I have accommodated to living with my disorder. Acceptance, or perhaps just age, has altered our relationship. The battle of mutually assured destruction it once was has significantly mellowed. We have both stopped wanting to change one another and, like any long-paired couple, we’re now fully, if not always comfortably, adjusted to our life together. Which is just as well given that, whether we like it or not, fate has made us companions for life. For me, insomnia’s greatest gift is the uninterrupted time and mental space it allows for reading and thinking. There’s a freedom to the night, an unconstrained permissiveness. Under cover of darkness, anything goes. Being awake in the night feels like stealing a march on time. Senses sharpen, so does the memory. The air stills and it is as though you have passed into some other, more magical dimension in which earthly rules no longer apply. 
There’s an exploratory feeling to the night, a special magic, as anyone who regularly stays awake through it knows. The night’s sounds, smells and sights are exclusive. The quiet lends itself to brooding, even to epiphany, at the very least to an intense focus, what Seamus Heaney calls ‘the trance’, which can be both alluring and, for creativity, highly fruitful. And so I think and I read. Sometimes I get up and make toast but generally I find it most pleasurable and productive to drift in the still waters between wake and sleep, neither fully alert, nor exactly dozy. For me, the right kind of story, fiction or non-fiction, holds a real promise of sleep. At least, a firm literary resolution can mimic a narrative closure that I’m reluctant to concede in real life. I look back at day’s end, reconstructing the events of the previous 12 hours as narrative. By the time I get into my bed, the day has acquired a fully formed shape that it might have lacked as it went along. Experience has taught me that the less satisfying the narrative of the day has been, the more likely I am to be unable to let it go. My body keeps me awake until my mind has sculpted something more shapely from the day or I am able to distract it with a more engaging narrative borrowed from the pages of a book. The vigilance of the waking hours is over but so is the magic lantern of event. No matter how glad I am to sink into unconsciousness, this always feels like a loss. Children, who are in so many ways more fully alive than us, understand this intuitively, I think. It’s a rare child who volunteers to go to bed: they too need stories, borrowed narratives to persuade them away from the thrills and puzzles of their own day. There have been numerous experiments in which rats have been deprived of sleep. At first, sleeplessness makes them eat more, but within a few days they start to fade and die. Exactly why remains something of a mystery to scientists, though to insomniacs perhaps less so. A little bit of wakefulness goes a long way. The all-night vigil, less frequent in my life than it once was, can be numbingly, despairingly lonely, the hours endless and cheerless. At those times the sound of the first plane of the morning or the tell-tale shaking of the bed announcing the day’s first Tube train can seem like the peal of celebratory bells. And yet the seven- or eight-hour night, which even the best sleepers among us increasingly regard as unattainable, is nothing more than an invention of the industrial revolution. We never used to sleep this way. Two or three hundred years ago, you would go to bed for four hours directly after sundown, rise again for a period of two or three hours to write, read, pray or have sex, then settle down again for what was then known as ‘the second sleep’. In his survey of sleep, At Day’s Close (2005), the American historian Roger Ekirch finds more than 500 literary references to the second sleep from sources as diverse as Homer, Cervantes and Dickens. It appears that this humane and creative arrangement began to be phased out in the late 17th century by the demands of the workplace and the exigencies of modern life. To return to it would be neither practical for most people nor, possibly, even desirable. All the same, I can’t help feeling that we’re losing out. 
A few hours’ sleep followed by a few hours’ wakefulness, a whole night-time of remembered dreams and magic lantern shows, and the time and mental space between in which to record them. Imagine the stories we could tell in the morning. | Melanie McGrath | https://aeon.co//essays/can-t-sleep-then-let-me-tell-you-a-story | |
Cities | Most of us now live in cities, so it is within the metropolis that our future salvation or death warrant will be drafted | Earlier this spring, I found myself walking through Lower Manhattan. There was still a chill in the air; a few flakes of snow had fallen the night before but nothing had settled. As I wandered, I was interested to see whether there were any residual signs of Hurricane Sandy, which had struck the island on 29 October last year, battering the Five Boroughs remorselessly and serving as an unnerving reminder of the fragility of the city. That October night, as the waters rose and started to fill Lower Manhattan, there was proof that not even Wall Street was beyond the reach of nature. And as the lights went out — and stayed out for days afterwards — there was a strange feeling of the possibility of this happening more often, anywhere in the world. Elsewhere in the darkened city, the infrastructure was down. The subway system was flooded. Cars picked up by the rising waters were scattered and dumped like flotsam. In Staten Island, whole neighbourhoods were flattened, while, in Queens, an exploding generator set a fire raging across several blocks. To add to the physical damage were stories of human woe: 300 patients from New York University’s Langone Medical Center had to be sent outdoors when high winds knocked out the emergency power. In Staten Island, a young policeman drowned after leading his family of six adults and a 15-month-old baby to safety, and two boys aged two and four died after being ripped from the arms of their mother, Glenda Moore, as she tried to flee the rising waters in her car. She’d gone in search of help, but nearby residents had refused to open their doors. Hurricane Sandy raises big questions about how we will live in the future. A city mayor, even one as optimistic as New York’s Michael Bloomberg, cannot stand up amid the wreckage and tell people that everything is fine, that it will never happen again. Today we face an ecological challenge of our own making: now that the majority of us live in cities, it is within the metropolis that our future salvation or death warrant will be drafted. For some geographers the future looks bleak. According to Matthew Kahn, professor of economics at the University of California, Los Angeles, and the author of Climatopolis: How Our Cities Will Thrive in the Hotter Future (2010), climate change will have serious consequences for many major cities. As the polar ice cap melts, coastal settlements will be first to feel the flood. San Francisco, London, Rio de Janeiro and New York will all be hit when the waters rise and swamp the lower-lying (and often poorest) communities. San Diego in southern California is especially vulnerable: by 2050, the sea level will be approximately 30-45 cm higher, and temperatures will have risen by an estimated 4.5 degrees. Meanwhile the city will have continued to grow, bringing with it an increased demand for services, including a 35 per cent rise in demand for water. At the same time, and despite rising sea levels, the Colorado River will shrink by around 20 per cent, upping the threat of wildfires in San Diego’s hot, dry hinterlands. Kahn predicts that the city’s population will skew towards the elderly because the young will have a better chance to leave: when the waters rise, society’s most vulnerable will be on the front line. We do not, however, need to rely on speculation to imagine the impact of extreme weather events on the city. 
We have seen this scenario unfold before. On Thursday, 13 July 1995, the temperature in downtown Chicago rose to a record 104°F (40°C), the high point in an unrelenting week of heat. Combined with high humidity, it was so hot that it was almost impossible to move around without discomfort. At the beginning of the week, people made jokes, broke open beers and celebrated the arrival of the good weather. But after seven days and nights of ceaseless heat, according to the Chicago Tribune: Overheated Chicagoans opened an estimated 3,000 fire hydrants, leading to record water use. The Chicago Park District curtailed programs to keep children from exerting themselves in the heat. Swimming pools were packed, while some sought relief in cool basements. People attended baseball games with wet towels on their heads. Roads buckled and some drawbridges had to be hosed down to close properly. Only once the worst of the heatwave had passed were authorities able to audit the damage. More than 700 people died from heat exhaustion, dehydration, or kidney failure, despite warnings from meteorologists that dangerous weather was on its way. Hospitals found it impossible to cope. In a vain attempt to help, one owner of a meatpacking firm offered one of his refrigeration trucks to store the dead; it was so quickly filled with the bodies of the poor, infirm and elderly that he had to send eight more vehicles. Afterwards, the autopsies told a grim, predictable tale: the majority of the dead were old people who had run out of water, or had been stuck in overheated apartments, abandoned by their neighbours. In response to the crisis, a team from the US Centers for Disease Control and Prevention (CDC) scoured the city for the causes of such a high number of deaths, hoping to prevent a similar disaster elsewhere. The results were predictable: the people who died had failed to find assistance or refuge. They had died on their own, without help. In effect, the report blamed the dead for their failure to leave their apartments, ensure that they had enough water, or check that the air conditioning was working. These two scenarios offer a bleak condemnation of our urban future. Natural disasters appear to be inevitable, and yet we seem largely incapable of readying ourselves for the unexpected. What can we do to prepare, and perhaps prevent, coming catastrophe? New York, 2012, and Chicago, 1995, offer alternative reactions to the same dilemma. For New Yorkers, it seems, the response to climate change is a mechanical one. Architects, planners and engineers are currently looking at ways to stop the rising tides and reduce the damage of hurricane conditions. This has already produced a number of innovations, such as the proposal by the Hudson River Foundation and the US Army Corps of Engineers to lay new oyster beds and stabilise the coastline along the Hudson River; and the large-scale scheme initiated by Mayor Bloomberg to create a storm surge barrier across the mouth of New York Harbour. In the meantime, the Federal Emergency Management Agency (FEMA) has produced flood maps of Manhattan and the surrounding boroughs, charting those neighbourhoods vulnerable to rising waters. In part, it comes down to money. Following the hurricane, the damage was quickly audited and found to have cost the city $19 billion ($60 billion, if you include the federal recovery package). 
This, it seems, is a price worth paying for the benefits of living in the city. There have been no stories of mass evacuations, or of families moving away for a quieter life. In addition, the city has been quick to announce that it’s prepared to cover whatever it costs to protect the metropolis in the future. This raises a disquieting question: will there ever be a time when the price for safety in Manhattan gets too high? After the heatwave of 1995, Chicagoans opted for another way of dealing with disaster. On the back of the CDC report and independently of the official scientists, Eric Klinenberg, a professor of sociology at New York University, conducted his own research on the heatwave and focussed on the districts worst hit by the disaster rather than on the behaviours of individuals. His findings, published as Heat Wave: A Social Autopsy of Disaster in Chicago (2002), put an emphasis on neighbourhood failures rather than individual folly. Klinenberg found that there were 40 deaths per 100,000 people in the North Lawndale neighbourhood, while in South Lawndale there were only four deaths per 100,000. North Lawndale was a community in decline; it had nowhere for old people to go, no shops, gathering places, or parks. There was no functioning community to look out for the most vulnerable in times of trouble. The failures of the district had been a long-term problem and could be charted systematically: North Lawndale had lost many residents to the suburbs, and their numbers had not been replenished. By contrast, South Lawndale was a thriving community, its streets bustling with people, its churches offering a range of social activities for seniors. By studying the community as a whole, Klinenberg’s research raised interesting questions about how disasters affect a city. In particular, he challenged the idea that a city’s robustness can be judged on the built environment alone. When so much discussion around cities concerns the bricks and mortar, roads and infrastructure, it is easy to forget that cities are made out of people who live and thrive in the spaces between buildings. Klinenberg argued that the robust community was as essential a defence against disaster as any engineering solution, and that social solutions to disaster are ones that politicians and planners ignore at their peril. Disasters are complex things that, after the event, can be read in many different ways. But since the predictably unpredictable occurrence – the so-called ‘black swan event’– has now become an inevitability, many urban planners and environmental scientists have started to discuss the fragility of the city as an urgent issue. The idea of a city being ‘fragile’ might seem ridiculous – the wrong metaphor for the metropolis. But in Antifragile: Things That Gain from Disorder (2012), former Wall Street trader Nassim Nicholas Taleb builds on The Black Swan (2007), his earlier ground-breaking book, which examined the potentially devastating power of unexpected events. Taleb proposes a way of measuring the world that designates organisations, cities, communities and businesses as ‘fragile’ or ‘anti-fragile’. Anything that survives (and thrives) through adversity is non-fragile: it has ‘the singular property of allowing us to deal with the unknown, to do things without understanding them — and to do them well’. If we cannot predict what’s around the corner, Taleb argues, we must at least be ready for any eventuality. We must, in other words, cultivate resilience. 
The term ‘resilience theory’ was coined in the early 1970s by the Canadian ecologist C S ‘Buzz’ Holling, who was fascinated by the relationship between ecology and complexity. Looking at models of how things change, Holling hoped to find the hidden laws that underpin disturbance – whether out of the blue, like fires or explosions, or occurring more slowly, while being similarly transformative. As Taleb would later do, Holling attempted to measure resilience as a quality. For example, he suggested that robustness could be judged by the amount of time it takes for an equilibrium to recover, or else by the strength of its rigidity — ‘the capacity of a system to absorb disturbance and reorganise while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks’. He nominated four key factors that underpin resilience: latitude, or how far the system can be attacked; resistance, or the rigidity of the structure to resist change; precariousness, or the fragility of the current state of the system; and finally panarchy, which refers to the degree to which different systems connect and interact. Holling’s ideas have proved powerful tools in a number of arenas: ecology, complexity and, perhaps not surprisingly, urban planning. For a couple of decades now, urban thinkers have been using complexity theory to understand how cities work. It was the late Jane Jacobs, perhaps the most influential writer on the modern city, who wrote in The Death and Life of Great American Cities (1961): Under the seeming disorder of the old city, wherever the old city is working successfully, is a marvelous order for maintaining the safety of the streets and the freedom of the city. It is a complex order…composed of movement and change, and although it is life, not art, we may fancifully call it the art of the city and liken it to the dance. In addition to a growing understanding of the city as an expression of complexity, it is now increasingly common to study the metropolis as an ecosystem, in which, most obviously, the city is on the frontline of the climate change debate. As such, it is an urgent testing ground for ideas about resilience, even if the urgency presents a tension between human and ecological concerns. Cities are where most people live: in 2007, the United Nations announced that by the end of 2008 more than 50 per cent of the world’s population would be urban, rising to 75 per cent by 2050. Yet cities are also where the vast majority of the world’s energy resources are burned up. For example, New York generates more greenhouse gases per square foot, uses more energy, and produces more waste than any other place in America of a comparable size. It may be a relief to realise that, counter-intuitively, living in the city is perhaps greener than many imagine. Despite the statistics above, New Yorkers individually are more energy efficient, emit less carbon and produce less waste than the average person outside the city. In 2009, the New Yorker journalist David Owen revealed the astonishing truth that living in New York was a more environmentally friendly lifestyle than living in the suburbs or even the countryside. He showed that, contrary to most assumptions, when people live in close proximity to each other, in a walkable environment, they actually become far more environmentally efficient. 
Despite the heavy energy usage and the costs of transportation within the city, it is nonetheless a remarkably smart way of gathering people together. Whether the city becomes our ark rather than our concrete coffin is up to us. Most urban thinkers view planning for a resilient city as either a political or a technical conundrum – a matter of top-down diktat (a kind of austerity drive writ large), or else a frenzy of innovation that will help us use less energy, live more efficiently, prepare and protect us from the next black swan. But increasingly they feel there is more robustness to be gained from investing in smart, small-scale, communications-oriented solutions rather than mammoth building projects. Flood controls such as the Thames Barrier in London, and the MOSE system of barriers across the Venice lagoon, have their role to play; but resilience is less about brute resistance than about staying one step ahead of disaster, using smart-tech avoidance mechanisms. Already, internet technology and big data are aiding resilience. Google has modelled sophisticated means for tracking the spread of flu by charting search queries on its engine, while the Twitter Earthquake Detector has been used to link seismometers to social media to assess the scale and impact of a tremor. Other big players, including IBM, Cisco, Siemens, Accenture, McKinsey, and Booz Allen, are entering the debate on the intelligent city and developing tools to bring efficiency and sustainability into a connected metropolis. They are talking to city halls and offering end-to-end solutions — the complete package to retrofit the everyday city for the 21st century, coupled with a very hard sell. In one scenario, the city itself becomes the computer, in keeping with the rules of the information age. In this new, connected city, real-time information monitors and regulates the urban fabric, deploying buildings, objects and traffic lights as sensors and activators. In the words of Carlo Ratti, Director of the SENSEable City Lab at MIT, it will function like ‘a computer in the open air’. A sentient place, the city will not just gather information but change and react to feedback. Commenting on these developments without any apparent sense of irony, Assaf Biderman, the SENSEable City Lab’s Associate Director, says that smart technologies can make ‘cities more human’. The SENSEable City Lab is at the forefront of developing a number of tools to help us navigate the city. Take the Copenhagen Wheel (first used on bicycles in the Danish capital in 2009) – a device that both recycles the energy generated through pedal power and contains sophisticated sensors that can be used to inform the rider about oncoming traffic, the condition of the roads or the best route around a diversion. Since 2011, SENSEable City Lab scientists have also been testing an open platform that pools all the information generated and needed by the city-state of Singapore. ‘LIVE Singapore!’ taps data from various government departments to produce a real-time data feed of everything going on in the city, which can be accessed on multiple devices by politicians, traffic police and ordinary citizens. The project already boasts an isochronic map that shows how long it takes to move about Singapore at various times of day. The data feed can also improve the distribution of taxis around the city, and model how big events such as the Singapore Formula One Grand Prix might disrupt city infrastructure. 
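For readers unfamiliar with the term, an isochronic map is, at bottom, a grouping of places by how long they take to reach from a chosen origin at a given time of day. The sketch below, with invented places and travel times (the essay gives no detail of how ‘LIVE Singapore!’ actually computes its version), shows the basic operation such a map performs:

```python
# Minimal sketch of an isochronic grouping; the place names and minutes are made up.
from collections import defaultdict

travel_times = [          # (place, estimated minutes from the origin at, say, 8am)
    ("Marina Bay", 8), ("Chinatown", 14), ("Orchard Road", 17),
    ("Jurong East", 33), ("Changi", 41), ("Woodlands", 52),
]
bands = [15, 30, 45, 60]  # isochrone thresholds in minutes

def band_for(minutes):
    """Return the smallest time band that contains this travel time."""
    for limit in bands:
        if minutes <= limit:
            return f"within {limit} min"
    return f"over {bands[-1]} min"

isochrones = defaultdict(list)
for place, minutes in travel_times:
    isochrones[band_for(minutes)].append(place)

for limit in bands:
    label = f"within {limit} min"
    if label in isochrones:
        print(label, "->", ", ".join(isochrones[label]))
```

A real platform would derive those minutes from live transport data and recompute them through the day; the grouping step itself is this simple.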
In the humanised city envisaged by scientists and planners, we will live in smart buildings and drive smart cars. Data will be collected every time we use public transport, as London’s Oyster Card system does now. Our cars will be able to alert the garage to a fault. In fact, Mercedes-Benz has already equipped its 2013 models with a telematic system that communicates in this way and that upgrades its software on the move. Most cities already have traffic lights rigged with sensors that detect congestion and traffic flow; and most modern office blocks are now smart buildings that can regulate internal temperature and lighting automatically. Meanwhile, face recognition software, unreliable at present, might one day be used for everything from banking to security. All these innovations exploit the link between robustness and sustainability, helping us reduce energy use, better utilise our resources, and build better homes. In contrast to future-proofing existing cities, some urban theorists are excited by the possibility of starting all over again. Masdar City, designed by the leading British architect Sir Norman Foster in the desert outside Abu Dhabi, is currently being built by the Abu Dhabi Future Energy Company. Foster has always pioneered technical solutions to questions of sustainability, investing in new materials and in designs that reduce energy use, while always remaining visually innovative. At Masdar, he takes the idea of the sustainable city to its current limits. [Image caption: ‘Future shock: Masdar City’s PRT (Personal Rapid Transit) station.’] Masdar is being built over 17 sq km, with ambitions to be a zero-carbon environment — the epitome of cutting-edge sustainable design. Phase One of the build sits upon a raised base so as to benefit from desert breezes, which create a natural cooling system. There’s argon gas insulation between the rammed earth and the buildings’ steel walls, and photovoltaic piles outside the city to generate energy. One of Foster’s most futuristic ideas is the driverless personal rapid transport pods that run along rails through the underground level of the city. This allows for a tighter street grid, which once again helps keep the ground level shady and cool. Despite their dazzle, many of the gleaming innovations discussed here suffer from a problem that cyber critic Evgeny Morozov, author of The Net Delusion (2011), calls the ‘solutionism’ of the Information Age — the prophetic notion that, if it were only allowed to do so, technology could deliver complete answers to our every problem. But it is not that simple. Resilience in the face of natural disasters is more than just a technical aspiration: it is a social one. As recent climate change summits in Kyoto, Copenhagen, and Rio have shown, the level of trust between nations to put environmental standards before profit is very low. Which perhaps suggests that it’s not nations but cities — already the locus of where the future will be won — that might be the engines of change. Cities can organise themselves, and among themselves. The C40 group of cities – created in 2005 to unite leading world centres in combatting climate change – has now published its goals and projections for reducing carbon emissions: Buenos Aires aims to reduce emissions by a third by 2030; Madrid, to halve them by 2050; while Chicago hopes to slash emissions by 80 per cent by 2050. 
In addition to carbon emissions, the C40 group highlights eight key areas to prioritise — building, energy, lighting, ports, renewables, transport, waste and water — and its members are calling for radical solutions. So far, responses have been diverse. In the US, San Francisco has begun planning the largest city-owned solar-power station in the world. In Norway, Oslo has introduced 10,000 ‘intelligent’ street lamps that have cut energy consumption by 70 per cent and saved 1,440 tons of CO₂ for each year of operation since 2004. In South Africa, the Emfuleni city authorities have installed a citywide water efficiency system that has reduced the pressure within the network, thus reducing costly leakages. All this is proof, at least, that city governments acknowledge the issues and are searching for interesting solutions, regardless of national policy. In 2012, the British government released its own document on resilience. The UK Climate Change Risk Assessment covers all areas of threat, including agriculture, business, health, coastal erosion, transport and the built environment, acknowledging that the major threats to Britain’s cities are flooding, overheating, subsidence and the urban heat-island effect, which causes inner cities to be two or three degrees hotter than surrounding rural areas. What the report makes clear is that the city faces not a single threat, but a complex combination of concerns that defy any single solution. Our infrastructure is so knotted that energy, water, waste management, transport, and information and communications technology are inseparable. Since Hurricane Sandy, Mayor Bloomberg has set up an Office of Policy and Strategic Planning (dubbed, by the New York Times, the office of ‘New Yorkology’), which is using big data to help the city run more efficiently. The team has gathered vast quantities of data about the 8 million citizens within the Five Boroughs, using information archived in the city’s data store, as well as information from millions of calls to the 311 phone line — from complaints about noise, parking and housing problems to real-time subway information — to make better policy decisions, and allocate resources when and where they are most needed. This move comes on top of Bloomberg’s ‘Greener, Greater Buildings’ programme, launched last May to retrofit the city’s existing buildings so that they emit as little carbon as possible. Both schemes covertly assert New York’s independence, suggesting that the city — with its very different needs and challenges — sets its own agenda and standards. Mayors, it seems, have worked out that green and pleasant cities boost popularity, making people want to visit, work and live there. Not least, the environmentally friendly city is one that addresses the problems of resilience, because in promoting sustainability it enhances community. But is this enough? In spite of Mayor Bloomberg’s pre-Hurricane Sandy measures, whereby 25 city agencies set a focussed agenda for the Five Boroughs that covered every aspect of New York, from housing, parks, water, waste and air quality, to how New York could remain competitive in a global marketplace, the waters still rose. And what became clear in the aftermath was that the disaster was different for different people. It underscored the gulf between the haves and the have-nots. 
Where some experienced the rising waters as little less than a hindrance, others had their lives devastated. The day after the disaster, joggers ran their usual route round Central Park, while activists working under the banner of ‘Occupy Sandy’ were busy forming rescue and construction teams to help in the flattened areas of Staten Island. Politicians and engineers rarely talk about trust when discussing the creation of a robust community, perhaps because we are constantly being told that trust is a busted currency. Recent polls suggest that our levels of trust in politicians, journalists, lawyers, and police are in decline, and that society is fragmenting as a result. But that is just one definition of trust; it fails to embrace how trust governs our lives, tempers our dealings with one another, and is the glue that keeps our cities together. We might no longer trust politicians but we do still trust. According to Eric Uslaner, professor of government and politics at the University of Maryland, trust is not the result of business deals or social interaction; instead it is hard-wired into our social selves and our basic belief in the goodwill of others. It is not an expedient or calculated quality. In Uslaner’s view, ‘A trusts B to do X’ is replaced by ‘A trusts’. Trust is thus connected to optimism, confidence, tolerance and well-being, not just strategic reckoning. It leads to voluntary social engagement, which in turn leads to the promotion of equality and democracy. The generalised truster, says Uslaner, is ‘happier in their personal lives and believe[s] that they are masters of their own fate. They are tolerant of people who are different from themselves and believe that dealing with strangers opens up opportunities more than it entails risks.’ This kind of trust often emerges in unexpected situations, prompting us to rethink what actually happens during moments of disaster. In A Paradise Built in Hell (2009), written after Hurricane Katrina broke the levees and flooded New Orleans in 2005, the writer and cultural critic Rebecca Solnit looked at the spontaneous proliferation of cooperation amid calamity. ‘When all the ordinary divides and patterns are shattered,’ she wrote, ‘people step up — not all, but the great preponderance — to become their brother’s keeper’. In a series of case studies spanning the San Francisco earthquake of 1906 to Katrina in 2005, Solnit cites numerous instances of communality in adversity. One example: as hundreds of people, post-Katrina, were left abandoned and in desperate straits, it was not the officials, police or even FEMA that assumed control, searching out the most needy. Quite the reverse. As President George W Bush circled above the flood zone in Air Force One, the authorities were fully armed and spreading stories of rampaging looters and rapists. Meanwhile, ordinary citizens braved the floodwaters, using canoes to ferry their neighbours to safety, and rescuing abandoned pets; while eight doctors and 30 nurses remained behind in New Orleans’s flooded City Jail to tend to vulnerable prisoners. Yet such resilience-building trust cannot be taken for granted: it ebbs and flows with the levels of inequality found within a community. And inequality destroys trust. It has become a truism to note that our cities are blindingly unequal. 
Cities are where you find the super-rich and the long-term impoverished crammed together side by side. They are where wealth is made, and poverty both prevalent and hidden. In London, for example, the richest 10 per cent of the city is 850 times wealthier than the bottom 10 per cent. What hope has elective trust of thriving in such an environment? And what hope disaster planning, given that most disasters hit cities disproportionately, and that the poorest and most precarious groups suffer hardest? In the wake of Hurricane Sandy, there has been little discussion of the social lessons of the disaster, as opposed to the latest smart solutions. Most tellingly, FEMA’s map of the vulnerable neighbourhoods showed that, Lower Manhattan aside, the most affected areas were also the poorest ones, least able to protect themselves. With insurance premiums rising in these areas, and with those who can moving away, these communities will become increasingly precarious. Arguably, the most interesting post-hurricane statistics were gathered by Occupydata NYC, the activist group behind the Occupy Sandy relief effort: more than 3,400 volunteers turned up to help in the days after the flooding; 27,000 meals were served from makeshift relief kitchens; 60 households opened up their homes to help the newly homeless; a motor pool was created to transport aid to where it was needed; and teams of helpers dispersed across Staten Island to help with repairs. The group soon became essential to the relief administration, often working faster than the city government itself. This example reminds us of the dangers of judging the robustness of a city by how well the buildings stand up against the wind and the waves, or whether the trains and planes keep on schedule. These things are important for the normal running of a city, but they do not hold the key to resilience. If we plan on building resilience and disaster-proofing the metropolises of the future, then we have to think just as hard about tackling inequality and augmenting the social strength of the community as we do about engineering, smart technology and eco-sustainability. | Leo Hollis | https://aeon.co//essays/is-technology-the-key-to-future-proofing-our-cities | |
Future of technology | New technologies are emerging that could radically reduce our need to sleep - if we can bear to use them | Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day. To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in. As with most human behaviours, it’s hard to tease out our biological need for sleep from the cultural practices that interpret it. The practice of sleeping for eight hours on a soft, raised platform, alone or in pairs, is actually atypical for humans. Many traditional societies sleep more sporadically, and social activity carries on throughout the night. Group members get up when something interesting is going on, and sometimes they fall asleep in the middle of a conversation as a polite way of exiting an argument. Sleeping is universal, but there is glorious diversity in the ways we accomplish it. Different species also seem to vary widely in their sleeping behaviours. Herbivores sleep far less than carnivores — four hours for an elephant, compared with almost 20 hours for a lion — presumably because it takes them longer to feed themselves, and vigilance is selected for. As omnivores, humans fall between the two sleep orientations. Circadian rhythms, the body’s master clock, allow us to anticipate daily environmental cycles and arrange our organ’s functions along a timeline so that they do not interfere with one another. Our internal clock is based on a chemical oscillation, a feedback loop on the cellular level that takes 24 hours to complete and is overseen by a clump of brain cells behind our eyes (near the meeting point of our optic nerves). Even deep in a cave with no access to light or clocks, our bodies keep an internal schedule of almost exactly 24 hours. This isolated state is called ‘free-running’, and we know it’s driven from within because our body clock runs just a bit slow. When there is no light to reset it, we wake up a few minutes later each day. It’s a deeply engrained cycle found in every known multi-cellular organism, as inevitable as the rotation of the Earth — and the corresponding day-night cycles — that shaped it. Human sleep comprises several 90-minute cycles of brain activity. In a person who is awake, electroencephalogram (EEG) readings are very complex, but as sleep sets in, the brain waves get slower, descending through Stage 1 (relaxation) and Stage 2 (light sleep) down to Stage 3 and slow-wave deep sleep. After this restorative phase, the brain has a spurt of rapid eye movement (REM) sleep, which in many ways resembles the waking brain. Woken from this phase, sleepers are likely to report dreaming. One of the most valuable outcomes of work on sleep deprivation is the emergence of clear individual differences — groups of people who reliably perform better after sleepless nights, as well as those who suffer disproportionately. The division is quite stark and seems based on a few gene variants that code for neurotransmitter receptors, opening the possibility that it will soon be possible to tailor stimulant variety and dosage to genetic type. 
Around the turn of this millennium, the biological imperative to sleep for a third of every 24-hour period began to seem quaint and unnecessary. Just as the birth control pill had uncoupled sex from reproduction, designer stimulants seemed poised to remove us yet further from the archaic requirements of the animal kingdom. Any remedy for sleepiness must target the brain’s prefrontal cortex. The executive functions of the brain are particularly vulnerable to sleep deprivation, and people who are sleep-deprived are both more likely to take risks, and less likely to be able to make novel or imaginative decisions, or to plan a course of action. Designer stimulants such as modafinil and armodafinil (marketed as Provigil and Nuvigil) bring these areas back online and are highly effective at countering the negative effects of sleep loss. Over the course of 60 hours awake, a 400mg dose of modafinil every eight hours reinstates rested performance levels in everything from stamina for boring tasks to originality for complex ones. It staves off the risk propensity that accompanies sleepiness and brings both declarative memory (facts or personal experiences) and non-declarative memory (learned skills or unconscious associations) back up to snuff. It’s impressive, but also roughly identical to the restorative effects of 20 mg of dextroamphetamine or 600 mg of caffeine (the equivalent of around six coffee cups). Though caffeine has a shorter half-life and has to be taken every four hours or so, it enjoys the advantages of being ubiquitous and cheap. For any college student who has pulled an all-nighter guzzling energy drinks to finish an essay, it should come as no surprise that designer stimulants enable extended, focused work. A more challenging test, for a person wired on amphetamines, would be to successfully navigate a phone call from his or her grandmother. It is very difficult to design a stimulant that offers focus without tunnelling – that is, without losing the ability to relate well to one’s wider environment and therefore make socially nuanced decisions. Irritability and impatience grate on team dynamics and social skills, but such nuances are usually missed in drug studies, where they are usually treated as unreliable self-reported data. These problems were largely ignored in the early enthusiasm for drug-based ways to reduce sleep. They came to light in an ingenious experimental paradigm designed at the government agency Defence Research and Development Canada. In 1996, the defence psychologist Martin Taylor paired volunteers and gave each member of the duo a map. One of the two maps had a route marked on it and the task was for the individual who had the marked map to describe it accurately enough for their partner to reproduce it on their map. Meanwhile, the researchers listened in on the verbal dialogue. Control group volunteers often introduced a landmark on the map by a question such as: ‘Do you see the park just west of the roundabout?’ Volunteers on the stimulant modafinil omitted these feedback requests, instead providing brusque, non-question instructions, such as: ‘Exit West at the roundabout, then turn left at the park.’ Their dialogues were shorter and they produced less accurate maps than control volunteers. What is more, modafinil causes an overestimation of one’s own performance: those individuals on modafinil not only performed worse, but were less likely to notice that they did. 
One reason why stimulants have proved a disappointment in reducing sleep is that we still don’t really understand enough about why we sleep in the first place. More than a hundred years of sleep deprivation studies have confirmed the truism that sleep deprivation makes people sleepy. Slow reaction times, reduced information processing capacity, and failures of sustained attention are all part of sleepiness, but the most reliable indicator is shortened sleep latency, or the tendency to fall asleep faster when lying in a dark room. An exasperatingly recursive conclusion remains that sleep’s primary function is to maintain our wakefulness during the day. Since stimulants have failed to offer a biological substitute for sleep, the new watchword of sleep innovators is ‘efficiency’, which means in effect reducing the number of hours of sleep needed for full functionality. The Defense Advanced Research Projects Agency (DARPA) – the research arm of the US military – leads the way in squeezing a full night’s sleep into fewer hours, by forcing sleep the moment head meets pillow, and by concentrating that sleep into only the most restorative stages. Soldiers on active duty need to function at their cognitive and physiological best, even when they are getting only a few hours’ sleep in a 24-hour cycle. Nancy Wesensten, a psychologist for the Center for Military Psychiatry and Neuroscience at the Walter Reed Army Institute of Research in Maryland, has a mission to find ways to sustain soldier operations for longer, fighting the effects of acute or chronic sleep deprivation. She has argued that an individual’s sleep should be regarded as an important resource, just like food or fuel. Working with the Marine Corps, Wesensten is not trying to create a super warrior who can stay awake indefinitely. She does not even see herself trying to enhance performance, as she already considers her subjects the elite of the elite. Everyone has to sleep eventually, but the theatre of war requires soldiers to stay awake and alert for long stretches at a time. Whereas the US Army and Air Force have a long history of adopting stimulants — pioneering modafinil applications and dextroamphetamine use in 24-hour flights — the Marines generally will not accept any pharmacological intervention. Like Wesensten, Chris Berka, the co-founder of Advanced Brain Monitoring (ABM), one of DARPA’s research partners, told me that she is cautious about the usefulness of stimulants: ‘Every so often, a new stimulant comes along, and it works well, and there’s a lot of interest, and then you don’t hear anything more about it, because it has its limitations.’ Some failed Air Force missions have drawn attention to the dangers of amphetamine-induced paranoia. Less than a decade after a 1992 Air Force ban on amphetamines, ‘go pills’ were quietly reintroduced to combat pilots for long sorties during the war in Afghanistan. On 17 April 2002, Major Harry Schmidt, who had trained as a top gun fighter pilot, was flying an F-16 fighter jet over Kandahar. Canadian soldiers below him were conducting an exercise, and controllers told Schmidt to hold his fire. Convinced he was under attack, the speed-addled pilot let loose and killed four Canadian soldiers. The friendly fire incident resulted in a court martial, but in the media it was the drugs that were on trial. 
With military personnel in mind, ABM has developed a mask called the Somneo Sleep Trainer that exploits one- or two-hour windows for strategic naps in mobile sleeping environments. Screening out ambient noise and visual distractions, the mask carries a heating element around the eyes, based on the finding that facial warming helps send people to sleep. It also carries a blue light that gradually brightens as your set alarm time approaches, suppressing the sleep hormone melatonin for a less groggy awakening. Sleep ideally contains multiple 60- to 90-minute cycles, from slow-wave sleep back up to REM, but a 20-minute nap is all about dipping into Stage 2 as quickly as possible. The idea of the Somneo is to fast-track through Stage 1 sleep, a gateway stage with few inherent benefits, and enter Stage 2, which at least restores fatigued muscles and replenishes alertness. For Marines at Camp Pendleton near San Diego, four hours of sleep or less is one of the rigours of both basic and advanced training. As a character-building stressor, night after night of privation is a personal endurance test but, as Wesensten has argued, it runs counter to other goals of their training, such as learning how to handle guns safely, and then remembering that information in a month’s time. Berka agrees. ‘We demonstrated cumulative effects of chronic sleep deprivation, even prior to deployment, and it was having an impact on learning and memory,’ she explained, after ABM had brought brain-monitoring devices into the camp for 28 days of measurement. ‘It was defeating the purpose of training for new skill sets, and command acknowledged this was important.’ It’s not cheap to equip dozens of trainees with night goggles and train them to distinguish foes from friends — all the while paying out salaries. The Somneo mask is only one of many attempts to maintain clarity in the mind of a soldier. Another initiative involves dietary supplements. Omega-3 fatty acids, such as those found in fish oils, sustain performance over 48 hours without sleep — as well as boosting attention and learning — and Marines can expect to see more of the nutritional supplement making its way into rations. The question remains whether measures that block short-term sleep deprivation symptoms will also protect against its long-term effects. A scan of the literature warns us that years of sleep deficit will make us fat, sick and stupid. A growing list of ailments has been linked to circadian disturbance as a risk factor. Both the Somneo mask and the supplements — in other words, darkness and diet — are ways of practising ‘sleep hygiene’, or a suite of behaviours to optimise a healthy slumber. These can bring the effect of a truncated night’s rest up to the expected norm — eight hours of satisfying shut-eye. But proponents of human enhancement aren’t satisfied with normal. Always pushing the boundaries, some techno-pioneers will go to radical lengths to shrug off the need for sleep altogether. Charles ‘Chip’ Fisher, an entrepreneur from New York, sits in front of a full bookcase, hands folded, ready to pitch his product to the internet. On a polished dark wood table in front of him rests the device, consisting of a power source that delivers electrical current to two spongy yellow spheres. 
To begin the online instructional video, Fisher dips the sponges in a glass of water and tucks them, dripping, under a headband, just above his sideburns. The device is dialled up, and Fisher blinks calmly into the camera as the pulses penetrate his skull to the prefrontal cortical area of his brain. What distinguishes his device — FDA approved since 1991 — from the kind of quack products flogged to impulse-buyers is that it is staggeringly effective at treating insomnia, among other ailments. It’s also part of a new class of armament in the war against sleep. Fisher is the president of Fisher Wallace Laboratories of Madison Avenue in New York, and the consumer electronics industry has been a family affair for him since the golden age of the vacuum tube, when his father’s company marketed the ubiquitous Fisher Radio receivers. His pitch has all the trappings of a late-night infomercial — the testimonials, the money-back guarantee, the clips from mid-tier CBS television shows — every kind of emotional argument likely to sway a rationalist away from a purchase. Fisher acquired the patent for a transcranial stimulation device from the brothers Saul and Bernard Liss, both electrical engineers from the Massachusetts Institute of Technology. He sees the body as a collection of materials, some more conductive and others more resistant to electricity. ‘The need to pierce bone and skull means we need a higher carrier frequency, which is the 15,000 Hz frequency. That’s combined with 500 Hz and 15 Hz,’ Fisher told me. ‘It took eight to 12 years to derive those values. The body is influenced by frequencies between zero and 40 Hz.’ Those searching for a treatment for insomnia are Fisher’s biggest and fastest-growing market. Someone with intractable insomnia will try just about anything to get some sleep. Transcranial direct-current stimulation (tDCS) is a promising technology in the field of sleep efficiency and cognitive enhancement. Alternating current administered to the dorsolateral prefrontal cortex through the thinnest part of the skull has beneficial effects almost as mysterious as electroconvulsive therapy (ECT), its amnesia-inducing ancestor. Also known as ‘shock therapy’, ECT earned a bad name through overuse, epitomised in Ken Kesey’s novel One Flew Over the Cuckoo’s Nest (1962) and its 1975 film adaptation, but it is surprisingly effective in alleviating severe depression. We don’t really understand why this works, and even in today’s milder and more targeted ECT, side effects make it a last resort for cases that don’t respond to drug treatment. In contrast to ECT, tDCS uses a very mild charge, not enough directly to cause neurons to fire, but just enough to slightly change their polarisation, lowering the threshold at which they do so. Electrodes on the scalp above the hairline, in line with the temples, deliver a slight, brief tingling, after which there is no sensation of anything amiss. ‘We use that tingling feeling to create our sham paradigm,’ Andy McKinley of the US Air Force Research Laboratory’s Human Effectiveness Directorate told me. ‘The control subjects receive only a few seconds of stimulation — not enough to have any cognitive effects but enough to give them the same sensation on their skin.’ After a half-hour session of the real treatment, subjects are energised, focused and keenly awake. 
They learn visual search skills at double the speed, and their subsequent sleep — as long as it does not fall directly after the stimulation session — is more consolidated, with briefer waking periods and longer deep-sleep sessions. To combat insomnia, this type of treatment is used daily in two-week sessions, according to clinical recommendations by Richard Brown, professor of psychiatry at Columbia University College of Physicians and Surgeons. The mechanism might lie in its anti-anxiety effects: patients familiar with Xanax or Valium describe their post-tDCS mood as a clear-headed version of taking these medications. Negative effects on the brain have not yet been observed, and the FDA has approved some devices, such as the Fisher Wallace Stimulator, for unsupervised home use, but long-term effects are still unknown. The neurologist Soroush Zaghi and his team at Harvard Medical School are on the trail of how, exactly, these clinical outcomes are achieved. Once this is established, potential dangers will be easier to look for. Using a slightly different technique — transcranial magnetic stimulation (TMS), which directly causes neurons to fire — neuroscientists at Duke University have been able to induce slow-wave oscillations, the once-per-second ripples of brain activity that we see in deep sleep. Targeting a central region at the top of the scalp, slow-frequency pulses reach the neural area where slow-wave sleep is generated, after which the slow-wave activity propagates to the rest of the brain. Whereas the Somneo mask is designed to send its wearers into a light sleep faster, TMS devices might be able to launch us straight into deep sleep at the flip of a switch. Full control of our sleep cycles could maximise time spent in slow-wave sleep and REM, ensuring full physical and mental benefits while cutting sleep time in half. Your four hours of sleep could feel like someone else’s eight. Imagine being able to read an extra book every week — the time adds up quickly. The question is whether the strangeness of the idea will keep us from accepting it. If society rejects sleep curtailment, it won’t be a biological issue; rather, the resistance will be cultural. The war against sleep is inextricably linked with debates over human enhancement, because an eight-hour consolidated sleep is the ultimate cognitive enhancer. Sleepiness and a lack of mental focus are indistinguishable, and many of the pharmaceutically based cognitive enhancers on the market work to combat both. If only it were possible for the restorative functions that happen during sleep to occur simply during waking hours instead. One reason why we need to shut down our conscious selves to perform routine maintenance is that our visual system is so greedy. Glucose metabolism is a zero-sum game, and functional MRI studies show a radically different pattern of glucose metabolism during sleep, with distinct regions activated either in active or sleep states but not in both. As soon as we close our eyes for sleep, a large proportion of available energy is freed up. Just as most planes must be grounded to refuel, we must be asleep to restore our brains for the next day. A radical sleep technology would permit the equivalent of aerial refuelling, which extends the range of a single flight (or waking day). 
Such attempts are likely to meet with powerful resistance from a culture that assumes that ‘natural’ is ‘optimal’. Perceptions of what is within normal range dictate what sort of human performance enhancement is medically acceptable, above which ethics review boards get cagey. Never mind that these bell curves have shifted radically throughout history. Never mind that if we are to speak of maintaining natural sleep patterns, that ship sailed as soon as artificial light turned every indoor environment into a perpetual mid-afternoon in May. Our contemporary sleep habits are not in any sense natural, and ancestral human sleeping patterns would be very difficult to integrate into modern life. In the 1990s, the psychiatrist Thomas Wehr of the National Institute of Mental Health in Maryland put subjects on a natural lighting schedule and observed complex sleeping rhythms. Falling asleep at dusk and waking at dawn, volunteers experienced a sort of anti-nap in the middle of the night — a two-hour period of quiet, meditative repose during which prolactin levels spiked. This is backed up by historical records from pre-industrial times: early modern English households observed ‘first sleep’ and ‘second sleep’, with the time in between used to pray or socialise with family members. Human enhancement is now being driven by military imperatives, at least in the US, because civilian society is more conservative in its approach. Dedicated divisions such as the US Air Force’s Human Effectiveness Directorate try to make humans better at what they do naturally. It’s a missed opportunity for a society-wide push to understand and reduce our need to power the brain down for hours every day. Every hour we sleep is an hour we are not working, finding mates, or teaching our children; if sleep does not have a vital adaptive function to pay for its staggering opportunity cost, it could be ‘the greatest mistake the evolutionary process ever made’, in the words of Allan Rechtschaffen, the pioneering sleep researcher and professor of psychiatry at the University of Chicago. In her award-winning Beggars trilogy of the 1990s, the American science fiction writer Nancy Kress posited a world in which genetic modification has become de rigueur. One of these ‘genemods’ — cooked up by gifted children let loose in a lab — eliminates sleep and even bucks the sci-fi convention of dire side effects, instead endowing the fortunate Sleepless with bonuses of greater intelligence and emotional stability. The side effects are, instead, societal — the unevenly distributed technology becomes the basis of a social schism, in which a perpetually productive elite rules a sleep-dependent majority of Livers. Kress presciently anticipated the ethical implications of our emerging era of what the neuroscientist Roy Hamilton of the University of Pennsylvania has dubbed ‘cosmetic neuroscience’, or the tailoring of our ancient brains to suit our modern demands. Should technologies such as tDCS prove safe and become widely available, they would represent an alternate route to human longevity, extending our conscious lifespan by as much as 50 per cent. Many of us cherish the time we spend in bed, but we don’t consciously experience most of our sleeping hours — if they were reduced without extra fatigue, we might scarcely notice a difference except for all those open, new hours in our night-time existence. Lifespan statistics often adjust for time spent disabled by illness, but they rarely account for the ultimate debilitation: lack of consciousness. 
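The arithmetic behind that 50 per cent figure is worth spelling out; a quick check, assuming the conventional eight hours of sleep in every 24 (the essay does not state the assumption explicitly), also shows where the ‘150 per cent’ in the closing line comes from:

```python
# Back-of-the-envelope check, assuming eight hours of sleep per 24-hour day.
waking_hours_now = 24 - 8        # 16 conscious hours per day
waking_hours_sleepless = 24      # if the need for sleep were eliminated

gain = (waking_hours_sleepless - waking_hours_now) / waking_hours_now
print(f"extra conscious life gained: {gain:.0%}")                                      # 50%
print(f"waking day relative to today: {waking_hours_sleepless / waking_hours_now:.0%}")  # 150%
```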
Now a life lived at 150 per cent might be within our grasp. Are we brave enough to choose it? | Jessa Gamble | https://aeon.co//essays/technology-to-cut-down-on-sleep-is-just-around-the-corner | |
Biology | Simplistic ideas of how genes ‘cause’ traits are no longer viable: life is an orderly collection of uncertainties | DNA is a metaphor for our age. It conveys the powerful idea that our identity is scientifically reducible to an unambiguous, determinative code. We hear this idea expressed all the time. The car company Bentley advertises for employees saying: ‘Hard work is in our DNA.’ The footballer David Beckham says: ‘Football is in England’s DNA.’ And a toll-collector for the Golden Gate Bridge in San Francisco says: ‘Our DNA is embedded in this bridge.’ Everyone knows these statements aren’t literally true, but although we might understand their figurative meaning, they continue to reflect, and influence, how we think. Even biologists, being quite human, too often think metaphorically and assign properties to genes that genes don’t have. The metaphor works because our society has a deeply embedded belief in genes as clearly identifiable material things which explain our individual natures, making them inherent from the moment of our conception and thus predictable. If hydrogen and oxygen are the causal atoms of water, genes are the causal atoms of our existence. And we’re surrounded. News stories appear every week announcing the discovery of a gene ‘for’ this trait or that. Direct-to-consumer (DTC) genetic testing and ancestry determination companies are thriving, because consumers believe their genes will tell them more about their ancestry than family stories can. They also want to know whether they are fated to suffer from particular diseases, and they believe that this too is written in their genes. Sperm banks suggest that prospective parents consider a potential donor’s hobbies, the languages he speaks, his favorite foods, or his educational attainment, as though these traits are written in his sperm. But try or wish as we might, the idea that everything about us is reducible to genes is not supported by real-world observations. Indeed, a simplistic picture of genes as individual causal things with straightforward effects is out of date in many ways. For starters, we now know that no gene acts alone. Complex traits — such as the diseases that most of us will eventually get — result from the interactions among multiple genes and/or environmental factors. Predicting disease depends not just on identifying our genotype, the particular, unique set of DNA sequence variants we inherited, but also on predicting our future environments — what we’ll eat, drink, or breathe, the medications we’ll take, and so on — which neither DTC companies nor anyone else, no matter how ‘expert’, can do. Because environments vary and every genome is unique, multiple studies of a given trait or disease will generally yield different results. DTC estimates of disease risk are inherently probabilistic, not fixed. The same applies to choosing a sperm donor based on behavioural traits — of which any genetic component would likely be swamped by cultural and environmental factors, such as the food the donor was exposed to when growing up, or whether he could afford to go to university. The metaphor that corporations and nations have their own DNA, and the belief that genes have straightforwardly determinative effects, might provide a comfortable, tempting image of simple cause and effect. But it’s akin to replacing the religious concept of ‘soul’ with the modern, scientific one of ‘gene’, and that’s very misleading. 
It tends to assign a kind of fixed metaphysical essence, analogous to Calvinism’s predestination, and drastically simplifies what are actually complex phenomena (dogmatic beliefs are like that). And there are consequences. Genes are certainly real, so it’s important to understand what they can tell us about ourselves. You might be told that, based on your genotype, you have a (let’s say) 15 per cent chance of heart disease. This is a risk, or probability, not a certainty, nor anything like it. Probabilities are not the same as ‘causes’, and they can be extremely difficult to grasp. For example, even in the simplest situation, such as when we flip a coin to see who pays for the drinks, we might say our thumb is the cause of the flip itself, but we tend to think of the actual result — the ‘heads’ or ‘tails’ — as down to ‘chance’. But what do we mean by chance? It is easy to wrap our heads around ideas such as coin-flipping by assuming that every flip has a 50-50 chance of a heads or tails result. Sounds simple enough, but what if we need to predict the specific outcome of a large number of such flips, somewhat like the challenge we face in predicting, from a person’s genotype, the risk of life events such as a heart attack or diabetes? Each of us has a unique set of variants in perhaps hundreds of different genes that separately contribute to the probability of disease. What will each one do? Will they flip as ‘illness’ or ‘health’? Is that even a realistic question? Unlike coin-flipping, disease prediction depends on knowing, assuming or guessing the underlying risk associated with each individual genetic variant, risks that differ from gene to gene, and that do not work like the simple heads and tails of a coin (which, if the coin is fair, will always carry the same risk). What kinds of ‘probabilities’ are they when it comes to understanding what can be predicted from an individual’s genes and the major life decisions that might follow? Just as a coin heavily biased towards heads can come up tails on a given flip, a person inheriting a genotype that raises the risk of diabetes might not in fact get the disease. And risks can easily be perceived as more serious than they actually are, even if we assume the risk estimate is solid. If the risk of a given disease is, say, 2 per cent in the general population, and our best guess is that your genotype raises that by a whopping 25 per cent, that still only changes your actual risk to 2.5 per cent. So far, we’ve considered purely physical traits. What role do genes play in non-physical traits such as behavior, or even the ultimate questions of consciousness and free will? Here, the metaphoric replacement of ‘soul’ by ‘gene’ works in a different way. How much of our feelings, thoughts and behavior is actually determined from the moment we are conceived, and could in principle be read like a computer program from our genome? The extent to which we have free will is a fundamental aspect of how we view our ‘selves’, and for many religions, relates to whether we can be held responsible for our moral behavior. The scientific view, on the other hand, goes something like this: we live in a totally material world made of matter, energy, and the forces that connect them. 
Since genes are the fundamental causal elements of life, it would seem inevitable that, if we knew enough, we could predict everything about all of us — our health, our behavior, and our ideas. The alternative would seem to be mysticism — invoking some sort of immaterial something-or-other that we can’t measure but that affects who and what we are. But if genetic prediction is so unreliable and complex, how did our view of ourselves get so entangled in genetic determinism in the first place, and what might all this tell us about not just physical traits, but such elusive ideas as free will? Today’s gene metaphor is a fabric woven of two threads from the 19th century. In 1858, Alfred Russel Wallace and Charles Darwin proposed a stunning new framework for understanding life in a way that was entirely materialistic and freed from mysticism. The diversity of life, they said, is due to the historical process of evolutionary divergence from common ancestry, in which present-day traits and functions are an outcome of natural selection. Darwin and Wallace developed their theory in the Newtonian era, when the aspiration of science was to understand existence in terms of ‘laws of nature’. Darwin viewed natural selection as, like gravity, a ubiquitous, essentially deterministic causal force in a relentlessly competitive world, a view he expressed in the foundation text of evolutionary biology, On the Origin of Species (1859). Evolutionary determinism was the first thread of the gene metaphor. Natural selection preserves only what is inherited from the successful organisms in the past. The second thread comes from Darwin’s contemporary, Gregor Mendel, who conducted his studies of peas in order to understand the nature of inheritance. His findings also fit the Newtonian worldview perfectly. If natural selection was a law of nature, like gravity, then Mendel’s laws of inheritance promised to identify the fundamental building blocks of biological causation. By choosing specific traits that he knew bred true, Mendel identified a pattern of inheritance that provided perhaps the most powerful tool for research design in the history of science. The genetic research that followed eventually led to the identification of the nature of DNA, the locations and structure of genes in DNA, and the understanding of how they code for proteins. But that same Mendelian thinking made us conceptual prisoners of the deterministic, law-like interpretation of genetic function that leads us to think of traits themselves, not just genes, as discretely packaged units, produced by discretely packaged genes. That suggests that a pea seed already contains mini-green peas, or that a fertilised human egg contains a tiny human: a kind of genetic superstition. Mendel showed that inheritance was probabilistic in the same sense as coin-flipping. Each parent carries two copies of every gene, and they each transmit, at random, one of those copies to each of their offspring. But once the particular randomly transmitted copies are inherited, their effects in the offspring follow causally deterministic principles: the resulting peas were either green or yellow, wrinkled or smooth. There are plenty of instances in which genes do seem to be determinative, and work as they did for Mendel’s peas. Hundreds of known diseases, for example, appear to be caused by one or just a few genetic changes that disrupt or destroy a gene in some major way. 
Examples include cystic fibrosis, muscular dystrophy, and diseases of the nervous system such as Rett syndrome or Tay-Sachs disease. But as a rule, these ‘Mendelian’ diseases are a minority of rare traits that appear early in life, regardless of lifestyle exposures. The success of medical genetics in picking this easy-to-find ‘low-hanging fruit’ hasn’t given us a way to harvest the rest. This isn’t for want of trying. Billions of dollars have been spent on searching for ‘the’ genes ‘for’ such common diseases as obesity, heart disease, type 2 diabetes, stroke, hypertension, cancers, asthma, and countless other afflictions. There have been few notable successes. The frustration is great because for most traits, including most diseases, members of an affected person’s family tend to have increased risk of the same trait or disease, in ways that can’t entirely be blamed on shared environment. This strongly reinforces the DNA metaphor by suggesting that genes must be contributing to risk in important ways. But if so, how can they be as slippery as eels when we try to find them? The reason is that the fabric of genetic causation is probabilistic both in terms of the inheritance of genes, and their effects. The standard ‘scientific method’ we were all taught in school was based on stating, and then testing, a specific hypothesis about what causes some outcome; for example, that mutations in the LDL receptor gene (which affects cholesterol levels) can cause heart disease. However, most studies of such specific hypotheses have come up empty. The growing availability of wholesale DNA sequencing technology, largely initiated by the completion of the Human Genome Project in 2003, led to the widespread abandonment of standard hypothesis-based genetics, to be replaced by what is called ‘hypothesis-free’ genomics. In keeping with the DNA metaphor, the idea of the genomic approach is to assume that genes simply must be causing a trait of interest, and to look across the entire genome to find variants that are more common in individuals with the trait than in those without it. The hope was that we would soon eliminate the debilitating or fatal diseases to which most of us now fall victim, once we had exhaustive knowledge of genome-wide variation. Genomic studies searching for causal genes have grown ever larger and more expensive, but commensurately important results have yet to roll in. Most of the estimated overall genetic influence on the traits or diseases of interest is still unidentified. What we’re finding instead is ‘polygenic’ causation, that is, that many different parts of the genome contribute mainly trivial individual effects. A typical well-studied example is Crohn’s disease, an inflammatory bowel disease that runs in families, and thus would seem to have a major genetic component. However, the most recent study, by Heather Elding and colleagues at University College London, published in The American Journal of Human Genetics, estimates that the number of genes associated with the disease is around 200, most with very small effects, which explains only a small amount of the genetic background of this disease. 
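To see why an influence spread across so many genes predicts so little for any one person, here is a toy simulation. Every number in it is invented for illustration; it is not a model of the Crohn’s study itself. Two hundred hypothetical variants each nudge the log-odds of disease by a tiny, random amount, and even the most ‘at-risk’ genotypes end up with only a modestly raised, still small, absolute risk:

```python
# Toy polygenic-risk sketch with invented effect sizes and carrier frequencies.
import math
import random

random.seed(42)

BASELINE_RISK = 0.005                      # assumed population risk of the disease
N_VARIANTS = 200                           # hypothetical associated variants
variants = [(random.uniform(0.05, 0.5),    # carrier frequency
             random.gauss(0, 0.05))        # small effect on the log-odds of disease
            for _ in range(N_VARIANTS)]

def individual_risk():
    """Disease risk for one simulated person, given which variants they carry."""
    log_odds = math.log(BASELINE_RISK / (1 - BASELINE_RISK))
    for frequency, effect in variants:
        if random.random() < frequency:    # does this person carry the variant?
            log_odds += effect
    return 1 / (1 + math.exp(-log_odds))

risks = sorted(individual_risk() for _ in range(10_000))
print(f"median genetic risk:          {risks[len(risks) // 2]:.2%}")
print(f"99th-percentile genetic risk: {risks[int(0.99 * len(risks))]:.2%}")
```

Change the invented effect sizes and the exact percentiles move, but the qualitative picture stays the same: many weak coin flips, flipped in a different combination by every individual, add up to very weak prediction for any one of us.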
To liken this again to coin-flipping, variants at each ‘causal’ gene affect risk in some probabilistic way, usually very small — far from 50-50 — and with no guarantee whatever that the same variant provides the same risk in different people who carry it, or in different populations, or in men or women, or at different ages. It’s as though each coin keeps changing its probability of coming up heads. Thus, the predictive power of this type of ‘personalised genomic medicine’ is generally very weak, like trying to predict the outcome of hundreds of individual-specific coin-flips. That’s why, with some fortunate exceptions, the clinical or therapeutic value of all these genetic studies has so far been slight. It’s a similar story for normal traits as it is for disease. Height is an easily measured trait that clearly runs in families, and many studies have been done looking for genes for this trait. More than 400 contributing genetic regions, from an estimated 700 or so, have been found but, again, none with very large effects. In fact, to date, only 10 per cent or so of the variation in height has been explained, as a study from Exeter University published in Nature in October 2010 demonstrated. Many more genes will be found to contribute, but environmental factors such as diet or illness will as well. Height and Crohn’s disease are just two of many instances of this same basic pattern. Behavioral and psychiatric traits are proving to be just as intractable, and the story is similar with the same kinds of studies in other species, as varied as yeast, insects, and plants. What is being documented is the blunt reality of the state of nature. No matter how unwelcome it might be for those who still hope for simple deterministic-like genetic causation, complex traits are affected (one should perhaps no longer say ‘caused’) by multiple genes with individually small and typically fickle effects. In addition, nobody disputes that there is usually a hefty, indeed often predominant, environmental component to the risk of disease, although it’s typically not very seriously considered by geneticists. These environmental factors are themselves quite complex and elusive to assess, or even identify. Perhaps the most important single fact lurking in all of this is that when numerous genes contribute to a trait, the specific set of contributing variants is different for every individual. This is a many-to-many causal relationship: there are many genetic paths to a single height, blood pressure, triglyceride, or cholesterol level. Equally, a given genotype is consistent with many different trait values. Each genetic variant is a very weak ‘coin flip’ with unstable probabilities, and everyone is flipping a different set of coins. So, even if we identify the genotype of an individual, we can’t as a rule accurately predict its effects, even though this is just what ‘personalised genomic medicine’ has promised to do. This makes another aspect of the DNA metaphor problematic. Instead of the widespread view of life as raw, relentless Darwinian competition leading to a single ‘fittest’ way to be, a far better way to see it is in terms of cooperation. By cooperation we do not necessarily mean the social, emotional variety. Cooperation describes the way in which a trait is produced by many factors, the countless genes and lifestyle aspects that contribute to the trait. If these factors do not work adequately together, the trait will not successfully be built into an embryo in the first place. 
Extensive webs of cooperation within us — genes with genes, organelles with organelles, cells with cells, tissues with tissues, and so on — mean that except for the rare disastrous instances, individual contributing genes neither spell doom nor success on their own. If there are many ways to fail — as the rare, serious genetic mutations show — there are a great many more ways to succeed. Another way to view cooperation among genes is that evolution has provided a kind of redundancy that protects individuals from harmful mutations and overly harsh screening by natural selection. If each gene is, in itself, not a deterministic cause of some useful trait, then the organism can often do just fine with modification of or even loss of that gene, because other contributing genes cover for it, or any one modification has only a trivial effect. We know, for example, that many well-known variants that are clearly associated with very serious human disease are the normal state in other species. Indeed, whole-genome sequence studies have consistently shown that all of us carry a significant number of defunct or seriously disrupted genes, and this can include genes whose mutations are clearly implicated in some disease contexts, even if we ourselves are healthy. All this might seem confusing: genes are molecules and hence fundamental causal agents of life, yet their effects are highly probabilistic and very hard to pin down or predict. As we have tried to explain, although genetics and evolutionary research are often very technical, the issues are actually reasonably simple. That’s fortunate, because an understanding of how life and evolution work as an orderly collection of uncertainties can lead us to a better sense of what is ‘inherent’ in our nature, and why. In this light, we can return to the intriguing topic of behavior and, particularly, of free will. What we know about life undermines the explanatory power of molecular reductionism: that is, the attempt to use genetic variants to predict not only physical traits but also higher-level phenomena – such as the ability to do calculus, or write poetry – which seem to ‘emerge’ magically out of nowhere. For scientists attempting to understand life’s complexity, this might be the winter of our discontent, but Richard III’s soliloquy was written in Shakespeare’s hand — not his genome. Complex organisation arises from webs of interaction among causal factors. Even if individual factors cannot be held responsible for particular developments, complex phenomena such as people, skills, skulls, languages, and even football teams clearly do exist, and have a material rather than any mystical or immaterial basis. In fact, emergent complexity takes essentially the same form, and presents the same challenge, in the very different contexts of biology, ecology, anthropology, sport — and free will. But here’s the conundrum we mentioned earlier: if science says that the world is an entirely material phenomenon following universal laws of causation, then even the idea that we are responsible for our thoughts and actions comes under siege. Personality? Intelligence? Criminality? Political preference? You name it — even our moral decisions must, in principle, be predictable from our inherent, inherited genome. Yet, our thoughts and actions seem to be even farther beyond the reach of gene-based prediction than physical traits such as diabetes or height, which we’ve seen to be extremely complex in their causation. 
Is this just a temporary limit in scientific knowledge, or is something more profound going on here? Perhaps as we are evolved biological organisms, uncertainty is unsettling to us The question is more than incidental, because it raises the rigid idea of a mind/body dualism. Dualism asserts that mind and consciousness, whatever they are, are free from the usual material constraints. In other words, we have free will simply because we feel that we do. Free will is at the heart of assumptions that we are morally responsible for our actions, which in turn affects social and legal policy as well as religious notions of earned salvation. Clearly, if individuals are just the product of their genes, then they can’t be held responsible. Yet, how can they not be the product of their genes? An answer might lie in the understanding of complex causation that we have presented here. We aren’t qualified to deal with religious issues about moral responsibility, but from a scientific point of view there is no mind/body dualism. Mind, wondrous though it might be, is in fact the product of molecular forces, including genes. Yet the mind seems fundamentally unpredictable from genes. The reason is that the brain and its activities are the result of countless billions, if not trillions (or more) of ordinary molecular and cellular interactions of all sorts, each of them probabilistic, from gene usage to the formation of neural connections, beginning before birth and extending over our lifetime’s experiences. During development, our brains are programmed to ‘wire’ up in a very general way, but the details in each individual are the result of experience, and our individual behaviours are the result of our brains responding to our unique set of experiences. We should not be at all surprised that, just like most other traits, behaviour is not specifically predictable from genes. The massive web of probabilism makes such prediction weak at best, just as we’ve seen for physical traits. Our mental activities feel as if they are free, and their unpredictability supports that feeling. But the reason is that the causation involved is so complex and deeply probabilistic that it is, in effect, unpredictable even if we were to try to enumerate all the contributing factors. In that sense, for all practical purposes, we are indeed free. It is sobering to point out that none of these issues about determinism, probability, complex causation — and even their implications for free will — are new. They can be traced back to the classical philosophers, and were vigorously debated along with the development of probability and statistics through the 18th and 19th centuries, and then reinforced by discoveries in sub-atomic physics in the 20th century. The significance and challenge of probabilistic multifactorial causation have been recognised. What is new is that we have much better documentation of this problem from a genetic point of view. But, conceptually, we have not advanced very much in our understanding of what are deeply puzzling aspects of the way the cosmos — including life — works. Human beings don’t like things that are unexplained. We want the comfort and sense of safety that comes from predictability. Perhaps as we are evolved biological organisms, uncertainty is unsettling to us. And, in the scientific era, we assume a material understanding of causation. That’s what the idea of determinism represents in a simple, easy-to-grasp way. 
We want to be in control, to be able to manipulate nature to alleviate the problems that we face in a finite life in a finite world. We want our causes to be simple, real causes, and that is perhaps why the metaphor of the gene as the atom of causation in life is so easy to absorb, and its subtleties so easy to overlook. We are made very uneasy by things that are only probabilistic unless, as in coin-flipping, we can sense what’s going on. When we can’t see it, and causation is many-to-many, that is far too much for our minds to deal with easily. Yet that seems to be the reality of the world. | Anne Buchanan & Kenneth Weiss | https://aeon.co//essays/dna-is-the-ruling-metaphor-of-our-age | |
Demography and migration | There’s a reason why the Bible is silent about the colour of Jesus’ skin. So why has this become an issue for our age? | Last month, American television audiences were shocked: when Satan showed up in the History Channel’s new mini-series The Bible, he looked strikingly like President Barack Obama. Responses were quick, and they came on all types of media from Twitter and Facebook to CNN and Fox News. Complaints sounded so loudly that the producers of the show were forced to respond, calling it ‘nonsense’ that they purposefully cast the Moroccan actor Mohamen Mehdi Ouazanni as Satan to look like Obama. The controversy hasn’t hurt the ratings for the 10-hour series. With more than 10 million people in the US watching each episode, The Bible has been the biggest cable TV hit of the year. One of the reasons for its popularity is that Americans care deeply about how biblical figures are represented in the flesh. Whether discussing the darkness (and Obama-ness) of Satan or the ‘sexy whiteness’ of Jesus, the ethnic ‘look’ of the characters has been just as important as what they have said or done on screen, if not more so. This is not the first time US audiences have fixated on the portrayal of Biblical bodies. In 2004, they flocked to movie theatres to watch Jesus tortured and killed in Mel Gibson’s The Passion of the Christ. In that film, Jesus never spoke English, but his brutalised body was on display front and centre. In previous decades, people asked Martin Luther King Jr what Jesus looked like, and during the 1920s, Americans debated whether it was appropriate to show Jesus in films at all. In the Bible itself, bodies matter, but not the way they do now. The ancient texts have sick bodies and healed bodies, pierced bodies and resurrected bodies. But for the most part, the Bible is pretty quiet about the colour of those bodies’ skin or the tone of their hair. To understand our contemporary obsession with the actors’ bodies in The Bible mini-series, we need to consider why something that is so silent in the Bible has become so salient in our approaches to it. Historically, many religious teachers in the US have been keen to downplay the physical characteristics of figures in the Bible, warning that such attention to the merely manifest might divert one from true spirituality. In colonial New England, Puritans differentiated themselves from Catholics by refusing to display Jesus, God, or the Madonna in their churches or on printed materials. Puritans were not absolute in their iconoclasm: they were fine with other representations, and regularly used small figures in educational books. Satan, moreover, was sometimes represented as a horned, winged, and emaciated dark figure (he was, after all, the ‘prince of darkness’). But to see the devil or one of his minions in the flesh was a terrifying experience, and one that could get you executed in the colonies. Moroccan actor Mohamen Mehdi Ouazanni plays Satan in the History Channel’s The Bible mini-series. Photo courtesy Lightworkers Media / Hearst Productions Inc. Throughout the 19th century, as new technologies allowed for the mass production and distribution of Bible images, some religious teachers worried that they could hinder the mission of the Church. One Presbyterian minister in New York City cautioned his congregants in the 1880s not to trust the imagery of Jesus they saw in picture-book Bibles and on stained-glass windows. 
‘It is a remarkable thing in the history of Christ that nowhere have we any clue to His physical identity. The world owns no material portraiture of His physical person. All the pictures of Christ by the great artists are mere fictions.’ Just as it was time for slavery to end, it was also time for women and men of colour to refuse the language and images that associated darkness with evil, and whiteness with good There was a serious theological reason for that minister’s concern: the lack of biblical detail about Christ’s physical features was crucial to the universal appeal of Christianity: ‘If He were particularised and localised — if, for example, He were made a man with a pale face — then the man of the ebony face would feel that there was a greater distance between Christ and him than between Christ and his white brother.’ Instead, because the Bible refused to describe Jesus in terms of racial features, his gospel could appeal to all. Only in this way could the Church be a place where the ‘Caucasian and Mongolian and African sit together at the Lord’s table, and we all think alike of Jesus, and we all feel that He is alike our brother’. The theme of a universal Jesus has been a common response from American Christians to the question of what Jesus looked like. In 1957, Martin Luther King Jr’s advice column in Ebony magazine received a letter that asked: ‘Why did God make Jesus white, when the majority of peoples in the world are non-white?’ King answered with the essence of his political and religious philosophy. He denied that the colour of one’s skin determined the content of one’s character, and for King there was no better example than Christ. ‘The colour of Jesus’ skin is of little or no consequence,’ King reassured his readers, because skin colour ‘is a biological quality which has nothing to do with the intrinsic value of the personality’. Jesus transcended race, and he mattered ‘not in His colour, but in His unique God-consciousness and His willingness to surrender His will to God’s will. He was the son of God, not because of His external biological makeup, but because of His internal spiritual commitment.’ But in a society that separated people based on colour, God’s son wasn’t the only challenge for image-makers: the devil was, too. During the Civil War, one northern African-American, T Morris Chester, had announced that just as it was time for slavery to end, it was also time for women and men of colour to refuse the language and images that associated darkness with evil, and whiteness with good. Nearly a century before Malcolm X gained notoriety for such claims, Chester asked his fellows to wield consumer power to effect change. If, he said, you ‘want a scene from the Bible, and this cloven-footed personage is painted black, say to the vendor, that your conscientious scruples will not permit you to support so gross a misrepresentation, and when the Creator and his angels are presented as white, tell him that you would be guilty of sacrilege, in encouraging the circulation of a libel upon the legions of Heaven’. By refusing the idea of the dark devil, Chester was going up against centuries of Christian iconography. Throughout medieval Europe, it was entirely regular to describe Satan as dark or black. Witches were known for practising ‘dark arts’, and in early colonial America when British immigrants to the New World accused others of being witches, they too conflated darkness with the demonic. 
The devil was everywhere in Salem in 1692, and he could take any number of physical forms. He did not always come in blackness or redness: Sarah Bibber saw ‘a little man like a minister with a black coat on and he pinched me by the arm and bid me to go along with him’. But most often he did: one witnessed Satan as a ‘little black bearded man’. Another saw him as ‘a black thing of a considerable bigness’, and yet another beheld the devil in the form of a black dog. The devil came as a Jew and as a Native American as well. In The Wonders of the Invisible World (1693), the Puritan theologian Cotton Mather associated Indians and black people with the devil: he wrote that ‘Swarthy Indians’ were often in the company of ‘Sooty Devils’, and Satan presented himself as ‘a small Black man’. Because of America’s history and its contemporary demographics, there is almost no way to depict Bible characters without causing alarm In the 20th and 21st centuries, debates over how to depict biblical figures have grown louder and more contentious. In large part, this is because of the increased importance of visual imagery in US culture. Whether at the movies or on TV, in magazines or on the internet, Americans produce and consume images at a staggering rate. Even in the 1930s, some African-American teenagers who took part in sociological surveys answered the question ‘What colour was Jesus?’ with ‘All the pictures of Him I’ve seen are white.’ That seemed definitive enough. Decades later, when Phillip Wiebe, professor of philosophy at Trinity Western University in Canada, interviewed people for his book Visions of Jesus (1997), a man named Jim Link reported having a visionary experience in which Jesus ‘had a beard and brown shoulder-length hair, and looked like the popular images of Jesus in pictures’. At times, films have tried to avoid controversy by obscuring biblical characters, as in Ben-Hur (1959) or The Robe (1953). In those cases, we see the back or the arm of Jesus, but never his face. At other times, filmmakers have seemed to beg for controversy, such as the casting of the black actor Carl Anderson in the role of Judas Iscariot in the film Jesus Christ Superstar (1973), released just five years after Martin Luther King Jr’s assassination. Questions of race and identity have now become inescapable elements of any public presentation of the Bible. Mel Gibson digitally altered The Passion of the Christ (2004) to transform the actor Jim Caviezel’s eyes from blue to brown — in an attempt to make his Jesus character look more Jewish. But even with this change, and a prosthetic nose attached to Caviezel’s face, some critics nonetheless denounced the film for presenting Jesus as a typical white American man, excluding, as those earlier ministers had worried, the ‘man of the ebony face’. The Bible mini-series is yet another example of how Americans have portrayed Bible characters visually, debated what those characters did or should look like, and discussed whether those figures should be put into flesh at all. The debates haven’t simply been about religion. They have also shown how entangled politics and religion are in America, with questions such as whether President Obama is working on the side of God or the side of the devil. 
And big money is involved — whether in the form of high ratings and advertising revenue from TV and film aimed at the huge evangelical Christian market, or in the lucrative industries that publish Bibles and tracts depicting, perhaps unwittingly, Jesus and the devil on opposite sides of a racial divide. Because of America’s history and its contemporary demographics, there is almost no way to depict Bible characters without causing alarm. To call Jesus ‘black’ signals political values that are associated with the radical left. In 2008, President Obama’s pastor Jeremiah Wright almost cost him the Democratic nomination because of his claims that ‘Jesus was a poor black man’. However, to present Jesus as white in a society where African-Americans, Asian-Americans, and Latino Americans make up increasing numbers of the population is quickly understood as a code for a conservative worldview. Little wonder, then, that some Americans are choosing to describe Jesus as ‘brown’ as a way to avoid the white-black binary. If one attends an anti-conservative rally in the US, for instance, one is likely to find a poster that reads: ‘Obama is not a brown-skinned, anti-war socialist who gives away free health care. You’re thinking of Jesus.’ | Edward J Blum | https://aeon.co//essays/was-jesus-a-white-man-and-the-devil-black | |
Stories and literature | UFO sightings are down. Ghosts are in decline. Are we more discerning now, or just afraid to trust anything? | One late evening in the early summer of 1981, lying sleepless in my student bedsit at the top of a house in the Fallowfield district of Manchester, I became aware of a pattern of bright flashing lights on the wall. All I could see through the curtainless window on the opposite side of the room was a strip of rather cloudy night sky. The vivid flashing was coming from within, or perhaps behind, a bank of cloud. As I continued to watch, an object materialised from within the cloud, advancing until it stood in plain view in the night sky. It was a strikingly large craft of some kind, flattish but with rounded edges, like an old-fashioned bedwarmer, or perhaps a huge English muffin. It was sparkling-silver and covered all over with a regular pattern of flashing white lights. After hovering for a few seconds, it began to move across the sky, and as it reached the right-hand frame of my window, I leant over the side of the bed to keep it in view. At a certain point it ceased its progress and, at the same sedate pace, retraced its route back to its starting-point. There it lingered for a few more seconds, before retreating into the cloud-bank until its evanescent flashing had entirely dissolved from view. Only then did I collapse out of bed and start frantically pulling on clothes. I rushed on foot to my girlfriend’s place to gibber out an account of the incident. Convention demands the following declaration: I had not been drinking or taking drugs, I hadn’t dozed off and reawoken, and I wasn’t in a general state of agitation. It was a perfectly normal evening: I had gone to bed and was waiting to fall asleep. Nothing remotely similar has ever happened to me before or since. If everybody is entitled to at least one experience of the paranormal or unexplained, this was mine. For the three to four minutes that the whole episode lasted, it filled me with a mixture of trepidation and thrill, with an intimation that there might after all be another reality beyond the everyday one. The classic mise-en-scène for a UFO sighting was a remote, deserted location — country roads or woodland at night, or outside a ranch in New Mexico. A large spaceship hovering above Manchester should have been seen by tens of thousands of people. It wasn’t much after midnight, so there would still have been plenty of traffic on the streets. I followed the local news and talked to everybody I knew about it, but apparently only I had seen it, from my bedsit room in Fallowfield. Years later, when the archive of reported sightings processed by the now defunct UFO desk at the Ministry of Defence went online, I searched through the lists for 1981. There was nothing that resembled my sighting, and nothing at all in the whole of the UK for the month in question, or the months before and after it. The spectacular fulfilled its purpose in shoring up devotion, transporting the soul, training the inner vision on higher things There are no UFOs, and there never were. That, at least, is the official story, and it commands acceptance. There was something reassuring in the notion that the Ministry of Defence took them seriously enough to monitor reports, and perhaps even a trace of disappointment that virtually none of those alleged sightings was left unexplained when the desk closed in 2009. 
They were all night-flying aircraft, weather balloons, comets, car headlights seen at unusual angles through trees and mist, often by people who had been drinking, or who were half-asleep, or of whom it could be said, in the judicial discourse, that the balance of their minds was disturbed. Some of the famous photographs are of Frisbees. Whatever I saw in Manchester was there in front of me — there remains no doubt in my mind about that, even after 32 years — but I have never worked out what it was. UFO sightings reached their height roughly within a decade of the release of Steven Spielberg’s spellbinding film Close Encounters of the Third Kind (1977). One good reason to believe there were never any UFOs is that nobody sees them any more. Once, the skies were refulgent with alien craft; now they are back to their primordial emptiness, returning only static to the radio telescopes, and offering the occasional meteor shower to the wondering eye. It isn’t only flying saucers that have receded into history. They are being followed, more gradually to be sure, by a decline in sightings of ghosts, recordings of poltergeists, claims of psychokinesis and the rest, as is regularly attested by organisations such as the Society for Psychical Research in London and the UK-wide research group Para.Science. Many of those with a vested interest in the supernatural industry naturally resist this contention, but there is far less credulity among the public for tales of the extraordinary than there was even a generation ago. The standard explanation attributes this to growing scepticism. But, as is only fitting for the paranormal, it might be that there are more mysterious forces at work. In The Society of the Spectacle (1967), the foundational text of Parisian situationism, the French Marxist theorist Guy Debord argued that consumer culture had acquired the dimensions of an alternative reality: it had replaced the dull, grey world with its own, phantasmatic iridescence. It didn’t matter whether or not everybody genuinely could buy a part of the universal plenty. What mattered was the mythology, the illusion of bountiful possibility and limitless choice, wrapped up in a spectacularity borrowed from the film and television industries. Debord was not the first to remark on this. When the social theorists of the Frankfurt School arrived in New York during their wartime exile in the 1930s, they found the giant billboard ads for toothpaste even more nerve-jangling than they had expected. Here was a culture entirely mortgaged to the secular spectacular. In previous centuries, what was visually remarkable stood for the other-worldly, the spiritual. The baroque façades and soaring spires of cathedrals, the carmines and cobalts of stained-glass windows with the sun streaming through them, devotional processions and carnival parades, gargoyles, misericords, miraculous relics — all attested that there was an intangible reality beyond the physical one, a reality that could at most be suggestively delineated in extraordinary sights. By the time of the European Enlightenment, the sublimity of nature, together with its representation in the bravura period of landscape painting, achieved the same effects. To be sure, there was always an impulse against these manifestations of visual culture. The very fact that they can be seen, and that in some cases they bear the traces of human artifice, tells against their association with the other-worldly. Arthur Schopenhauer at his most biliously saturnine would have none of it. 
To those who would counter the argument that the world is a dungheap of suffering with the pabulum that there are at least beautiful sights to see in nature, he scoffed: ‘Is the world, then, a peep-show?’ And yet, the spectacular did in fact fulfil its purpose in shoring up devotion, transporting the soul, training the inner vision on higher things My great-grandmother, a stranger to television, waved her handkerchief as the Queen’s carriage passed by on the small screen Where people were convinced that they had seen the other world impinging on material reality, or were persuaded that others had, the connection between what one could see and what one might believe grew deeper. Materialisations of the Blessed Virgin at Lourdes in France, at Knock in Ireland, and at Fátima in Portugal, suggested that the visions of the first Christians — those who not only saw but spoke to, ate with and touched their risen Lord — were still available for anyone with eyes to see. Similarly, the bodying forth of Roman centurions, headless noblemen, wailing women and whey-faced children, not to mention the ectoplasmic effusions at seances, bore fugitive witness to another dimension beyond the temporal one, a realm to which we were all evidently journeying. We knew this because, for a second or two, in the dead of night, in solitude, every now and then, the odd one of us could see it. The audience members who fled from their seats before the oncoming train at one of the Lumière brothers’ first cinema screenings in 1896 might look, in retrospect, as though they were fleeing in vain from the inexorable onslaught of the spectacular age. That they accepted the evidence of their own eyes turned out to be matter for derision. What motion pictures achieved was a simulacrum of reality, but one in which the world we were watching was unable to see us — an exact reversal of the centuries-long disposition of the sacred and secular realms. As late as 1953, when my family gathered to watch the coronation of Elizabeth II on the BBC, my great-grandmother, a stranger to television, waved her handkerchief as the Queen’s carriage passed by on the small screen. If the growing spectacularisation of media culture began to undermine belief in the spirit world, the widespread dissemination of video technology hastened its decline. Filming is now within the grasp of everybody with a smartphone. Closed-circuit television (CCTV) beadily observes the nothing that is all that seems to happen on deserted night-time streets. Video cameras used to be reserved for the signal events of a life (weddings, anniversaries, birthdays), but now scarcely anything is beneath the attention of YouTube. In the heyday of ghost stories, the elusive grail was a photograph or moving film of some spectral emanation. There should no longer be any technical obstacle to providing this, and yet all we see is the odd whitish blur that could as easily be a mark on the screen. What these countervailing powers have brought about in postmodern society is the wrong kind of scepticism. A large element of rationalist doubt certainly accompanies the decline of interest in the paranormal, driven primarily by these cultural and, latterly, technological factors. Yet underlying that doubt itself is the growing incredulity with which people evaluate anything. Supermarket discounts appear to offer wines at half-price; products for smearing on your face purport to make you look younger — these are the all-too-evident mendacities. 
The homilies of party politicians at election time sound like the exclamatory drivel of PR companies. And the way this stuff has permeated culture as a whole has bred a widespread incurious scepticism. We now extend the same degree of undifferentiating refusal even to those phenomena that, while hard to credit, deserve to be heeded. Climate change might be the most obvious current instance but, at its most noxious, scepticism results in an unwillingness to believe in others’ suffering. The attitude of wholesale rejection, by which one might stand a chance of becoming impervious to fraud, is thus bought at the ever greater risk of nihilism. To Debord’s generation, spectacle culture was responsible for weaving an ineluctable web of deceit around its clients, blinding them to the true nature of reality. In fact, the opposite has turned out to be the case. Notwithstanding its pervasiveness, there is virtually no one who doesn’t secretly know that he is being cheated. Nothing ever quite lives up to its billing. Grandiose claims framed in the hysterical superlative — ‘The most terrifying movie ever made!’, ‘The funniest novel I’ve read all year!’ — carry within them the seeds of their own refutations. So ingrained is this habit of disbelief that it comes to seem as though there is nothing that isn’t part of the scam. True scepticism lies in the considered suspension of belief, the opposite of that state of mind in which, as Samuel Taylor Coleridge suggested, we attend to tales of the otherwise incredible. The patron saint of this true scepticism is Thomas Didymus, or Doubting Thomas, who has erroneously come to be associated with weak-kneed faithlessness, but whose pathos consists precisely in his steadfast loyalty to his late teacher. It is this, after all, that prevents him from accepting what sound like fantastical stories of Jesus’s reappearance. When the resurrected Messiah informs him that it is blessed to believe without having seen, the moral applies to nobody present, as the other disciples have already declined to believe Mary Magdalene. The account is, of course, addressed to succeeding generations, but the salient point is that Thomas doesn’t stand condemned for his fidelity. Even so, the early councils of the Church would not be content with faith as mere obedience to divine precept and the exercise of goodwill to others. Instead, they insisted on a legalistic, faith-based version, in which a weekly statement of what one actually, literally believes is required of its participants. Thus did the Church hand the Enlightenment, and its current crop of science apostles, the free gift of an evidentiary case against itself. The visible and the invisible, the material and the spiritual, the phenomenal and the noumenal are no longer the distinct realms they once were In contrast to Thomas, contemporary scepticism takes the form of playing along with the racket because there seems to be no alternative, while privately knowing that it can’t deliver what it promises. In this cast of mind, somebody’s hyperventilating tale of a translucent wraith seen drifting about the stately home, or the disembodied footsteps that clatter up and down the stairs when everybody is tucked up in bed, is neither more nor less believable than the long-range weather forecast. The dignity of spooky stories was that, unlike obvious tissues of lies, they occasionally managed to cross the divide between the highly unlikely and the just barely credible. 
If they could never be proved, neither could they ever be disproved — except by pointing to the laws of physics, an alienating language spoken by experts who couldn’t conceal their contempt for ordinary gullibility. Now that so much of the culture of the spectacle evokes the same response, the laws of physics have no greater claim to finality than do poorly produced video-hoaxes on YouTube. The visible and the invisible, the material and the spiritual, the phenomenal and the noumenal are no longer the distinct realms they once were. They have become mutually permeable to their mutual diminishment. We seem to see to the heart of things, to what Kant knew as the thing-in-itself, to a degree undreamed of at the high-water mark of pure reason in the 18th century. The cameras of natural history programming miss nothing, even at the cellular level, even in pitch dark, and yet everything looks like the video that it is. There are those who continue to believe the Moon landings were a hoax just because the film evidence looks so fake, and could so easily have been produced in a studio. By contrast, the notorious black-and-white alien autopsy footage from Roswell, New Mexico is an insultingly obvious fraud, as educated people reassured each other at the film’s emergence in 1995, having forgotten for a moment that the absurdity lay not in the cinematography but in the very idea of a humanoid space-creature. Seeing and believing used to belong together only when they occurred in the mass. When individuals claimed to have seen something extraordinary — a man with two heads, the Niagara Falls, tombs in the desert crammed with gold — their testimony was a challenge to credulity until it was demonstrated to be true. René Descartes undertook ‘never to accept a thing as true until I knew it as such without a single doubt’. Doubt was the primary basis for the rationalism of the Enlightenment, so the first witness to the miraculous had to be seen to have seen. In the age of electronic mass media, when so much flashes around the world instantaneously, when video clips, in a telling usage, ‘go viral’, there should be no doubt about what is real and what isn’t. Yet the critical mass is no longer critical. There is an air of the semblance, of ‘facticity’, about what we are urged to look at. The very fact that it is shrieking for public attention tends to speak against it. A couple of years ago, I saw a documentary about the UK’s dwindling UFO sightings. Various people who had reported them in the past were invited to relive their experiences, often going back to the very places where the incidents had taken place. Some of the interviewees were still as unshakeably convinced of the concrete reality of what they had seen as they were at the time, though the thrust of the programme was towards likely explanations, set against the general cultural fascination there once was in the idea of alien civilisations. One man had seen a mysterious object in the sky, some time (if memory serves) in the late 1980s. He had drawn a sketch of it soon after. Hearteningly enough, it was identical to mine. | Stuart Walton | https://aeon.co//essays/why-have-we-stopped-seeing-ufos-in-the-skies | |
History | If my grandfather could survive the Siege of Leningrad and still distinguish between a German and a Nazi, so can I | It has been seven years now since I moved from Russia to Germany, and not a month goes by without me landing in the same predicament. A party is in full swing. The plates and bottles are half-empty, the hurdle of small talk has just been overcome, and everybody is in pursuit of one imperative theme to seal the evening. Then somebody turns to me and asks: ‘How do you, a Russian — a Russian Jew, to be precise — feel in Germany?’ My friends, my relatives and total strangers — all seem to assume that I should feel uneasy in the country that initiated the Holocaust and, in 1941, invaded the USSR. ‘How is it possible,’ Germans want to know, ‘that you do not hate us?’ And my Israeli and Russian friends ask: ‘How is it possible that you do not hate them?’ For a long time, I didn’t have an adequate response to this question. I could urge forgiveness for Germany’s past, pointing to the admirable features of its present, but this sounded like an apology at best, and evasiveness at worst. Love makes all justifications sound unconvincing, and what keeps me in Berlin is certainly love — an inexplicable attachment to its bleak architecture, its sour humour, its dialect, flat and broad like the Prussian landscape. For years, something crucial, something pivotal that would free me from having to defend my choice, was forcing itself through the wire of small talk. But it never managed to get out. Then, one day not long ago, I found the answer, and it came from a most unexpected source. My grandfather, it transpired, had kept a diary during the Nazi Siege of Leningrad (now St Petersburg), in which at least 750,000 people died from starvation and bombings between 8 September 1941 and 27 January 1944. His diary had been shut away in a black wooden box for almost 70 years. Immediately after the war, it would have been simply dangerous to expose its contents. The ideologues of the Soviet state were busy constructing their own heroic narrative of the Blokada, depicting Leningraders as a single integrated organism, withstanding the atrocities of a German invasion with their heads held high. Great effort went into suppressing all accounts that might challenge the official story. Oral histories and private documents were only tolerated to the extent that they confirmed it. Joseph Bassin, my mother’s father — 27 years old, plump, Jewish and bespectacled — found himself trapped in the Siege by pure chance None of this is to deny that heroism and self-sacrifice were plentiful. The stamina of Leningraders is not to be gainsaid. However, details about the ruthlessness, cruelty and neglect that inevitably accompanied the fight for survival were buried for years. Thousands of Soviet people grew up with a sterilised notion of the Siege. Even in families of survivors, children often learnt nothing but the publicly acceptable narrative: either their parents wanted to protect them from the horrors of what they witnessed, or they had been taught to keep their mouths shut. It was only with the onset of glasnost in the late 1980s that shocking stories from the Siege began to emerge. Marauding, looting and even cannibalism were all revealed, as well as treason and venality on the part of the Party authorities. The first page of Joseph Bassin’s siege diary. Photo courtesy of the author. By that time, my grandfather was already dead. 
Another Siege survivor in my family — Frida, my grandmother on my father’s side — refused to discuss the months she spent in surrounded Leningrad, even though she had an extremely ready tongue for all other occasions. I grew up with the understanding that the Siege was not to be mentioned at home. Like my parents, I learnt about it from books. And then, last year, Frida died. The vow of silence that was meant to protect her from her memories could now be broken. My mother and I decided to climb the ladder in the home we had inherited from Joseph and remove that black wooden box from its place on the top of a bookshelf. Inside was a thick notebook with an oilcloth cover, filled with neat handwriting. It was absolutely undamaged. The title on the first page read ‘Notes on the War’. An aphorism from Goethe was scribbled below: ‘The one who writes the chronicles of the past, is the one who is willing to comprehend the present.’ An apt quotation, as it turned out. On a bright September evening in 1941, huge puffs of something white floated in the Leningrad sky: ‘For simple clouds they were too gorgeous,’ my grandfather observed, watching as they turned scarlet in the setting sun — ‘a stunning, but menacing sight’. He was right to worry: hanging over the city was a cloud of dust from Leningrad’s grocery warehouses, which were burning in shell fire. A mix of flour and sugar hovered over the city all night. In the morning, the Baltic wind blew it away. Nothing but several square miles of burnt soil, permeated with grain shreds and oily syrup, was left to feed a city of three million for the coming months. My grandfather watched the blaze from the window of a crowded evacuation train, squeezed between hundreds of people: The air was muggy, stinky and oppressive; it reeked of flatulent refugees engorged with bread and beans. It was terribly filthy, and worst of all, there were lice everywhere. An old man in my compartment was infested with at least a million fat, shiny parasites, but it didn’t matter how much we begged him to move away into the corner, he only grunted and did nothing at all. Worse than the dirt and stench was the overwhelming uncertainty. The news on the radio was vague. Nobody seemed to know exactly where the Nazi troops were, not even the railway administration. The train changed its destination every day, sometimes spending hours in the middle of a field waiting for the Luftwaffe’s attacks to pass over. The whole situation was extremely vexing to my grandfather: he never meant to be on that train. He knew that his place as a Communist and as a family man was elsewhere. Indeed, Joseph Bassin, my mother’s father — 27 years old, plump, Jewish and bespectacled — found himself trapped in the Siege by pure chance. He wasn’t even from Leningrad. He had been born in one of those Jewish settlements in the west of Russia of the kind that Marc Chagall painted in the 1920s, before they were destroyed by collectivisation. In his youth, Joseph broke with the religious traditions of his family, joining the Communist movement and enrolling in a technical university in Moscow. There he discovered a new family — a generation of revolutionary young Jews like himself, ambitious students and believers in social progress. One of them, a doctor named Bella, he married. By his mid-twenties, Joseph was a director of a small printing plant in Vyborg, 90 miles north of Leningrad. And that’s where he was when the war began. 
In early autumn 1941, he evacuated his plant and sent his pregnant wife away from the fighting. She got out in the nick of time. Local party bosses were too busy taking care of themselves. ‘They panicked first and were the first ones to send their property away,’ Joseph observed. Everyone else had to wait. The transport for Joseph’s workers and their families was assigned at the last minute: a 60-ton half-wagon meant for the transportation of dry goods, with no roof. Loaded with printing presses and 50 people — the pregnant Bella among them — the train finally left for the east one September night, heading towards Kuibyshev, on the shore of the Volga. Joseph stayed behind to file evacuation papers with the Party Committee. He thought it would only take a few hours, but the next morning, trying to catch up with his convoy by car, he discovered that there was no longer any way out. Only one road remained open and it led to Leningrad. Having no alternative, he turned the car around. Little did he know that the Nazi Ring was closing behind him. Joseph wrote: ‘One can stay human only under human conditions — or one turns into an animal’ In Leningrad, he found himself drifting aimlessly from one refugee camp to another, boarding dozens of trains that went nowhere. He was losing time. It wasn’t only the fate of his wife and their unborn child that worried him; the plant was weighing on his mind, too. There were important papers in his briefcase — the kind of papers without which no printing press could be assembled, no worker paid a salary. Joseph cursed the Party bosses for procrastinating, but no matter how many times he told the authorities that he was a director, it didn’t help. After trying for six weeks to break through the front line, he resigned himself to staying in Leningrad. It might have been worse: his wife’s relatives lived in the city, and his younger sister had just started to study there. Thus the Siege clasped him in its arms for the months to come. During those first October weeks, Joseph was bored. No letters came: not from his parents, neither from his wife, and not from his friends at the front. He longed for news so badly that he started hallucinating: ‘A neighbour has just passed by and announced: “Mail for you! Five letters!” I have jumped to the ceiling, but she only brought a newspaper. False alarm. My heart is still beating like mad.’ The hunger was already setting in but my grandfather didn’t feel affected by it yet. His primary concern was to make himself useful. He hovered around the city, frustrated and impatient to contribute to ‘the Victory’. He applied to the army several times but was turned down because of his high qualifications. A Communist in his heart, he felt he was involuntarily betraying the Motherland. On 24 October 1941, Joseph wrote: Among many other street posters I see every day, there is the one with a Red Army soldier holding a gun in one hand and pointing to you with the other. ‘What have you done for the front?’ he is asking. And indeed, what have I done for the front, particularly in the last few days? In the first two or three months of the war I was still in charge of a plant that published weapon manuals, propaganda leaflets and other useful print material, but now I am realising my worthlessness. A few days later, local authorities assigned Joseph the job of political instructor in the Leningrad Red Cross. The role did provide him with food stamps, but it didn’t make him feel much better about himself. 
As the Siege progressed, Joseph’s sense of normality kept stretching like a piece of grey rubber, accommodating ever more forms of privation. ‘The shells are exploding somewhere very close, perhaps in our street, but I don’t care any longer,’ he wrote. ‘I got used to it and am so fed up that I think death itself would be better than this permanent lingering.’ This dull, rubbery reality kept expanding for weeks. Then, one day, it snapped. He read it first in the local paper, and the diary records his shock: ‘52,000 Jews — children, women, men — were murdered by Germans in Kiev. Not 52 Jews, not 520; 52,000!!!!’ He went on: I want to avenge my mother’s tears, Bella’s sufferings, everything I have seen. The shells exploding outside, they are calling me into battle… In the future someone may ask me what I have done for the victory. And I will answer that I was a Red Cross instructor. It is an important job, of course, but any other educated person could do it equally well. A woman, for example. No, no, as for myself, I am going to the army recruitment bureau tomorrow. Did Joseph hate the Germans while he was writing these words? And shouldn’t this excerpt from his diary instruct me that I too should hate them, 70 years later? Such conclusions are tempting in their logic — but they are wrong. Joseph’s language alone suggests it. Everywhere in his diary and particularly at this spot he makes a great effort to distinguish between ‘Germans’ and ‘Nazis’. After referring to the enemy as ‘Germans’, he actually corrects himself: They are not even cannibals, not even animals. They are simply Hitler’s Nazis. Nothing else — just Hitler’s people, a special breed. Joseph, a Communist Jew, grew up during an era that aspired to global revolution, and German socialists were in its avant-garde. It appears to me that his consistent attempt to avoid the word ‘Germans’ meant that he remained true to the pre-war ideals of international solidarity, in spite of everything. For Joseph, watching his language was a matter of honour: a matter of differentiating between a German such as the communist Karl Liebknecht and one such as the Nazi Heinrich Himmler, between the Germany he loved and the ‘special breed’ of creatures that took charge of it. True to his word, Joseph applied to the army again and was refused again. ‘We’ve got more need for you here,’ they told him. Having no alternatives, he remained in Leningrad. That year’s winter was extremely cold, sometimes reaching below -40C. The city started to fall apart. There was no food, electricity or fuel. The trams stopped. Then, so did the plumbing: Whereas earlier one could get water in the laundry, in a house next door or, in the worst case, in the next street, now all of this has become impossible: there is no water anywhere. Somewhere in deep cellars — unfortunately, very far from us — it still continues to drip, and there are queues of 200 or 300 people waiting. All of it taking place in -30C frost. [We] queued from 9am till 2pm and had to return home without water: the dripping ceased before we got to the tap. Getting water from ice-holes in the city’s frozen rivers and canals became one of the most dangerous winter ordeals: dragging heavy buckets, emaciated Leningraders often slipped, fell and died, freezing, too weak to get up. The streets were littered with corpses, too. 
Joseph wondered how many he saw each day: ‘Perhaps 30 per day, but it could just as easily be 60.’ It wasn’t just the city but human dignity itself that crumbled under the pressure of unprecedented suffering: a particularly shocking discovery for Joseph, who honestly believed that Socialism was capable of creating a new man — one who remained decent under all circumstances. Referring to a famous statement by the illustrious Soviet writer and ideologist Maxim Gorky, he observed: You said, ‘A Human — that sounds dignified.’… You probably did not mean that young man, well-groomed and wearing a pin-striped suit whom I saw licking — yes, licking! — his plate in the cafeteria today? And, perhaps, not that old woman who hovered around and picked up bread crumbs from the floor, like a chicken. Not pieces, not bits — crumbs! Worse was to come. By mid-January 1942, there were no more plates to lick and no more crumbs to pick up. Death itself turned into a subject of trade. Some were taking advantage of the corpses: a colleague told Joseph she saw a female body with buttocks carved out with a knife. The others engaged in a different economy — saving their own fading energy: A man stood on the staircase and watched the policeman complete a death protocol. The policeman said: ‘With this paper you must go to the City Health Department and get a death certificate. With it, you will go to the registrar’s office and get a burial permission, and then you can bring the corpse to the cemetery.’ The man watched him and contemplated for a minute. I could read his thoughts: ‘How can I possibly run all these errands alone? Where will I get a coffin? Who will dig a grave in this frost? Who will help me?’ After a pause, he turned to the policeman and said: ‘Look, the other one is about to bite the dust, too. She’ll be dead by the evening. I’ll wait till tomorrow and have the papers for them both done.’ This is dignity, this is decency! One can stay human only under human conditions — or one turns into an animal. Family remained the only place where Joseph encountered kindness, compassion and the ability to reason. He found these especially in his sister, an emaciated student. He loaned her his pass to his workplace cafeteria, because the food was slightly better than at her college. She insisted on sharing her portion with him, and wept that he wouldn’t come to eat with her. ‘She does not listen when I try explaining to her that I am doing much better than herself,’ he wrote. He learned to treasure spontaneous mercy more than any kind of doctrinaire heroism Indeed, compared to other Leningraders, Joseph led a ‘luxurious’ existence by his own admission, getting 350 grams (12 oz) of bread per day, a few spoonfuls of watery porridge and something called ‘intestine mince’ — a substance ‘looking like chopped herring and smelling of rotten fish’. ‘I feel ashamed to say it, but I am only a human being, nothing more, and this is why I still keep my job at the Red Cross,’ he wrote in mid-January 1942. ‘To drop this job would be morally correct, but it means starvation.’ By February 1942 it had become apparent to Joseph that everyone in the city would die soon, including him. If hunger didn’t finish them off, then they could count on the epidemic fumes rising from corpses in the streets and from human waste frozen into the ice. Leningraders, it had become clear, were left to their own devices. There was no longer anywhere from which to expect help. Radio broadcasts stopped. 
Even those last bits of rationed bread — ‘400 grams for manual workers, 350 for clerks, 250 for the unemployed and children,’ as Joseph’s entry on 25 January 1942 noted — had vanished. And yet, at the bottom of this descent, a choice was waiting for him. After many months of dismissing his every effort to ‘contribute to the Victory’, the authorities had finally heard his plea. They offered him the directorship of a large printing-shop, an offer that an engineer of his rank could only dream of, not to mention a task worthy of a Communist. By accepting it, Joseph would become responsible for printing Leningradskaya Pravda, the one remaining newspaper in the city; the very newspaper, in fact, that had brought him news of the Holocaust three months earlier. Yet at the same moment, a long-awaited evacuation permit arrived. This would be Joseph’s last chance to break out to his wife, to his newborn child, and to his own plant. My grandfather spent several days contemplating. He experienced the ultimate, abysmal loneliness of a person about to determine his fate for the rest of his life. His sister, he knew, would be taken care of by relatives. But what about him? ‘I feel terrible pain,’ he wrote. ‘It is very difficult. And there is no one around I can share this feeling with. I am a lonely man.’ The next day, he reached his decision. He would leave. Joseph Bassin departed Leningrad on 3 February 1942. Climbing on to the evacuation train towards the ice road on Lake Ladoga, the only route out of the city, he had even fewer chances of getting to his destination than in the previous September. A German shell might strike them; a crack in the ice was enough to sink a lorry full of refugees: such disasters happened all the time. When it came to it, though, Joseph found himself concentrating on a different kind of horror — that of human ruthlessness: The train stopped at some passing loop where dozens of railways met. Here we were meant to get into the lorries… Corpses drawn into a pile, undressed and barefoot, looked particularly terrible here, in that place where hunger sufferings were meant to end. But the scariest thing was that everyone, including myself, tried not to notice them and did not really care any longer. We just passed by these wax figures, trying not to look. Then I saw a girl of about 17 years old. She was leaning on the wall, howling. Apparently she was about to die of hunger. A short, sly-looking fellow walked up to her and shook her on the shoulder. ‘Verka!’ he shouted, but she did not reply. She was still breathing. The boy leaned over her and said impatiently, ‘Come, get done already!’ I stood and wondered what he meant. Then the boy shrugged his shoulders, grumbled something and started pulling the girl’s warm boots off. He himself was wearing ankle shoes. I tried interfering and told him he was a bloody son of a bitch, that she was still alive! ‘So what, uncle,’ he replied, ‘She will die in half an hour anyway, and my feet are cold.’ I was so lost that, when I got back to my senses, the girl was already dead. My grandfather’s diary ends here. But we know that he made it safely across the lake and joined his family a few weeks later. Closing the notebook, I felt that its value was not in revealing some unknown details about the Siege: in that respect, Joseph’s diary added little to what I, a child of the glasnost era, had already known. Instead, its power lay in how it revealed universal truths about kindness, heroism and the capacity to forgive. 
My grandfather’s story started with one evacuation train and ended with another. Between those two journeys, the traveller himself had changed. The man who entered the lice-infested wagon at the beginning of the journey was irritated, but confident: he knew right from wrong and never doubted that he had his own role to play in the ultimate victory of Communism over Nazism. Yet his convoy went nowhere. Tortured by famine and uncertainty, somewhere around the middle of the way, he watched himself and his fellow passengers turning from exemplary Soviet people into, as he put it, ‘human beings and nothing more’. After months of neglect from the system he believed in so fervently, suffering absolute forlornness and alienation, my grandfather realised at last that he was not a cog in an omnipotent mechanism. He was a sliver in the flood. Kindness and compassion, as well as cruelty and selfishness, were not characteristics of social strata, nations or parties; they were integral parts of human nature. Climbing the last train was a man who no longer believed in dogmas — he wasn’t even sure of himself. But he had learned to treasure spontaneous mercy more than any kind of doctrinaire heroism. [Joseph Bassin in 1945. Photo courtesy of the author.] After joining the front in the autumn of 1942 and meeting the Victory in Czechoslovakia, Joseph went for professional training to Leipzig — a city in East Germany that was one of the world’s centres of the printing industry. Firing up the presses in shelled-out plants, he must have had frequent cause to ponder that damned question that I have heard so many times: ‘How do you, a Russian Jew, feel in this country?’ I do not know his response. But having read his diary, I am quite sure it did not involve any lust for revenge. After everything he had been through, the sheer possibility of peace, of walking on German soil as an equal, of toiling alongside Germans instead of pointing a gun at them, was a breakthrough. He had survived through the era of violence into the era of reason and compassion, and circumstances had taught him to appreciate these qualities above all others. My grandfather’s transformation from a man of ideology to a ‘simple human being’, and his liberation from the dogma of totalitarian thinking, is the greatest inheritance he could possibly leave for me; an inheritance that lay hidden for 70 years in a black wooden box. Dismissing it, turning down the privilege of loving Germany and choosing to hate it instead, as I am often expected to do, would be an insult. Not an insult to my German contemporaries, but to the memory of Joseph Bassin, survivor of the Leningrad Siege. | Polina Aronson | https://aeon.co//essays/how-could-a-russian-jew-not-hate-the-germans | |
Addiction | While one person dabbles in drugs with few ill-effects, another will become a chronic addict. What’s the difference? | Sarah started using heroin when she was 16, and soon after that she left home to live with her dealer. Heroin was one of the ways he had power over her. He was older than her, and often unfaithful. Over the three years that they were together, they frequently fought, sometimes violently. She would end up staying with friends or on the streets. She would steal to get money for heroin until he convinced her to return, partially through the promise of more drugs. Eventually she was arrested for shoplifting and sent to prison. Her boyfriend ended the relationship while she was in custody. In response, Sarah cut her wrists. It began a lasting pattern of self-harm through cutting. Sarah received treatment for her addiction in prison, and had frequent contact with mental health professionals, but she has never successfully gone without heroin for more than a few days, despite repeated efforts. She funds her habit through state benefits, loans from her mother, and theft. Her father died when she was three. Her mother raised her on her own, working two jobs to make ends meet. Her mother was and is her only stable source of support. Sarah hates herself deeply. This is a fictional case study, based on the real addicts I come across in my work. But when you picture Sarah, who do you see? One person might imagine a violent and depraved young woman, who has chosen to live on the edge of society and is responsible for her drug use and crimes. Another will see a suffering soul, someone who can’t control her desire for heroin and can’t be held responsible for the harm she perpetrates on herself or others. Of course, both images of addiction are stereotypes that a moment’s reflection should dispel. Yet they polarise and capture our collective imagination, and in reality they stop us from facing hard truths about why people become addicts. I am a philosopher and also a therapist within the National Health Service in the UK, and I often see patients like Sarah. They suffer from a range of related conditions, not only addictions but also anxiety, mood and personality disorders. Before I started working clinically, I had known people who had ‘problems’ with drugs and alcohol, and come across popular stereotypes of addicts in books and films. I didn’t know how to break the hold of these opposing images of addiction: the addict as perpetrator or as victim. But actually working with people who suffer from these conditions has started to teach me how to see beyond the stereotypes. What I began to realise was that most chronic addicts are not just addicts. They also suffer from other psychiatric disorders and come from backgrounds of adversity and deprivation, both economic and emotional. We need to think seriously about what drugs and alcohol do for people who find themselves in these circumstances. Addiction is a burden to us all, not just to addicts. It’s associated with violent crime both in the home and on the street. Forty per cent of violent crime in the US, according to the Bureau of Justice Statistics report on alcohol and crime, is committed under the influence of drugs or alcohol. Then there are the economic costs of addiction, of drug- and alcohol-related crime and policing, social and psycho-educational initiatives, and medical treatment. 
And finally there are the terrifying personal costs of addiction – lives dominated by drugs and alcohol, at the expense of work, friends, family, and the addict’s own sense of self-worth. Addiction also affects the addict’s friends and family, who suffer alongside the addict in frustration and sorrow as they helplessly watch someone they care about destroy their life. If we focus on the associations between addiction, violent crime and the socio-economic burden to society, it may be the depraved criminal that we’re more likely to imagine. This image is bolstered by historical attitudes towards addiction. For a long period in Western culture, addiction was considered a moral failing, a sign of having succumbed to the temptations of pleasure, sloth and sin. If, however, we focus on the personal costs and its effects on relationships, we’re more likely to see the suffering soul. This image of addiction is relatively recent, bolstered by our contemporary understanding of addiction as a brain disease, diagnosable by both physical and psychological symptoms. Addicts are often in denial about these symptoms, and refuse to acknowledge that they have a problem. Physical symptoms of addiction include both an increased tolerance to drugs or alcohol (so that more and more needs to be consumed to achieve the same effect) and unpleasant withdrawal symptoms if the addict stops. Psychological symptoms include cravings and an all-consuming focus on obtaining and using drugs or alcohol, alongside persistent, unsuccessful attempts to control use. More often than not, however, as time goes on and the addiction and its effects worsen, the addict becomes intensely aware of their problem, but continues to use drugs and alcohol in spite of this knowledge. It’s a cliché that the first step to recovery is to acknowledge your addiction, but it’s a cliché that doesn’t quite ring true. Chronic addicts continue to use drugs despite attempting to control their use, and in spite of recognising the effects of drugs on their lives. So it has come to seem natural to think of addiction as a brain disease. Normally, when people know that their actions have destructive consequences and that they can act to avoid these consequences, they do. Yet this is precisely what addicts don’t do. It’s understandable that we try to explain this puzzling behaviour by saying that they simply can’t. Given that we know that long-term drug and alcohol use affects the brain, altering many underlying neural processes involved in motivation and action, it’s easy to jump to the conclusion that the brains of addicts have been ‘hijacked’ by drugs — it’s their brains that make them do it. This makes it look as if addicts really can’t control their use or be held responsible. The problem is that there is considerable clinical evidence that addicts can control their use. Although the brains of addicts are indeed affected by long-term drug use, this doesn’t mean that addicts have no control. Throughout human history people have used drugs and alcohol, sometimes in truly astonishing quantities, both for pleasure and in the belief that these substances provided physical and psychological health benefits. There is evidence of opiate use as long ago as the Neolithic period. 
In ancient Greece, drinking to excess was condemned, but opium was used to help people sleep, provide relief from pain, sorrow, and disease, and possibly even soothe colic in infants. By the early modern period, laudanum, a mixture of alcohol and opium or morphine, was viewed as both a wonder drug and the height of sophistication, prescribed by doctors to the aristocracy for a range of ailments and in high doses. In the late 19th and early 20th centuries, pharmaceutical companies marketed heroin as a cough suppressant, alcoholic syrups for the nerves, and cocaine for toothache. We still use drugs and alcohol for much the same reasons. We take stimulants such as coffee and tea to keep us alert and focused. We have an alcoholic drink after a stressful day to help us relax. We take opiates for pain, which are available as codeine over the counter as well as on prescription. Almost all of us have tried alcohol and cigarettes, and 50 per cent of people in the US and UK have tried illicit drugs at least once. Many of us have had ‘problems’ with drugs or alcohol at some point in our lives, although it is only a small proportion of users who qualify as full-blown addicts. According to the US National Survey on Drug Use and Health, about 5 per cent of those who have tried drugs or alcohol at least once become alcoholics, and between 2 per cent and 12 per cent become addicted to illicit drugs, depending on the choice of drug (rates for cocaine addiction are at the low end, while rates for heroin addiction are at the high end). Usually this happens in late adolescence or the early twenties. But most of those people who are addicted as young adults have kicked the habit by their thirties — on their own initiative, without psychiatric treatment or clinical intervention. It seems that they ‘mature out’ of addiction as the responsibilities and opportunities of adult life take hold. Such large-scale, spontaneous recovery would be surprising if addiction truly was a brain disease that destroyed the addict’s capacity for controlling drug use. Rather, it seems that addicts who ‘mature out’ are able to stop when they have strong reasons to do so. Twelve-step programmes and most other forms of treatment for addiction also require addicts to decide to stop and then see that decision through. What they offer is community support, practical tips, sympathy and understanding throughout this process, in order to help the addicts resist temptation, however strong or persistent. The effort and difficulty that abstinence costs addicts should never be underestimated, but there is no treatment that can do this work on behalf of addicts or obviate the need for them to do it themselves. One of the most encouraging new treatments actually offers immediate but small monetary incentives for abstinence to help addicts remain clean. Contingency management treatment (CM), which is widely used in the US but has been tried in the UK only in recent years, offers vouchers, money, or small prizes to addicts who produce clean urine samples. Samples are submitted three times a week, with increasing monetary value offered as a reward for each clean sample. Whether you approve of the ethics of this treatment or not, CM significantly reduces the risk of disengagement from treatment, and increases periods of abstinence compared with other treatments. 
The majority of addicts can control their use when they have a powerful enough reason, and are able to choose to quit if they want to. It’s only a small minority for whom addiction is a chronic condition — something they never overcome, and might even ultimately die from. Who is this minority? Strikingly, they are usually people like Sarah, who suffer not only from addiction, but also from additional psychiatric disorders; in particular, anxiety, mood and personality disorders. These disorders all involve living with intense, enduring negative emotions and moods, alongside other forms of extreme psychological distress. Moreover, these disorders are in turn also associated with various forms of adversity, in particular low socio-economic status, childhood physical and sexual abuse, emotional neglect, poverty, parental mental illness and parental death, institutional care, war and migration. There are, of course, individual exceptions to these large-scale generalisations. Yet the minority who never overcome addiction typically suffer from additional psychiatric disorders and come from a background of adversity and poor opportunity. They are unlikely — even if they were to overcome their addiction — to live a happy, flourishing life, where they can feel at peace with themselves and with others. A now famous experiment called ‘Rat Park’, conducted by psychologists at Simon Fraser University in British Columbia in the 1970s, offers some explanation as to why this minority doesn’t overcome drug addiction. Caged, isolated rats, when addicted to cocaine, morphine, heroin and other drugs, will self-administer in very high doses, foregoing food and water, sometimes to the point of death. But when placed in a spacious, comfortable, naturalistic setting, where both sexes can co-habit, nest and reproduce, these rats forego drugs and opt instead for food and water, even when they experience withdrawal symptoms. Recent well-controlled experiments support this early finding. The majority of addicted rats will choose not to self-administer drugs if provided with alternative goods. In other words, if we give addicted rats the option of a happy, flourishing rat lifestyle, they take it. But the minority of addicts for whom addiction is a chronic condition are not given the option of a happy, flourishing human life just because they stop using. In the meantime, using drugs or alcohol might offer some relief from life’s miseries. This function is common parlance in our culture: we ‘reach for the bottle’, ‘drown our sorrows’ or get ‘Dutch courage’ when in need. For addicts who suffer from additional psychiatric disorders, drugs and alcohol offer a way of coping with the extreme psychological distress, as well as an escape from the broader hardship of life. And of course, distress and hardship are made worse by the addition of addiction to their list of struggles. Yet without any real hope for a better future, there is unlikely to be any genuine long-term incentive to give up the short-term relief on offer through using drugs. To return to Sarah’s story, imagine what her life would look like without heroin — the emotions and moods she must live with, the loneliness and anger, the self-harm, the problematic relationships, the utter lack of any self-esteem or hope for the future. Does this life give her any reason to quit? 
So, which of the images of addiction is ultimately real — the depraved criminal or the suffering soul? The answer, of course, is neither. Addicts do not simply suffer from a brain disease that removes control. They are responsible for making choices that affect their own lives and those of others, sometimes in truly terrible ways. But, given the social, economic and psychological truths of what life is like for most chronic addicts, we should pause before judging them harshly for continuing to use. To stop using drugs and alcohol once you are addicted is very, very hard, even for those who are highly motivated to do so and who have a wealth of alternatives and opportunities available. For those who do not, why would they choose to endure the hardship of quitting, alongside all the other hardships they face? Both stereotypes of addiction cast the addict as an outsider — different from the rest of us, by choice or disease. But the truth is that most of us use drugs and alcohol to some extent or other, and it is a slippery slope from socially sanctioned use to addiction. Rather than condemning or pitying addicts, we should ask ourselves who we would be, and what choices we would make, if we had the same personal histories, or suffered the kinds of psychiatric disorders associated with addiction. The solution to the problem of addiction cannot rest only with addicts themselves. It rests with all of us, as a society, in how we fight poverty, protect children from growing up in harrowing conditions that predispose them to addiction and other psychiatric disorders, and respond to the suffering of fellow human beings — even those who make poor choices, for which they themselves are responsible. | Hanna Pickard | https://aeon.co//essays/is-the-addict-a-depraved-criminal-or-a-suffering-soul | |
Ecology and environmental sciences | Geoengineers are would-be deities who dream of mastering the heavens. But are humans the ones who are out of control? | At a small conference in Germany last May, I found myself chuckling at the inability of the meeting organisers to control the room’s electronic blinds. It’s always fun when automated technology gets the better of its human masters, but this particular malfunction had a surreal pertinence. Here was a room full of geoengineering experts, debating technologies to control the climate, all the while failing to keep the early summer sun’s rays away from their PowerPoint presentations. As the blinds clicked and whirred in the background, opening and closing at will, I asked myself: are we really ready to take control of the global thermostat? Geoengineering, the idea of using large-scale technologies to manipulate the Earth’s temperature in response to climate change, sounds like the premise of a science fiction novel. Nevertheless, it is migrating to the infinitely more unsettling realm of science policy. The notion of a direct intervention in the climate system — by removing carbon dioxide from the atmosphere, or reflecting a small amount of sunlight back out into space — is slowly gaining currency as a ‘Plan B’. The political subtext for all this is the desperation that now permeates behind-the-scenes discourse about climate change. Despite decades of rhetoric about saving the planet, and determined but mostly ineffectual campaigns from civil society, global emissions of carbon dioxide continue to rise. Officially, climate policy is all about energy efficiency, renewables and nuclear power. Officially, the target of keeping global temperatures within two degrees of the pre-industrial revolution average is still in our sights. But the voices whispering that we might have left it too late are no longer automatically dismissed as heretical. Wouldn’t it be better, they ask, to have at least considered some other options — in case things get really bad? This is the context in which various scary, implausible or simply bizarre proposals are being put on the table. They range from the relatively mundane (the planting of forests on a grand scale), to the crazy but conceivable (a carbon dioxide removal industry, to capture our emissions and bury them underground), to the barely believable (injecting millions of tiny reflective particles into the stratosphere to reflect sunlight). In fact, the group of technologies awkwardly yoked together under the label ‘geoengineering’ have very little in common beyond their stated purpose: to keep the dangerous effects of climate change at bay. Monkeying around with the Earth’s systems at a planetary scale obviously presents a number of unknown — and perhaps unknowable — dangers. How might other ecosystems be affected if we start injecting reflective particles into space? What would happen if the carbon dioxide we stored underground were to escape? What if the cure of engineering the climate is worse than the disease? But I think that it is too soon to get worked up about the risks posed by any individual technology. The vast majority of geoengineering ideas will never get off the drawing board. Right now, we should be asking more fundamental questions. 
Geoengineering differs from other approaches to tackling climate change not in the technologies it seeks to deploy but in the assumptions it makes about how we relate to the natural world. Its essence is the idea that it is feasible to control the Earth’s climate. It is a philosophy, then — a philosophy that characterises the problem of climate change as something ‘solvable’ by engineering, rather than a social phenomenon emerging from politics and culture. Thinking about it in this way — as a set of assumptions about how to tackle climate change rather than a set of technologies — makes it easier to see why the ethical issues embedded in the concept are trickier than any scientific disputes about the side effects of this or that piece of machinery. Here is a project that elevates engineers and their political masters to the status of benevolent deities; a project that requires us to manage a suite of world-shaping technologies over the long haul. Do we have either the desire or the capacity to do that? As the late American climate scientist Stephen Schneider wrote in 2008: ‘Just imagine if we needed to do all this in 1900 and then the rest of 20th-century history unfolded as it actually did!’ In other words, world history is volatile enough even without the question of how to manage the global climate. Let’s think about how disputes might play out. What if I, as the ruler of a nation beginning to feel the adverse effects of climate change, unilaterally decided to start reflecting sunlight back into space? What if this had the effect of altering the rainfall in your nation? It is not difficult to see how quickly the Cold War logic of imagined threats and counter-threats would creep into the geopolitics of climate management. The lessons of a film such as Dr Strangelove, or: How I Learned to Stop Worrying and Love the Bomb (1964) could well apply to meteorology as much as they do to nuclear physics. Even short of provoking military conflict, it is not obvious whose consent should be sought before a government, or even a wealthy individual, decides to embark on ‘Experiment Earth’. What’s more, if you believe, as I do, that the story of climate change is at root one of injustice, it’s even less clear how a high-tech geoengineering industry — inevitably directed by a consortium of wealthy nations — would do anything but exacerbate the division between those who are protected from climate change and those who must suffer its consequences. These political questions obscure a still deeper issue. If geoengineering involves remaking the global climate, might it also remake the connection between humans and nature? We have always existed in a strange kind of equilibrium with the natural world (whatever that is). Think of urban green spaces: ‘nature’ might be found in them, but we probably wouldn’t call the spaces themselves natural. On the other hand, plenty of human innovations have made their way into our idea of the ‘natural order’. The classic example is smallholder agriculture: what was once (albeit many thousands of years ago) considered the height of mastery over the elements is now an archetypal image of humans living in harmony with their environment. The lines between nature and artifice have always been blurred. 
They only grow more so as our grand technological narratives advance — as we unlock the code to our own genetic identity or build life from the ‘bottom up’ using nanoscale components. We can admit all of this and still insist that there are deep-rooted, widely shared intuitions about which elements of the world can be called ‘natural’. It is also clear that a broad range of people share a sense that certain aspects of the natural world lie — or should lie — beyond human influence. When scientists are accused of ‘playing God’, this criticism is as likely to come from an atheist as a religious person. If building life from the bottom up seems, to many, like overstepping the mark, this is not necessarily a theistic judgement. New technologies demand self-reflection about who we are and where we fit into the world. Geoengineering is only the latest idea to prompt that kind of soul-searching. Yet it is different from its predecessors in one important regard: its scope. Many scientists now believe that the industrial revolution marked the beginning of a new era defined by human dominance over the Earth’s ecosystems — the ‘anthropocene’. More than 20 years ago, the American environmentalist Bill McKibben wrote a book called The End of Nature (1989). He made a devastatingly simple argument: that the natural world could no longer be considered independent from human influence. Choices made by humans were shaping the fundamental nature of the planet itself, not just tinkering around the edges. As McKibben puts it: By the end of nature I do not mean the end of the world. The rain will still fall and the sun shine, though differently than before. When I say ‘nature’ I mean a certain set of human ideas about the world and our place in it. Geoengineering represents a very different set of ideas about the world and our place in it. A glance at the popular metaphors beginning to frame the debate leaves little doubt about just what kind of ideas they are. One popular rhetorical approach, for example, is to describe the planet as a patient, in need of treatment. It’s an image that Sir Paul Nurse, the President of the Royal Society, explored in a letter to The Guardian in September 2011: Geoengineering research can be considered analogous to pharmaceutical research. One would not take a medicine that had not been rigorously tested to make sure that it worked and was safe. But, if there was a risk of disease, one would research possible treatments and, once the effects were established, one would take the medicine if needed and appropriate. Our ‘sick planet’ is presented as in need of medicine, something that only we, the clever humans, can dispense. Through careful, responsible research, we can determine a cure — never mind the fact that our own consumption is the proximal cause of the disease. Earlier in his letter, Nurse suggests (plausibly) that there might come a time when we are forced to consider geoengineering. But the claim that we might need geoengineering because we simply can’t rein in our consumption implies a stark and somewhat disturbing truth: the natural world is widely considered more malleable than our own wishes and desires. We even have a name to capture this self-serving inflexibility: human nature. Of course, there is a difference between saying that people can’t change their ways and the argument that we shouldn’t have to. 
In the US, right-wing climate change denial organisations such as the Heartland Institute have thrown their weight behind geoengineering as a ‘cost-effective’ solution to climate change, flipping neatly from denying that the problem exists to advocating a solution to it. As is often the case with climate change scepticism, rejection of the science appears to be a proxy for dislike of the policy implications. For those who prefer geoengineering to the ‘social engineering’ of behaviour change, controlling the climate seems like a better deal than being controlled themselves. Then again, the urge to control the weather runs deep in human societies, as the American historian of science James Rodger Fleming shows in his fascinating book Fixing the Sky: The Checkered History of Weather and Climate Control (2010). Long before climate change was even a concept, technocrats, entrepreneurs and ‘rainmakers’ were itching to get their hands on the levers that controlled the heavens. Indeed, Fleming’s analysis of medieval ‘hail archers’, hurricane cannons and, more recently, cloud seeding, shows how illusory previous attempts to dominate nature have been, even on a small scale. Rarely has the enthusiasm for weather-engineering been matched by measurable, positive outcomes. There is an important lesson here for would-be geoengineers: if we can’t even manipulate the local weather successfully, what hope for controlling the global climate? But do geoengineers really want to seize the reins of the world’s atmosphere, or are they just regular guys who want to help combat climate change? Most of them, after all, claim that their interest in geoengineering is driven by necessity. And it is true that few would attempt such an outlandish enterprise unless all the other options had been exhausted. The problem is that all the other options have not been exhausted: people are simply exhausted from trying. So perhaps, despite the increasingly popular story that geoengineering is a necessary Plan B, the whole project only really makes sense as a kind of utopian scheme, pursued for its own sake. A reasonable question to ask in that case would be: ‘Whose utopia?’ There is in fact a small group of individuals — dubbed the ‘geoclique’ — who have led the call to intensify research on geoengineering. Contrary to some of the more excitable commentary on their motives, I do not believe that they are secretly promoting a political vision of the future. But their idea of a ‘pragmatic’ response to the inadequacy of current environmental policies is still utopian in character. After all, is trying to recreate a ‘better climate’ really so different from the political movements that sought to manipulate societal structures to make a ‘better world’? Moreover, if we accept our own overconsumption as an inevitability, we might slide into acceptance of the morally questionable mantra for solving the climate problem: we don’t have to change ourselves, because we can change nature. There are of course many in the environmental movement — and beyond — who oppose the logic of this approach. The green movement has always had at least one foot in the spiritual fields of Romanticism, with its reverence for the sanctity of nature. Nevertheless, some environmentalists have begun to embrace the Enlightenment logic of geoengineering as a Devil’s bargain. 
To ‘neo-environmentalists’ such as Stewart Brand or Mark Lynas, both critics of the green movement who view rapid technological change as the only feasible way to prevent catastrophic global warming, the prospect of geoengineering is no longer anathema. The more I reflect on what geoengineering is, and what it represents, the more it feels like the quest to control the climate is not really about climate change, or even about the climate at all. What it’s really about is the ancient, reciprocal loop between the idea of ‘nature’ and the question of where we — the humans — fit into it. Can we take control of the winds, the rain and the sun, and model the climate to our liking? For those who imagine that we can, geoengineering holds out the promise of an answer to climate change that sidesteps the inconvenience of societal reform. But for those who doubt our Earth-management credentials, geoengineering is worse than simply an ill-advised ‘quick fix’. It is the ultimate expression of a seemingly insatiable desire: to bend nature to the will of human nature, whatever the consequences. | Adam Corner | https://aeon.co//essays/modern-geoengineers-are-part-rainmaker-part-dr-strangelove |