id | url | title | contents |
|---|---|---|---|
0bf9fcb58858fb8fed24b9841d969b4b | https://www.forbes.com/sites/jonathantreiber/2021/03/17/is-the-grocery-industry-doomed-to-make-the-mistakes-of-the-past/?sh=15a75b883c72 | Is The Grocery Industry Doomed To Make The Mistakes Of The Past? | Is The Grocery Industry Doomed To Make The Mistakes Of The Past?
Peapod product selector Bill Brimfield fills an order May 17, 2001 at a Peapod warehouse in Niles, IL. Peapod, an online grocery supermarket, plans to upgrade its web site with new technology to improve services, the company told shareholders. (Photo by Tim Boyle/Getty Images)
If you were a grocery chain or CPG brand in 2020, you had to have felt pretty good about the way things were going. With in-person restaurant dining off the table and a greater focus on home cooking, grocery stores saw substantial profits last year. What began as a $16B profit spike as people began panic buying at the beginning of the pandemic eventually settled into a sustained $6B increase in revenue compared to pre-Covid months.
This massive increase in demand put strain on supply chains, altered promotion and discounting strategies (in short, there was little need for either), and began creating the conditions that would let grocery chains evolve their e-commerce offerings. 2021 and 2022 may be a different story entirely, however. As reopening measures settle in, grocery chains and CPG brands are faced with a crucial question: How do you manage long-term growth after a boom year?
There are a number of avenues that can affect how grocery chains move forward in the coming years; the digital revolution could transform the industry, or it could create the conditions for mass consolidation as smaller chains fail to keep pace with new demands, instead relying on old habits in promotions that could undo the gains made in 2020.
How Grocery Chains Are Responding to the Windfall of 2020
The grocery industry's sustained revenue growth in 2020 has opened opportunities for investment that previously didn't exist. Traditional in-store promotions are typically funded by brands, but with little need to discount amid the increased demand of the pandemic, retailers enjoyed higher gross margins without having to deduct discounts and recoup expenses from brands.
This influx of capital has put large-scale investments in a number of technologies within retailers' reach for the first time. One example is updating POS devices to encourage contactless payments, which could have a collateral impact on digital couponing.
But perhaps most importantly, this revenue influx is allowing grocery chains to expand into the e-commerce delivery realm and invest in logistics to better serve online grocery shopping for in-store pickup. While many smaller grocery chains are offloading some of these delivery services to specialized companies like Instacart, it may merely be a stopgap until they can manage e-commerce platforms well enough to facilitate full-on grocery delivery in house.
Larger grocery chains like Kroger may have enough market share and capital to push far enough into e-commerce to compete with larger, more established players like Amazon, Walmart and Target, but for now we're still in the early days. The growing pains of this transition could make it a major investment gamble for many retailers, particularly ones without the structural support to handle expansion into this highly competitive arena.
As this shift to e-commerce kicks off, grocery chains are also continuing to evolve loyalty and retention programs with an eye on a future in which sales may regress to pre-pandemic levels. Some of these loyalty programs, like Hy-Vee Plus, are looking toward a premium membership model akin to Amazon Prime or Costco, in which customers pay a monthly fee in exchange for specialized discounts and free shipping offers.
At their core, both the play for e-commerce and the expansion of loyalty programs are means of customer retention, and retention will be critical to protecting the gains made during 2020. As restaurants reopen, we will likely see some pendulum swing back to in-person dining; or, in the worst-case scenario for grocers, pent-up demand for out-of-home dining could send the grocery industry into a negative growth trajectory in the near-term. If that happens, will grocers pivot to a more desperate strategy, particularly in pricing and promotions?
A Risk of Repeating the Past
The truth is, centuries-old, stable businesses with thin margins do not pivot quickly. Grocers deserve credit for executing herculean strategies in e-commerce and curbside pickup seemingly overnight, but only the largest companies may have the resources to survive the whiplash that will ensue immediately following the pandemic.
If the strategies implemented for customer retention, whether pivoting to e-commerce or enhancing loyalty programs, aren't successful, retailers and brands may fall back into old habits of deep, blanket discounts to keep customers coming back. And because e-commerce grocery retail is still in its infancy, the scramble to get customers in the door may also create the conditions for broad and unwieldy discount strategies.
Online promotions and discounts are overwhelmingly being funded by the retailers themselves in these early stages of e-commerce market development. Retailers currently have thin margins to absorb new customer discounts such as 20 percent off a first order or $50 off the first three orders. Further, retailers haven’t needed to run these types of offers in stores because location and convenience have driven new customer acquisition, and promotions are historically funded by brands. As online grocery matures, brands will undoubtedly fill the void to provide sufficient funds for grocers to offset direct promotional expenses, yet it will likely take years for that balance to materialize, leaving retailers holding the promotion “bag” in the near-term, when they are most financially vulnerable.
The problem is that e-commerce economics aren’t sustainable in their current form in the short-term. In these early days, market share is the name of the game, which means sales growth over profits as big up-front investments are being made in the hopes of a mid-term payoff. These large investments are being made based on rosy forecasts for sustained growth, however, and many retailers may face a reckoning if the payback period becomes longer due to demand shifts into other discretionary categories post-pandemic.
Unless retailers can convince brands to fund their excessive use of basket-wide discounts to woo and retain customers, grocers are ill-prepared for the risk-taking necessary to compete and sustain losses in the online grocery war. As losses mount and the sales forecasts become more dire in the short-term, many grocers may see no choice but to abandon e-commerce altogether. Worse yet, many smaller chains may be squeezed financially and seek partnerships, mergers or acquisitions to stave off financial ruin.
It’s a precarious time to be a grocery chain despite how great recent financial performance may look. Grocers, like many big retailers, need to invest 12 to 24 months out, yet consumer demand patterns can change overnight as soon as consumers feel comfortable eating out at restaurants again. Calculated risk-taking will be imperative to balance current investments and a tumultuous near-term outlook for the industry. There are simply too many consumer variables that can’t be controlled. Regardless of resources, retailers need to make the best risk-adjusted bets for their business. The boom times in grocery will likely come to an end, and only those with prudent risk-taking and financial management, along with a passionate approach to delivering customer value, will thrive through the next wave of the pandemic retail rollercoaster.
|
ec4a87458ab5380f6041fa664d2e138a | https://www.forbes.com/sites/jonathanwai/2021/02/25/can-college-visits-improve-college-aspirations-findings-from-a-randomized-experiment/ | Can College Visits Improve College Aspirations? Findings From A Randomized Experiment. | Can College Visits Improve College Aspirations? Findings From A Randomized Experiment.
For students whose parents went to college, or who grow up around college campuses, it is likely a given that they will at least apply to college and very likely attend. For students who come from rural areas or places far from a university campus, however, there may be less of an expectation to attend college, or even a barrier due to unfamiliarity.
Getting more students onto college campuses has become an important policy concern, especially given the economic benefits of attending college and the need to improve the social mobility of disadvantaged students. First-generation students, however, may lack the “cultural capital” – the cultural knowledge and social assets – they need to effectively navigate the college application and attendance process. Perhaps not understanding what it is actually like to be on a college campus presents a nontrivial psychological barrier to students seeing themselves on campus in the future.
So what happens when you expose underrepresented students to college visits and let them set foot on and experience a campus? That's exactly what researchers Elise Swanson, Katherine Kopotic, Gema Zamarro, Jonathan N. Mills, Jay P. Greene, and Gary Ritter did in a paper just published in AERA Open titled “An Evaluation of the Educational Impact of College Campus Visits: A Randomized Experiment.”
The researchers recruited a total of 1,478 students across participating schools and randomized students within schools to either a treatment or a control condition. The control group received an information packet about college, whereas the treatment group received the information packet and also visited a flagship university three times during the 8th grade.
These visits included a variety of on-campus activities. In brief, the first visit included a college information session and campus tour, the second visit focused on exposing students to different departments and degree paths available, and the third visit aimed to foster a sense of campus spirit by having students attend a university baseball game or compete in an on-campus scavenger hunt. More detail on the full intervention can be found in the paper.
The authors summarize their findings in this way: “We provide some of the first rigorous evidence that an intervention designed to introduce prospective students to the experience of college through field trips to a college campus can improve students’ knowledge about college (effect size of 0.14), self-efficacy related to attending and succeeding in college (effect size of 0.15), grit (effect size of 0.12), and academic diligence (proxied by item nonresponse; effect size of 0.21) above the effect of providing written information about college. We also find that campus visits may make students more likely to engage in conversations about college options and preparation with school personnel (effect size of 0.16). Our estimates represent small- to medium-sized effects within the literature on education interventions.”
Gema Zamarro notes that "The study suggests that visiting campus and interacting with current students, faculty, and university staff increased desired behaviors that help better prepare students for college."
The authors conclude: “To close opportunity gaps in postsecondary enrollment and degree completion, researchers should find scalable interventions that can be implemented across a variety of contexts. In this study, we explore the ability of a relatively low-cost intervention—three field trips to a local public university—to affect student attitudes and behaviors toward college. Our findings regarding the short-run impacts of this intervention suggest that this field-trip-based intervention could meaningfully affect student college decisions and preparation. This approach could be adopted by school districts interested in promoting college access for their students and could find support among universities interested in increasing their socioeconomic diversity or student population overall.”
Lead author Elise Swanson notes that the research team will follow all these students through high school into college to analyze the impact on long-term outcomes.
|
186d908a7f6ddb426594dc725b3c4691 | https://www.forbes.com/sites/jonathonkeats/2012/08/24/can-pussy-riot-redeem-political-art/ | Can Pussy Riot Redeem Political Art? | Can Pussy Riot Redeem Political Art?
Supporters of punk group Pussy Riot outside the Church of Christ the Saviour in central Moscow on August 15, 2012. (Image credit: AFP/Getty Images via @daylife)
In 1985, the Museum of Modern Art presented a 'comprehensive' exhibition of contemporary sculpture and painting in which more than 92 percent of the participants were men, provoking some female artists to stage a protest. Nobody paid them much attention. So the women donned gorilla masks and began to rally for gender equality anonymously, calling themselves the Guerrilla Girls, a group identity that made them collectively famous. From our present-day perspective, their agitprop theatrics would appear to be a dress rehearsal for the apotheosis of Pussy Riot. Yet for all the parallels – a conceptual debt Pussy Riot acknowledges – there is an essential difference. Whereas the Guerrilla Girls were picketing to be recognized for their art, Pussy Riot's "punk prayer" for the end of the Putin era, staged inside Moscow's Cathedral of Christ the Savior, is a brilliant work of political art in its own right.
Political art has a bad reputation, and there are two good reasons to be wary of it: political and artistic. Activists fairly argue that most political art fails to reach an adequate audience to inspire real change. And connoisseurs legitimately claim that most political art sacrifices aesthetics for rhetoric.
Free Pussy Riot - Demo Berlin (Photo credit: Grüne Bundestagsfraktion)
Pussy Riot has avoided the first pitfall by performing outside the safety zone of a museum or concert hall, spaces where self-selecting audiences tend to be sympathetic to artists' ideas before even entering the door, or able to frame fiery polemic as harmless entertainment. Invading Moscow's holiest church, members of Pussy Riot lodged their prayer in front of a clergy that had made the church into a political entity by ensuring Putin's ascent, and in front of the public that had the power to bring down Putin through their vote. In other words, performing inside the church was not merely provocative, but perfectly logical. The clergy had already desecrated the cathedral by making the Russian Orthodox Church into a secular power broker. The members of Pussy Riot were merely using the cathedral in accordance with this political role and, most ingeniously of all, appropriating prayer as a political act that really could work miracles by awakening an apathetic public.
Members of 'Pussy Riot' Nadezhda Tolokonnikova (L), Maria Alyokhina (R) and Yekaterina Samutsevich (C) sit behind bars during a court hearing in Moscow on July 30, 2012. (Image credit: AFP/Getty Images via @daylife)
How Pussy Riot performs is also important, seamlessly integrating rhetoric with aesthetics. Their masked anonymity reflects how Putin views the public – as an undifferentiated mass rather than as individuals with rights – while simultaneously demonstrating the revenge that a faceless public can take. Three women have been sent to prison, but Pussy Riot is not three members fewer. On the contrary, there are more masked rioters than before. The band's slogan – "We are all Pussy Riot" – sounds more plausible every day, and since they all look alike, their numbers appear infinite. Putin's vision of leadership is recast as an autocrat's worst nightmare.
Though often used in political protest, anonymity is uncommon in modern art, which is valued for uniqueness, a byproduct of artists' individuality. In that respect, Pussy Riot's performance bears less resemblance to contemporary work than to old church iconography, which proliferated as if by magic. Pussy Riot's output is likewise irrepressible. The political art of the future will be anonymous and emergent.
follow Jonathon Keats on Twitter
|
8a7c98b7f05670ee98d9330edda0507d | https://www.forbes.com/sites/jonathonkeats/2012/09/20/is-barry-mcgee-a-graffiti-art-sellout/ | Is Barry McGee A Graffiti Art Sellout? | Is Barry McGee A Graffiti Art Sellout?
Barry McGee: One More Thing, May 7–August 13, 2005 (installation view); Deitch Projects, 18 Wooster Street, New York; courtesy Deitch Archive, New York. Photo: Tom Powel Imaging
Invited to exhibit in one of New York's premier galleries a dozen years ago, Barry McGee responded by upending several trucks. He then bombed the wreckage with spraypaint and made it the centerpiece of an installation that confronted art collectors with an ersatz urban ghetto mimicking those he ordinarily vandalized on the downlow. One of the most memorable exhibitions in the 14-year history of Deitch Projects, the spectacle became the basis of McGee's nearly unparalleled popularity, culminating in his current midcareer retrospective at the Berkeley Art Museum. Through it all, he's struggled to reconcile mainstream appeal with his stated aim "to carry on pissing people off". It's been a losing battle. His furiously overturned trucks are now crowd-pleasers.
Installation view of Barry McGee, on view at the UC Berkeley Art Museum and Pacific Film Archive from August 24 through December 9, 2012. Photo: Leif Hedendal
McGee's notion of graffiti is completely conventional. Like many graffiti artists, he's motivated by issues of ownership. McGee is enraged that public spaces are visually dominated by the advertising messages of private enterprise, and determined to take back the streets through acts of vandalism. (Tagging is an assault on the status quo. The chaotic potential of graffiti is liberating both for the author and for society as a whole.) Where McGee differs from most graffiti artists – and artists generally – is in terms of sheer technical ability. He is a virtuoso draughtsman, whose caricatures of indigents are on par with those of modern masters such as George Grosz. Seen on the street – often executed in monochrome spraypaint – they instill empathy for people passed over by society and all too often overlooked by us individually.
Also unlike the work of most graffiti artists, McGee's drawings are as meaningful inside the gallery or museum as on the street. As illicit outdoor pieces, his caricatures disorientingly seduce passers-by with their unexpected beauty, counteracting the blind disdain that drives gentrification. Indoors, executed at much smaller scale as clusters of pen-and-ink drawings on scraps of paper and discarded liquor bottles, the caricatures vibrantly compress the urban environment in all its human complexity. As McGee explained in a 2008 interview, "I see a really good tag on a building, a man passed out in the middle of the street, a couple hugging, a cop arresting a panhandler. I’m interested in how all these things are happening in one block." In other words, this is not the sort of work likely to "carry on pissing people off". It isn't a rebuff, but an invitation.
Barry McGee: Untitled, 2005; acrylic on glass bottles, wire; dimensions variable; Lindemann Collection, Miami Beach. Photo: Colin M. Day
Tagging upturned trucks obviously lacks that fine-tipped nuance. Moreover, while raw vandalism can serve a political purpose on the street, transplanting it to a gallery or museum trivializes the gesture. The artificial mayhem doesn't menace art aficionados. On the contrary, the ghetto is tamed, transformed into gritty ambiance for a pile-up funhouse.
To his credit, McGee has taken care not to commodify vandalism. One reason the trucks appeal to him is that they can't be collected like paintings. But his good intentions hardly matter if they lead him to produce frivolous entertainment. Most of the world's graffiti doesn't belong in a museum because it derives meaning from the street. With his caricatures, McGee has found a subject and technique that thrive inside. Pleasing crowds with overturned trucks doesn't necessarily make him a sellout. But as an artist, he sells himself short.
follow Jonathon Keats on Twitter... and read his previous posts on artists including Banksy, Roy Lichtenstein, and Gustav Klimt
|
fe254903ad96290a8450d6c4cdf21e87 | https://www.forbes.com/sites/jonathonkeats/2012/10/18/how-an-art-forger-duped-the-nazis-by-counterfeiting-the-middle-ages-book-excerpt/ | How An Art Forger Duped The Nazis By Counterfeiting The Middle Ages [Book Excerpt] | How An Art Forger Duped The Nazis By Counterfeiting The Middle Ages [Book Excerpt]
Rathaus und Marienkirche, Lubeck. Credit: Wikipedia via Zemanta
In the early hours of Palm Sunday, March 29, 1942, the Hanseatic city of Lübeck burst into flames. The conflagration was no accident, but the consequence of a strategic decision the previous month in London. Because British air raids on German factories were missing the mark – seldom striking within five miles of target – Winston Churchill decided instead to wage war on the "morale of the enemy civil population" by carpet-bombing whole cities. Mainly built of dry old wood, and essentially undefended, ancient Lübeck was prime real estate for the Allies' first airborne auto-da-fé.
Two hundred and thirty-four planes dropped nearly three hundred tons of incendiary bombs, burning thousands of homes, as well as the merchant's quarter, the town hall, and several historic churches renowned throughout Europe for their Gothic architecture. The surprise attack spooked even Joseph Goebbels, the Nazi minister of propaganda, who confided in his diary that such raids had the potential to break the people's will. The Nationalsozialistische Volkswohlfahrt handed out oranges and apples, and the Luftwaffe retaliated with the so-called Baedeker Blitz, vowing to obliterate every town with a three-star rating in Baedeker's Guide to Great Britain.
Yet the Lübeck people were less impressed with fresh fruit – let alone cultural vengeance in Exeter and Bath – than with an unexpected revelation inside their own 13th century Marienkirche. Hot enough to melt the church bells, the Palm Sunday fire peeled five centuries of whitewash off the walls, exposing enormous Gothic frescoes painted when the building was erected. Dubbed "the miracle of Marienkirche", the discovery was sheltered under improvised roofing until the war ended and structural repairs could begin.
By the summer of 1948, the church was fully enclosed again, and a generous sum of 88,000 marks was apportioned for the celebrated restorer Dietrich Fey to conserve the precious murals. Even from the ground, some twenty-five yards below the sooty apostles, people could see that the frescoes' condition was delicate. Climbing the scaffolding, Fey's assistant Lothar Malskat confirmed their gravest concerns. Scarcely a shadow of the original paint remained, as he later recalled, "and even that turned to dust when I blew on it." Preserving the miracle of Marienkirche would require a miracle-worker.
Three years later, on the Lutheran church's seven hundredth anniversary, the West German Chancellor Konrad Adenauer stood in the nave amid ministers and dignitaries. "Das ist erhebend, meine Herren!" he declared – This is uplifting! – and he lifted up his arms to rows of ten-foot-tall Gothic saints. Radiant in red, green, and ocher, the Marienkirche miracle became a sort of solace for a whole downcast nation. As in all miracles, however, there was an element of the inexplicable. Though nobody cared to examine the recent past, a comparison between the murals and photographs taken in 1942 showed that some of the saints had moved, and that Mary Magdalene had lost her shoes.
Dietrich Fey did not so much earn his reputation as inherit it. His father, Professor Ernst Fey, was a respected art historian and restorer in Berlin, whose prestige was bolstered in the early '30s by the rise of the Nazi party and his talent for ingratiating himself with such self-styled connoisseurs as Luftwaffe commander-in-chief Hermann Göring. Accompanied by his son, Professor Fey restored paintings in churches throughout Silesia. He imparted to Dietrich his historical expertise, and the young man shared his aptitude for courting patrons. But Dietrich Fey lacked his father's touch with a brush. Securing important ecclesiastical commissions in the medieval Upper Silesian towns of Oppeln and Neisse, Fey und Sohn were in urgent need of an assistant by 1936, when a twenty-four-year-old house painter named Lothar Malskat came asking for work.
The haggard appearance that Malskat had acquired by living on park benches belied his prestigious background. In his native East Prussia, he'd studied art at the Kunstakademie Königsberg, where his professors praised him for his "extraordinary, almost uncanny versatility." Filled with optimism, he moved to Berlin, seeking fame, finding anonymity. Professor Fey put Malskat to work whitewashing his home. In return for the labor, Fey lent him books on ecclesiastical art. And gradually, in the old churches of Oppeln and Neisse, the professor taught Malskat his craft.
Then in 1937 Fey und Sohn brought Malskat to the ancient town of Schleswig, to assist them in a restoration of profound historical significance. The Schleswig cathedral, St. Petri-Dom, originated in the twelfth century as a Romanesque basilica, and gradually grew in physical stature as the province of Schleswig-Holstein increased in worldly power. St. Petri-Dom was the seat of bishops for half a millennium, and the burial place of King Frederick I of Denmark, whose tomb was carved in the 1550s by the renowned Flemish sculptor Cornelis Floris de Vriendt.
Each stage in the cathedral's development is preserved in distinctive artwork. The earliest images, on the walls of the cloisters, or schwahl, were first painted around 1300. With the passage of centuries, though, the biblical scenes were gradually degraded by dampness, to such an extent that in 1888 church authorities hired the painter August Olbers to mend the damage.
Praised at the turn of the century, by 1937 the repairs were deemed a calamity because Olbers had restored by repainting. He could scarcely be blamed. By the standards of his time, restoration meant renovation, a perfectly reasonable (if slightly naïve) notion: The purpose was to let people see again what once had been. In the 1930s, a rather different (though equally naïve) principle was in place, that the restoration must in no way impinge upon the original work. Writing about the conservation of ecclesiastic painting in 1926, the Cologne art historian Otto H. Förster articulated the new orthodoxy. "There must be no element of addition, completion or other conjectured reconstitution of any supposed original state," he wrote. Olbers was guilty of all three affronts.
Down in the schwahl, Malskat and the Feys set to work, attempting to reclaim history by scraping away the paint with which Olbers had tried to recapture the past. But subtracting what their predecessor had done – whether on account of Olbers' pigments or the Feys' incompetence – left almost none of the original paint. A nearly seven-hundred-year-old national treasure had vanished, and Ernst Fey was legally responsible for the disappearance.
Most likely Fey was the one to think of a fix. Unquestionably Malskat was the one who achieved it. Over the next several months, the erstwhile housepainter whitewashed the brick, discoloring his lime with pigment to give the walls an ancient tint. Onto this fresh surface he painted freehand his own version of the murals. Necessarily these were based on Olbers' nineteenth century restorations, reverse engineered to approximate the early medieval originals by reference to period examples in the professor's catalogues. Drawing his figures in earthtones, Malskat took up the spare 14th century style with preternatural ease and an utter lack of inhibition. He rendered his father as a prophet, and gave Christ the face of an old classmate. For the Virgin Mary, he had to look farther afield to find a suitable model, choosing a woman already widely worshipped: the Austrian movie star Hansi Knoteck. Ernst Fey then aged the contour drawings using a procedure he called zurückpatinieren – a fancy word for rubbing them with a brick.
* * *
The critical response to the Schleswig restoration was ecstatic. Especially influential was the endorsement of Professor Alfred Stange, an eminent art historian at the University of Bonn who'd previously instructed the Nazi elite at the Reichsführerschule and remained a close confidant of Alfred Rosenberg, the chief party ideologue. Praising Fey for orchestrating a restoration "as restrained as it was careful," Stange lauded the reconditioned murals as "the last, deepest, final word in German art." And so they became in 1940, when Reichsführer-SS Heinrich Himmler ordered that Stange's illustrated book, Der Schleswiger Dom und seine Wandmalereien, be distributed to all German schools.
Within the Nazi context, the schwahl's educational value was significant. According to Stange, the paintings represented "an excellent demonstration of the ties, permanent because they spring from nationality, which bind Schleswig to the Saxonian-Westphalian area and its art." In other words, the murals could be used to promote Third Reich nationalism. Perhaps more important, the figures conformed to appropriate racial stereotypes, confirming the purity of the German bloodline. As Malskat later put it in an interview with the Hamburger Abendblatt, "I had to paint the apostles as long-headed Vikings because one did not want Eastern round-heads."
The greatest zeal, though, was reserved for the so-called Schleswiger Truthähnbilder, eight paintings of turkeys embellishing a depiction of the Massacre of the Innocents. The turkeys were first pointed out by an independent historian named Freerk Haye Schirrmann-Hamkens in a 1938 article for a local newspaper. Schirrmann-Hamkens brought up the turkeys because their appearance in a mural allegedly painted circa 1300 was surprising: Turkeys are New World birds believed to have first been introduced to Europe by the Spaniards in the 1550s. Of course Schirrmann-Hamkens couldn't question the authenticity of paintings that were under Himmler's protection. Instead he used the paintings to question history. By his reckoning, the Schleswiger Truthähnbilder showed that Vikings had discovered America – and brought back gobblers to the Fatherland – centuries before Christopher Columbus was conceived.
Schirrmann-Hamkens' theory was bound to be popular. Already the Nazis were championing a 1925 tome by a Danish librarian arguing that the German explorer Didrik Pining had reached America in 1473. A 13th century Nordic conquest of America that brought turkeys to Schleswig was even better, serving even more firmly to establish the Teutonic supremacy of the German race (not to mention Germans' Viking pedigree). "The portrayals are based on a high degree of personal observation," Stange wrote of the turkeys in his 1940 essay. ("They are not, as so often, borrowed from reference books," he added, lest anybody think that the Vikings had merely raided a library.) Turkeys in early medieval Germany became a part of the Nazi orthodoxy, and were put to work on behalf of the Third Reich propaganda machine. "Aryan seafarers went to America long before Columbus did," a guidebook to St. Petri-Dom advised tourists. "Incidentally, Columbus is the descendant of Spanish Jews from Barcelona."
Dissent came from an unexpected source. Nearly eighty years old when Malskat's restoration was complete, August Olbers emerged from retirement to assert that the Schleswiger Truthähnbilder weren't proof of circumnavigation by Vikings because he himself had painted them in the late 1880s. Olbers explained that he had not intended to fool anyone. Unable to discern what had originally filled the wall space beneath the Massacre of the Innocents, and loath to leave it empty, he'd come up with a motif of foxes and turkeys to symbolize the guile and gluttony of the murderous King Herod.
Malskat had seen Olbers' anachronistic turkeys and, untutored in ornithology, assumed that they belonged to the original medieval composition. He'd so liked the look of them that he'd doubled their number to eight. The zeitgeist of the Third Reich covered his mistake. In fact, even Olbers was unable to discredit the Fey und Sohn restoration, and debunk the Viking legend. His recollections were disparaged as senile delusions, his memory challenged by experts who dutifully quashed their qualms about Fey's restoration. They endorsed the St. Petri-Dom forgeries as a sort of pious fraud – a splinter of the Holy Cross or a Shroud of Turin for the National Socialist state religion. Arcane art history could be debated, but the murals had become politically sacrosanct, and professing faith in their authenticity was tantamount to believing in the Fatherland.
[Excerpted from Forged: Why Fakes Are The Great Art Of Our Age, by Jonathon Keats, forthcoming from Oxford University Press in January. Part II of the Malskat saga can be read here.]
Follow Jonathon Keats on Twitter... and reserve a copy of Forged: Why Fakes Are The Great Art Of Our Age on Amazon.
|
b1b8572fd34d97c17ef3a38a6db17963 | https://www.forbes.com/sites/jonathonkeats/2012/11/01/why-cai-guo-qiang-is-good-for-china-and-bad-for-art/ | Why Cai Guo-Qiang Is Good For China And Bad For Art | Why Cai Guo-Qiang Is Good For China And Bad For Art
'Freja: Explosion Event For Faurschou Foundation', realized on-site outside Faurschou Foundation, Copenhagen, Denmark, September 6, 2012. Photo by Wen-You Cai, courtesy Faurschou Foundation Cai Studio.
Cai Guo-Qiang started making art for extraterrestrials in 1989. Five months after the Tiananmen Square massacre, he built a shanty similar to those made by Tiananmen protestors, loaded it with gunpowder and lit a fuse. The nine-second-long explosion was his artwork: a pyrotechnic spectacle visible from space that for once wasn't a byproduct of violence. If any aliens saw it, they didn't say, but it made little difference. His Project for Extraterrestrials No. 1 had plenty of resonance for terrestrial audiences confronted with the futility of improving the intergalactic image of the bloody human race.
Cai's career – currently surveyed at the Faurschou Foundation in Copenhagen – was built on that gesture together with several dozen more ambitious projects for extraterrestrials. To orchestrate his 1993 Project to Extend the Great Wall of China by 10,000 Meters, for example, he and some assistants suspended approximately six miles of fuse in the Gobi desert, beginning where the Great Wall ended. For approximately fifteen minutes, a 10,000-meter barricade of fire was sustained with 1300 pounds of gunpowder. By 2008, Cai's command of pyrotechnics had become so sophisticated that he was commissioned by the Chinese government to create fireworks displays for the opening and closing ceremonies of the Beijing Olympics. No extraterrestrials were invited, which was probably for the best, since they might have wondered what the artist who'd introduced them to Tiananmen was doing engineering the most ostentatious global propaganda display in the history of the People's Republic.
Like many Chinese artists, Cai is pragmatic. Even Ai Weiwei, arguably the world's most prominent Chinese dissident, has had his share of patronage from the Chinese state. (He was the artistic consultant on the Bird's Nest Stadium in which Cai's Olympic firework displays were presented.) Yet unlike Ai's pragmatism – which is an honorable if perilous political tactic – Cai's pragmatism has been purely professional. It's the kind of pragmatism that the Chinese government can fully abide, and maybe even relish: When Cai shows work in Western venues that obliquely criticizes the PRC, he only burnishes China's global image as a tolerant country unjustly vilified by critics like Ai.
'A Clan of Boats'. Faurschou Foundation, 2012. Photo by Anders Sune Berg, © Faurschou Foundation.
Perhaps it's for the best that Cai's work has become increasingly innocuous as his fame has increased. As a prelude to his Faurschou Foundation exhibit, for instance, he fired thousands of small rockets from a traditional Danish boat. If desperate, you could extract meaning from this action – as the curators have done by calling it "a conversation between the seen and unseen worlds" – or you can simply be seduced by the sculptural beauty of the scorched wooden hull now suspended from the ceiling of an immaculate gallery.
Either way, Cai's pragmatism will have paid off for him. His career continues to advance. (This year he also had big shows at the Museum of Contemporary Art in Los Angeles and the Zhejiang Art Museum in Hangzhou.) He must be proud. He's getting exactly what he wants. But what does it say about the art circuit that uncritically embraces his work? Rest assured, he's lost whatever credibility he once had with aliens.
Follow Jonathon Keats on Twitter… and read an excerpt from his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Forbes.com
|
b10bede179908692b51b9f998ac86dca | https://www.forbes.com/sites/jonathonkeats/2013/02/07/porn-was-the-prehistoric-ancestor-of-art-lets-revive-the-past/ | Porn Was the Prehistoric Ancestor Of Art. Let's Revive The Past. | Porn Was the Prehistoric Ancestor Of Art. Let's Revive The Past.
Modelled figure of a mature woman from Dolni Vestonice, the oldest ceramic figure in the world. On loan to the British Museum from Moravian Museum, Anthropos Institute.
In some countries it would be grounds for arrest. Some 35,000 years ago, an Ice Age sculptor carved a mammoth tusk into a naked female figure with impossibly large breasts. Thighs and buttocks were also exaggerated, and her legs were salaciously spread. Archeologists dubbed her the Venus of Hohle Fels after the German cave where she was found. The scientific journal Nature was less euphemistic, calling the diminutive object a "prehistoric pin-up".
Though the Venus of Hohle Fels is the oldest figurative sculpture now known, her explicit sexuality is by no means unique. Earlier reliefs carved in the Abri Castanet rock shelter 37,000 years ago are believed by their discoverers to be abstract vulvas. And a veritable harem of younger "Venus" statues dating from 30,000 to 20,000 years ago – many of which are included in an important new exhibit of ice-age art at the British Museum – reveals that libido played a decisive role in artistic practice through much of human prehistory.
The eroticism of these figurines has attracted attention ever since archeologists started digging them out of the ground. In the '40s, for instance, the paleontologist Karel Absolon noted the "diluvial plastic pornography" of an ivory figure he excavated from Dolní Vestonice in what is now the Czech Republic. The statuette is a simple rod incongruously sprouting pendulous breasts. (As Absolon wryly noted, "the artist neglected all that did not interest him".) Another statue, the Venus of Lespugue, caught the eye of Pablo Picasso. He owned two copies, and, according to his friend Brassaï, worshipped this sculpture as "the very first goddess of fecundity."
As might be expected, the focus on sexuality has spawned a backlash. In the catalogue for the British Museum show, the curator Jill Cook calls attention to "the physical diversity of the figures", who run the gamut from lithe to obese, and represent all stages of life from maidenhood through pregnancy and menopause. More stridently, University of Victoria archeologist April Nowell has objected to the presentation of ice age Venuses as prehistoric pin-ups because such perspectives "legitimise and naturalise contemporary western values and behaviours by tracing them back to the 'mist of prehistory'".
The question is whether these prehistoric statues are erotic only by contemporary western standards, or whether their eroticism is timelessly universal. Given the sheer diversity of physiques, the latter seems more plausible. Quite frankly, most of these Venuses have body types well outside the range you'd see in Playboy, yet their sensuality is difficult to miss. In recent times, it's become fashionable to say that our worldview is culturally determined. These Venus statues suggest that at least some of our perceptions are purely biological.
And in that respect, the Venus of Hohle Fels and her daughters may help us to understand the origin of art without recourse to archeological time travel. Their sexuality is primary. They're stone and ivory pornography.
Which makes sense, if you consider the effort involved, and the expense. To make anything in a period as hostile as the ice age, you'd need a strong justification, as there obviously was in the manufacture of stone implements. Like the preparation of food, fertility was essential for survival. Art was a tool.
Torso of a young woman in her prime, from Petrekovice. Red ochre. Archaeological Institute of the Czech Academy of Sciences, Brno.
Today art remains a tool, though its usefulness has been blunted because its function isn't the stuff of cultured conversation. We should look to prehistoric art for inspiration. With the world population verging on seven billion, fecundity is the last thing we need. Yet in terms of immediacy, art ought to be more like pornography.
Follow Jonathon Keats on Twitter… and purchase a copy of his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
|
7997caff9c27d8639e88e2000a5a905f | https://www.forbes.com/sites/jonathonkeats/2013/03/21/that-mine-detonator-at-moma-just-think-sculpture/ | That Mine Detonator At MoMA? Just Think Sculpture. | That Mine Detonator At MoMA? Just Think Sculpture.
The Afghan countryside is riddled with ten million landmines. Since their locations are unknown, there's no intelligent way to make the landscape safe. So the designer Massoud Hassani has invented a machine that meanders across the terrain by chance, propelled by the winds, detonating bombs as it tumbles over them, mapping their whereabouts by GPS. Built with forty dollars worth of recycled bamboo and plastic, his "Mine Kafon" is beautiful in concept. And it's no coincidence that the prototype – currently exhibited at the Museum of Modern Art – is a beautiful object.
That's because the design specifications call for parsimonious engineering. Hassani's machine literally runs on simplicity. The form has a quality of inevitability, a symmetry that's aesthetically pleasing for much the same reason we find a starfish or a mathematical proof to be beautiful.
The Mine Kafon would not look out of place in MoMA's painting and sculpture galleries, perhaps sharing space with the minimalist constructions of Donald Judd and Carl Andre. (Though not made to be functional, their works were designed to maximize visual impact using minimal materials.) Many painters and sculptors would object, but every artist must also be a designer to the extent that a painting or sculpture affects visual experience.
Minimalism is just one of many strategies, neither better nor worse than most others, and Hassani's invention is relevant to all of them. Art is good to the extent that it's parsimonious. No matter the complexity of a work, or its style, each element must be essential, and everything that the work requires – all that is to be expressed by it – must be intrinsic to it. Anything less will fizzle.
Image Caption: Massoud Hassani. Mine Kafon, 2011. Bamboo and biodegradable plastics. 221 x 221 x 221 cm. Gift of the Contemporary Arts Council of the Museum of Modern Art. Photo by Rene van der Hulst.
Follow Jonathon Keats on Twitter… and purchase a copy of his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
|
c19da157465dd9e23aa5e95eb72c8329 | https://www.forbes.com/sites/jonathonkeats/2013/07/25/what-do-the-california-prison-strike-and-abu-ghraib-have-to-do-with-you-find-out-at-the-california-photography-museum/ | Stark Images Of Abu Ghraib Part Of New California Photography Museum Exhibition On Imprisonment | Stark Images Of Abu Ghraib Part Of New California Photography Museum Exhibition On Imprisonment
Photo Caption: Richard Ross, Segregation Cells, Camp Remembrance, New Abu Ghraib, Abu Ghraib, Iraq, 2005. Courtesy of the artist.
This is no Montessori school. Seven years ago, the American photographer Richard Ross visited the detention centers of Abu Ghraib and Guantanamo, as well as domestic jails ranging from Pelican Bay State Prison to Angola State Penitentiary. His images were shocking, and not only because they showed realities that few U.S. citizens ever witness. They were remarkable also for the fact that Ross was able to document them. "I don't get why they allowed me to take those pictures," Ross said in a 2007 interview, referring to his images of Abu Ghraib segregation cells barely larger than the men they held. The fact that the military let him shoot shows the degree to which soldiers considered segregation cells unexceptional.
Seven of Ross's photographs are included in a timely exhibition about imprisonment at the California Museum of Photography, which happens to coincide with one of the biggest prison hunger strikes in recent memory. Other pictures at the museum include Ross's documentation of a starkly-lit interview room at the Secret Service Headquarters in Los Angeles and a well-worn booking bench at the Los Angeles Police Department. These images need to be seen. To understand what the hunger strike is about, and to make informed decisions on future ballots, those who have never been imprisoned must confront the conditions of incarceration as fully as possible.
Yet Ross didn't only include pictures of law enforcement and military detention in this series. He titled it Architecture of Authority, and also showed border crossings and confessionals, bank lobbies and, yes, a Montessori school. (The photo shows the "Montessori circle" in the middle of the schoolroom where preschoolers come together at the end of the day: a sort of geometrically-mediated act of class consensus.)
In this broader view – sadly excluded from the Museum of Photography show – we can start to see how Pelican Bay and Abu Ghraib become normalized as accepted expedients for keeping our world safe. We can also parse the architectural vocabulary of control, from a group circle on the floor to a solitary confinement cell.
Follow Jonathon Keats on Twitter… and purchase a copy of his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
|
3792581beb5bc7659b99cbd64349b83b | https://www.forbes.com/sites/jonathonkeats/2013/08/26/photographer-julia-margaret-cameron-victorian-era-godmother-of-instagram-comes-to-the-metropolitan/ | Photographer Julia Margaret Cameron -- Victorian-Era Godmother Of Instagram -- Comes To The Metropolitan | Photographer Julia Margaret Cameron -- Victorian-Era Godmother Of Instagram -- Comes To The Metropolitan
"In these pictures all that is good about photography has been neglected, and the shortcomings of the art are prominently exhibited," reported The Photographic Journal in an 1864 article about Julia Margaret Cameron's "out-of-focus portraits" of the leading poets and scientists of her age. "We are sorry to have to speak thus severely on the works of a lady, but we feel compelled to do so in the interest of the art." The Photographic Journal had a point. Rather than trying to suppress the chemical and physical limitations of 19th century photography, Cameron flaunted them.
The result was a body of work that retains uncanny vibrancy even as many of the celebrated subjects have faded into obscurity. For instance, the great Victorian scientist (and co-inventor of photography) John Herschel may be more recognizable today for Cameron's portraits than for his once-eminent name. More than just depicting their subject, her pictures retain the immediacy of the moment when they were made, revealing Herschel's living relationship with both camera and photographer. Gazing at his portrait – now on view in a fine new Cameron exhibit at the Met – you feel as if you were in his presence.
Since Cameron, photography has come a long way technically. In fact it's come so far that apps including Instagram and Hipstamatic have been designed to introduce flaws – from lens flare to oversaturated colors – into otherwise immaculate digital pictures. These atmospheric effects would not have been condoned by The Photographic Journal. On the contrary, they are the 21st century vindication of Cameron's vision.
At least superficially. At a deeper level, they are the opposite of Cameron's ideal. Cameron exploited qualities intrinsic to the medium as she found it, deriving artistic expression from the entirety of the technology. She filtered almost nothing. Instagram filters nearly everything – and does so uniformly for everybody.
Yet digital photographers – especially those who use technically-limited smartphones – can still learn from Cameron by finding where their equipment fails their expectations. Those are realms of discovery. Photography begins when the camera reveals what would never otherwise be seen.
Follow Jonathon Keats on Twitter… and purchase a copy of his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
[Image Caption: Julia Margaret Cameron (English, 1815–1879). Sir John Herschel, April 1867. Albumen silver print from glass negative. The Rubel Collection, Promised Gift of William Rubel. The Metropolitan Museum of Art (L.1997.84.6)]
|
b7f144c074c80a7a64950f7a95c6b8ff | https://www.forbes.com/sites/jonathonkeats/2013/11/05/the-naughtiest-picture-of-1913-nude-descending-a-staircase-returns-to-new-york-city/ | The Naughtiest Picture Of 1913 'Nude Descending A Staircase' Returns To New York City | The Naughtiest Picture Of 1913 'Nude Descending A Staircase' Returns To New York City
Image Caption: Marcel Duchamp (French, 1887-1968), Nude Descending a Staircase (No. 2), 1912. Oil on canvas, 57 7/8 x 35 1/8 in. Philadelphia Museum of Art, The Louise and Walter Arensberg Collection, 1950, 1950-134-59. © 2013 Artists Rights Society (ARS), New York / ADAGP, Paris / Succession Marcel Duchamp
"We will show New York something they never dreamed of," the artist Walt Kuhn wrote his wife in October of 1912. He was writing from France, where he'd just "landed a batch of Cezanne and Gauguin", paintings he intended to include in an exhibition he was co-organizing at the 69th Regiment Armory on Lexington and 25th. The show opened four months later, and, just as Kuhn expected, New Yorkers were scandalized.
What he and his fellow organizers didn't foresee was the focal point of outrage at the Armory Show, which was neither Cezanne nor Gauguin, nor was it Pablo Picasso. Of the fourteen hundred artworks in the 30,000 square foot drill hall, the one that attracted the majority of attention was a three-by-five foot painting by an obscure Frenchman named Marcel Duchamp. The painting was titled Nude Descending A Staircase (No. 2). As a new book about the show lavishly documents, the public crowded around the picture, and the media competed to feed the frenzy. The New York Times compared it to "an explosion in a shingle factory", and American Art News called it "the conundrum of the season", offering a hefty ten dollar reward "to find the lady".
The painting – which returns to Manhattan as part of The Armory Show At 100 at the New-York Historical Society – became a synecdoche for both the exhibition and the artist. Even Duchamp's later Readymades – such as the urinal he submitted to the Society of Independent Artists under the name Fountain in 1917 – were overshadowed for many decades by his Nude. Toward the end of his life, he summed up the situation when he told an interviewer that "the painting was known, but I wasn't."
What made Nude Descending A Staircase so provocative? Surely anybody baffled by Duchamp's Nude would have been equally bewildered by Cubist paintings like Francis Picabia's Dances at the Spring, and there were plenty of nudes more physically explicit than Duchamp's in the exhibit. But Duchamp ingeniously combined an enticingly explicit title with a frustratingly baffling image.
His painting applied the cognitive basis of erotica to high art: arousing interest with what cannot be seen. The method is at the root of his Readymades, and his Readymades are at the core of 20th and 21st century art. The conundrum of the season is still rewarding – and still scandalizing – one hundred years after the Armory Show closed.
Follow Jonathon Keats on Twitter… and purchase a copy of his new book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
|
5fad6c795ff96aac2406d9f8b034de85 | https://www.forbes.com/sites/jonathonkeats/2014/03/27/at-this-massachusetts-art-museum-some-forgeries-are-faker-than-others-but-better-not-trust-your-eyes/ | At This Massachusetts Art Museum, Some Forgeries Are Faker Than Others (But Better Not Trust Your Eyes) | At This Massachusetts Art Museum, Some Forgeries Are Faker Than Others (But Better Not Trust Your Eyes)
In 1967, the aristocratic Hungarian art collector Elmyr de Hory told his neighbor a secret. All the modern masters on the walls of his Ibiza mansion were fakes; he was the real painter. Others that he'd made, including artwork attributed to Henri Matisse and Pablo Picasso, had been sold to museums around the world. He'd lost his family fortune with the Second World War. Forgery was how de Hory made his living.
For his neighbor, a struggling novelist named Clifford Irving, the tale was irresistible. With de Hory's collaboration, he wrote a scandalous book. Fake! made de Hory a celebrity, the subject of countless news articles and Orson Welles' final film. It also launched a second career for both men: Irving became a forger, faking Howard Hughes' autobiography, and de Hory became an artist under his own name, selling work in the style of Matisse and Picasso, but signed Elmyr.
Elmyr de Hory (Hungary, 1906-1976), Odalisque, 1974, oil on canvas, in the style of Henri Matisse (French, 1869-1954). Collection of Mark Forgy. Photo by Robert Fogt.
The story repeated itself three decades later (albeit without a Clifford Irving character) when a convicted British art forger named John Myatt began making fakes of his fakes (in the styles of Claude Monet and Vincent van Gogh) under his own signature. Similar strategies have been followed by numerous other confessed forgers, including the Englishman Eric Hebborn. Which may provoke you to ask: What are authentic works in the style of fakes worth as art?
An important exhibition of art forgeries curated by Colette Loll, currently at the D'Amour Museum of Fine Arts, naturally provokes this question. Intent to Deceive includes the forgeries of some of these great fakers together with works painted as their own. Comparisons are revealing.
As a matter of visual impact, both categories are approximately equivalent. De Hory's Picassos and Myatt's Monets remain aesthetically consistent before and after these men confessed. Creative permutations on the artists' signature style, the paintings serve the beneficial role – as I argued in my book Forged – of posthumously increasing Picasso and Monet's output. (Perhaps Picasso didn't need it.)
On the other hand, there is a profound conceptual difference between paintings made before and after the forgers confessed. The fraudulent fakes of these forgers were remarkably subversive, exploiting the unjustified assumptions of the society in which they were operating (such as the infallibility of institutional authority). When the forgeries were discovered to be fake, those unjustified assumptions were called into question. The most important function of art in our time – as a psychological and social provocation – was spectacularly served by these counterfeits. Lacking the context of fraudulence, the later works of Myatt, Hebborn and de Hory are conceptually flat.
But never trust a master faker. Though John Myatt appears to have gone completely straight, Eric Hebborn managed to confound the art world even more thoroughly after he started selling Hebborns in the '80s: the signed works advertised his consummate skill even as rumors arose that he was still faking. And de Hory? Don't believe the bit about his aristocratic heritage. His true life story is still getting sorted out, and there are rumors that a factory in China has been faking signed Elmyrs.
Follow me on Twitter… and find my latest book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon.
|
11af298c92cc781dc34fa43198291a4d | https://www.forbes.com/sites/jonathonkeats/2014/12/29/this-stanford-exhibit-shows-how-robert-rauschenberg-boosted-nasa-space-exploration-with-art/ | This Stanford Exhibit Shows How Robert Rauschenberg Boosted NASA Space Exploration... With Art | This Stanford Exhibit Shows How Robert Rauschenberg Boosted NASA Space Exploration... With Art
On November 19, 1969, Robert Rauschenberg landed his first art exhibit on the moon. Showing with him was a select group of artistic adventurers including Andy Warhol and Claes Oldenburg. Their exhibition, which still remains on the lunar surface, was minuscule: All the work was etched into a ceramic chip the size of a thumbnail. The show was also totally unauthorized: When NASA declined to launch their Moon Museum, a Grumman engineer secretly stuck the wafer to the landing gear of the Apollo XII lunar module. Even the astronauts didn't know about it.
The Apollo XII landing coincided with another Rauschenberg exhibition: the presentation of his Stoned Moon suite of lithographs at the Castelli Gallery in New York. Currently the focus of a show at Stanford University's Cantor Arts Center, Stoned Moon was the opposite of the Moon Museum. Some of the lithos were vast – more than seven feet tall – the largest ever printed at the time. Moreover they were the direct result of a NASA collaboration with Rauschenberg, who'd been expressly invited to Cape Kennedy in anticipation of the Apollo XI launch.
Robert Rauschenberg (1925–2008), Sky Garden 1969, from the Stoned Moon Series. Lithograph. Lent by Stephen Dull. © Robert Rauschenberg Foundation / licensed by VAGA, New York, NY.
Rauschenberg's collaboration was part of a broader effort by NASA to engage the public in space through the vehicle of art. (Another collaborator was Norman Rockwell, who was loaned a space suit so that he could make a photorealistic painting of astronauts.) With total freedom to see the space program from within, Rauschenberg thoroughly explored the NASA facilities. He visited the Vehicle Assembly Building, where the Apollo VII rocket was constructed. "Inside larger than all outsides," he noted. And he was seated in the Cape Kennedy grandstand for the countdown. "The incredibly bright lights, the moon coming up, seeing the rocket turn into pure ice, its stripes and USA markings disappearing – and all you could hear were frogs and alligators," he observed.
The Stoned Moon lithographs are variations on that theme. Layering countless found and borrowed images, Rauschenberg juxtaposed the technical glory of space flight with the mundane Florida setting of Cape Kennedy. The results are visually dazzling and technically awe-inspiring. Yet seen from a distance of several decades, the images seem disconcertingly bombastic and conceptually shallow, especially when compared with the manic eclecticism of his earlier Combines. Driven by his enthusiasm for space exploration, Rauschenberg seems to have put Apollo-era propaganda ahead of artistic adventurousness.
That's another way in which Stoned Moon is the opposite of Rauschenberg's addition to the Moon Museum. Perhaps because the Moon Museum was a sort of lunar mission in its own right, Rauschenberg was driven to make pioneering artwork for it. Accordingly, his contribution was the simplest of gestures, art at its most elementary. He drew a solitary straight line: a new beginning for art and mankind.
Follow me on Twitter, find my latest book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon, and read a Next City article about my ongoing Century Camera art project.
|
fc77991fee20cedc50dac5f79d931e92 | https://www.forbes.com/sites/jonathonkeats/2015/10/21/land-art-to-overcome-an-earthquake-after-35-years-see-alberto-burri-in-sicily-and-at-the-gugg/ | Land Art To Overcome An Earthquake? Alberto Burri Triumphs At The Guggenheim -- And In Sicily | Land Art To Overcome An Earthquake? Alberto Burri Triumphs At The Guggenheim -- And In Sicily
The dry lake beds of Death Valley resemble the craquelure of old paintings. In both cases, a network of cracks crinkles the crust as underlayers progressively dry out. For most painters, lake beds are a sort of memento mori, a natural reminder of their own artistic mortality. But when the Italian artist Alberto Burri visited the dry lakes of Death Valley in the 1960s, his mind didn't turn to conservation. Far from it. He saw a process he wished to emulate.
Combustione legno (Wood Combustion), 1957. Wood veneer, paper, combustion, acrylic, and Vinavil on canvas, 149.5 x 99 cm. Private collection, courtesy Galleria dello Scudo, Verona. © Fondazione Palazzo Albizzini Collezione Burri, Città di Castello/2015 Artists Rights Society (ARS), New York/SIAE, Rome
Burri was already interested in ways of making paintings without painting them, especially when the compositions emerged from the inherent qualities of materials such as burlap, or resulted from their breakage and burning. An important new retrospective at the Guggenheim – the first major US exhibition of Burri's work in four decades – shows the development of his "unpainted paintings" and his fascination with destructive processes as modes of creation.
The Cretti – Burri's craquelure paintings of the 1970s – were an outcome of this trajectory. They embodied what he referred to as "the energy of the surface". By mixing pigment with resin and polyvinyl acetate, he set up the conditions for the paintings to self-destruct like an old Rembrandt or a desiccated desert lake.
Grande cretto nero (Large Black Cretto), 1977. Acrylic and PVA on Celotex, 149.5 x 249.5 cm. Centre Pompidou, Paris, Musée national d’art moderne/Centre de création industrielle, Gift of the artist, 1978. © Fondazione Palazzo Albizzini Collezione Burri, Città di Castello/2015 Artists Rights Society (ARS), New York/SIAE, Rome. Photo: © CNAC/MNAM/Dist. RMN-Grand Palais/Art Resource, New York.
At around the same time that Burri was painting his Cretti, the Sicilian city of Gibellina was struggling to recover from a catastrophic earthquake. The city was rebuilt seven miles from the ruins, and Burri was invited by the mayor "to translate for the present generation and for future generations the tragedy, the struggle, the hope and the faith" of the people.
The old city was rubble, barely recognizable save for a network of crevices where once there'd been streets. Burri transformed the ruins into a cretto of concrete.
It was an epic feat, decades of halting work. Gradually, as funding permitted, the ruins were encased in white cement so that only the street plan was preserved. Old Gibellina became a ghost town more ghostly than any abandoned village in Death Valley.
And yet the Grande Cretto – which was finally completed earlier this month, twenty years after Burri's death – stands against the tendency of abandoned cities to be forgotten. What was broken by the quake is fixed in time. Burri's final cretto is a memento mori because it preserves the energy of destruction.
Follow me on Twitter, find my latest book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon, and read this Fast Company article about my new diplomatic geoengineering sci/art project, launching this Thursday at Modernism Gallery, San Francisco.
|
e6b207043f795fd9a9531a878db1cf09 | https://www.forbes.com/sites/jonathonkeats/2015/11/24/video-chat-in-1964-the-new-york-historical-society-reveals-silicon-valleys-east-coast-predecessor/ | Can Silicon Alley Ever Beat Silicon Valley? A New Exhibit Shows How New York Once Ruled Technology | Can Silicon Alley Ever Beat Silicon Valley? A New Exhibit Shows How New York Once Ruled Technology
At the 1964 World's Fair, Bell Labs sought to connect people with the future. Beginning with the First Lady and the mayor of New York City, the company invited people to try out the Picturephone, a telephone with a video feed. New Yorkers were intrigued – it was the first time most had appeared on a television screen – but what really caught their attention was a typewriter bar hosted by IBM.
Typewriters were nothing new in the '60s. Versions had been available for nearly a century. However the IBM Selectric – first introduced in 1961 – mesmerized everyone who tried it, and generated orders by the tens of thousands. (In comparison, Bell's Picturephone service had a mere hundred subscribers by 1973, a number that subsequently dwindled to nine.)
Picturephone - Opening Ceremonies, June 24, 1964. Courtesy of Alcatel-Lucent / Bell Labs.
Both of these '60s technologies are currently on view at the New-York Historical Society as part of an exhibition provocatively called Silicon City: Computer History Made in New York. The compact exhibit showcases hundreds of innovations that originated in and around Manhattan, including the IBM tabulation machine that first automated the US Census in the late 1800s and Tennis for Two, arguably the first video game, invented by a Brookhaven National Laboratory researcher in 1958. Together the objects on view evoke an ecosystem of innovation that antedates Silicon Alley – and implicitly bring up the question of why Silicon Alley can't match its west coast rival as the standard-bearer of technology.
One answer is suggested by the spectacular success of the sleek Selectric. Built around a spinning ball of letters instead of a deck of lever-activated keys, the Selectric was genuinely innovative, a functional breakthrough that improved upon the efficiency of previous typing machines. (According to IBM President Thomas Watson Jr., it was "the most totally distinct invention we’ve ever made as a company.") Even more notable, however, was the way in which it was packaged. The technology didn't dominate (as was the case with Bell's awkward Picturephone). IBM engineer Bud Beattie's breakthrough technology coevolved with Eliot Noyes's stylish industrial design. The Picturephone was magic. The Selectric was magic in context: the future situated in the present.
Eliot Noyes, IBM Selectric Typewriter, 1961. Courtesy of IBM Corporation Archives.
IBM went on to use the Selectric's keyboard as an interface for early computers. But it was in Silicon Valley, with the apotheosis of companies ranging from Apple to Ideo, that the full significance of IBM's achievement was finally understood. Today New York remains a hub of innovation. IBM's own Watson Research Center has recently made major advances in scaling down future computer chips – crucial breakthroughs that literally cannot be seen. Even if the ongoing success of Silicon Valley depends on New York, New York can't compete with Silicon Valley in the public imagination, which is stoked by the creative synthesis of form and function.
Follow me on Twitter, find my latest book, Forged: Why Fakes Are The Great Art Of Our Age, on Amazon, and read this Grist interview about my new diplomatic geoengineering sci/art project, currently at Modernism Gallery, San Francisco.
|
f9f7a764d5c89fc8041c6e61baf6247d | https://www.forbes.com/sites/jonathonkeats/2016/02/04/citizenfour-director-laura-poitrass-whitney-exhibit-exposes-nsa-surveillance-from-a-new-perspective/ | Citizenfour Director Laura Poitras' Whitney Exhibit Exposes NSA Surveillance From A New Perspective | Citizenfour Director Laura Poitras' Whitney Exhibit Exposes NSA Surveillance From A New Perspective
Bed Down Location is military jargon for the sleeping quarters of people targeted for assassination. It's also the title of a new work by Laura Poitras, the filmmaker who worked with Edward Snowden to expose the secret activities of the NSA. Poitras is best known for her Academy Award-winning documentary about the Snowden affair, Citizenfour. With Bed Down Location, she's adapted documentary filmmaking to installation art, a strikingly powerful transformation that is certain to provoke strong emotion when her first museum show opens at the Whitney tomorrow.
Much of the work in Poitras's exhibition takes the form of evidence. In a darkened corridor, she's created peephole views of revelatory documents such as an NSA employee's hand-drawn diagram of an internet surveillance method called shaping, and screenshots of intercepted data collected through GCHQ's classified ANARCHIST program. As Poitras explains in an exhibition catalogue interview, presenting these materials in a physical space rather than a movie has the advantage of making the viewer more actively engaged. "I want to draw viewers into the narrative of the work so that they leave it in a different mind-set from when they entered," she says. "I'm also interested in having bodies in spaces and asking them to make choices, which is not something you can do in a movie theater."
Laura Poitras, ANARCHIST: Israeli Drone Feed (Intercepted February 24, 2009), 2016. Pigmented inkjet print on aluminum, 45 x 64 3/4 in. Courtesy of the artist.
While her point is valid, her peep show fails to follow through. A darkened room with peepholes doesn't encourage people to make choices; on the contrary, there's nothing purposeful about their movement. More problematic, the peephole concept is needlessly didactic, belaboring the obvious to what will most likely be a self-selecting audience.
Bed Down Location, in contrast, shows the potential of museum exhibitions to take Poitras's documentary evidence in compelling new directions. Video footage of the night sky in countries such as Pakistan and Yemen – nations where the US military conducts nighttime drone strikes – is projected onto the gallery ceiling. Standing or lying below, museum visitors vicariously experience the dark unknown, seeing nothing yet feeling the oppressive presence of invisible drones. The terror inflicted on an entire people, who must face lethal skies every night, is made palpable with footage that would otherwise seem unexceptional.
What to make of it? Wisely, Poitras doesn't say. Instead she gives emotional resonance to an aspect of foreign policy that can otherwise seem as abstract and remote as drone warfare itself. Bed Down Location brings the unknown home.
Follow me on Twitter, read about my latest art project at the Los Angeles County Museum of Art, and pre-order my new book, You Belong to the Universe: Buckminster Fuller and the Future, to be published in April by Oxford University Press.
|
9f09a35f44d49ab471c9576d4c4ada88 | https://www.forbes.com/sites/jonathonkeats/2019/01/08/anna-atkins-nypl/ | Meet The Woman Who Invented Scientific Photography At This Stunning New York Public Library Exhibit | Meet The Woman Who Invented Scientific Photography At This Stunning New York Public Library Exhibit
Nearly eight decades before Man Ray led photography in a radical new direction with his cameraless Rayographs, a British naturalist named Anna Atkins created hundreds of images enlisting similar techniques. Made by arranging plants on photosensitive cyanotype paper, her vivid compositions are as aesthetically beguiling as Rayography at its best, and their blue-and-white palette is more visually intense. But it would be shortsighted to view Atkins as Man Ray avant la lettre. Her magnum opus, British Algae, was the first book ever to be illustrated with photography, and the images within it were the first photographs ever used for scientific illustration.
Anna Atkins (1799–1871) and Anne Dixon (1799–1864), Papaver rhoeas, from a presentation album to Henry Dixon, 1861, cyanotype. Private collection, courtesy of Hans P. Kraus Jr., New York
Atkins' scientific motivation provides essential context for appreciating her remarkable work, a large portion of which is currently on view in an important New York Public Library exhibit. Prior to the invention of photography, botanical texts were illustrated with engravings, or with pressed plants mounted on blank pages, methods that were so laborious and expensive that many important treatises depended exclusively on written descriptions. Personally acquainted with William Henry Fox Talbot and John Herschel – two of photography's inventors – Atkins recognized an opportunity to share her collection of pressed algae specimens through a rudimentary form of photo-reproduction in the early 1840s.
Her technique was highly innovative, and utterly impractical. Each page of every copy of British Algae had to be produced entirely by hand. (After the paper was coated with a mix of ammonium citrate and potassium ferricyanide, Atkins set down her algae sample, exposed the page to sunlight for ten or fifteen minutes, brought it back indoors and developed the sunprint in water.) Over an entire decade, only about a dozen copies of British Algae were completed, testament to the arduousness of the process.
There was a deeper problem for scientists who wanted to use her book. As products of transmitted light, photograms precisely record the contour of objects placed on them, and also their translucency, but the effect bears little resemblance to the everyday human perception of light reflected off an object's surface. Also the precision is problematic in its own right, because every specimen has its peculiarities, which are distracting when the botanist wants to see essential features of the species. That's why skillful drawings are still ideal for texts on taxonomy today, even though photography is now perfectly capable of showing organisms as the eye perceives them.
The irony of Atkins' situation is that her technique was too abstract and too realistic: Viewers were thwarted by the abstract quality of transmitted light and the realism with which it depicted her algae. In fact, the abstraction and realism were exactly backward: For effective scientific communication, the portrayal needed to be conceptual and the projection needed to be literal.
Of course none of this belittles Atkins' accomplishment. British Algae was truly pioneering, a photographic innovation in the league of the medical x-ray and a bibliographic breakthrough at the level of the Gutenberg Bible. (Even if botanists didn't ultimately rely on cyanotypes, the sciences were utterly transformed by photography and photo-reproduction in books and journals.)
Moreover Atkins seems to have recognized the artistic potential of her scientifically flawed system. Following the publication of British Algae, she spent years making photograms with ferns and feathers that fully explored the visual possibilities of depicting familiar objects in unfamiliar ways. After inventing scientific photography and the photographic book, Atkins plunged into the photographic paradox of specificity and vagueness that would later consume Man Ray.
|
dd7039723cd8ad31522efb226bc63015 | https://www.forbes.com/sites/jonathonkeats/2019/10/28/design-of-dissent/ | The Untold Story Of How Wall Street Was Occupied: Discover The Art Of Dissent At The Museum Of Design | The Untold Story Of How Wall Street Was Occupied: Discover The Art Of Dissent At The Museum Of Design
To occupy Wall Street, a graphic designer named Will Brown crowned a statue of a bull with a ballerina. This simple visual juxtaposition, first published in Adbusters in the summer of 2011, became the icon of a political movement that seized a lower Manhattan park for months and sparked more than a thousand protests against financial inequality worldwide.
Occupy Wall Street. Poster by Will Brown for Adbusters, 2011. Will Brown
Brown’s famous poster is now one of the highlights of a Museum of Design exhibition exploring the political power of graphic arts from the Vietnam era to the present. Curated by the master designers Milton Glaser and Mirko Ilic, The Design of Dissent provides an opportunity to observe how and why some images galvanize people, and may even motivate fundamental societal change.
One of the most familiar posters in the show was designed to protest the torture of inmates at Abu Ghraib during the Iraq War. Appropriating a leaked soldier’s photo of a hooded prisoner standing on a box with wires connected to his fingers, the design collective Copper Greene transformed it into a satirical ad imitating Apple’s 2003 iPod marketing campaign. The iRaq poster was soon wheat-pasted all over New York and other major cities, often in the same places that Apple posted the original ads.
The posters were subversive on several levels. In the first place, their resemblance to the Apple campaign allowed Copper Greene to situate them in places where more conventional protest posters might be swiftly torn down. This visual camouflage may also have attracted eyes that would turn away from more conventional anti-war imagery (or disturbing photographic documentation such as the image from Abu Ghraib). Finally there’s the level of meaning, the graphic as a form of culture-jamming: By making a product out of the Iraq war, Copper Greene drew a connection between militaristic violence abroad and complacent consumerism at home. To the extent that the Iraq War was fought over oil, and that Middle Eastern oil fueled the American economy, most everyone in the US was buying into the policies that led to torture and death.
Guerrilla Girls appear in Onassis Cultural Center (OCC) in Athens, Greece, March 9, 2017. Guerrilla Girls is an anonymous group of radical feminist, female artists who put on gorilla masks to tackle sexism, racism and corruption in politics, art and pop culture. The group formed in New York City in 1985 with the mission of bringing gender and racial inequality into focus within the greater arts community. (Photo by Giorgos Georgiou/NurPhoto via Getty Images) NurPhoto via Getty Images
Appropriation is a common tactic in posters of dissent, perhaps second only to in-your-face agitprop. Dissent often depends on reframing mainstream assumptions, revealing prejudices in plain sight. In appropriated imagery, the familiar is turned against itself. Caught off-guard, the viewer is made to reflect on latent biases.
This tactic can be seen in many posters at the Museum of Design, such as the Guerrilla Girls’ 1989 broadside against sexism in museum curation. Jean-Auguste-Dominique Ingres’ nude Grande Odalisque is given the face of a gorilla. “Do Women Have to Be Naked to Get into the Met. Museum?” asks an accompanying headline.
The Occupy poster also has an element of appropriation, since the bull is an icon of the stock market (and the statue is itself a fixture of Wall Street). However Will Brown’s graphic – which originated when he was asked to design an Occupy poster while simultaneously reviewing an unrelated photo shoot of a dancer – lacks the satirical edge wielded by the Guerrilla Girls and Copper Greene. Instead he enlisted qualities less frequently found in the genre. Brown explains it well in a recent email exchange with Forbes: “Most protest imagery is violent and aggressive, and this turns off a lot of people who want to participate,” he writes. In contrast, his poster turned on “a positive, elegant metaphor: an allusion to David versus Goliath or the mouse and the bull from Aesop's fables.”
What makes the Occupy poster exceptional is the way in which it reframes the terms of engagement. The power of a charging bull is no match for the grace of a ballerina. Change appears to be inevitable.
|
92198cba0297ec6572dec64699ecd25f | https://www.forbes.com/sites/jonathonkeats/2020/03/25/net-art/?ss=forbeslife | As Art Fairs And Galleries Take Refuge Online To Elude COVID-19, Internet Art Is Emerging To Temper The Lockdown | As Art Fairs And Galleries Take Refuge Online To Elude COVID-19, Internet Art Is Emerging To Temper The Lockdown
When two hundred and thirty of the world’s leading art dealers convened for Art Basel Hong Kong last week, they took one major precaution to prevent the spread of COVID-19. Instead of showing up in person, they set up their booths online. Clients were also physically absent, viewing major works by blue chip artists ranging from Jeff Koons to Lisa Yuskavage on computers and smartphones. For several days, artworks with price tags as high as several million dollars were available for purchase: the apotheosis of retail distancing, and a model for commerce that will likely be followed by major art fairs for the foreseeable future.
Individual galleries are also opening coronavirus-proof online viewing rooms, swiftly developing a parallel universe for art sales that rivals the quality of the virtual wings launched by the world’s leading museums. Even with the closure of physical exhibition spaces, Jeff Koons is not likely to file for bankruptcy anytime soon, nor will his gallerist, Larry Gagosian. For other artists, however, the situation is far more precarious. Many thousands will graduate this year without the BFA and MFA exhibitions that have customarily launched careers.
Why not give them the Art Basel treatment? That’s essentially the solution put forward by an Art Academy of Cincinnati adjunct professor named Benjamin Cook. Lacking the resources to build a dedicated website, Cook has improvised, adapting the online viewing room of choice for many struggling artists: Instagram.
Martian Mother (2019), by Elizabeth McGrady, as exhibited on the Social Distance Gallery Instagram page. Elizabeth McGrady
Launched on March 13th, Cook’s Social Distance Gallery hosts BFA and MFA exhibitions that were supposed to take place in the physical galleries of schools where the students will soon graduate. The online gallery has already posted dozens of exhibitions featuring the work of artists emerging from institutions ranging from Duke University to Converse College. Several are added each day.
The genius of Cook’s project lies in its scale. Like Art Basel, it leverages overlapping audiences. Social Distance Gallery already boasts more than 17,000 followers, most likely reflecting the social networks of the participating artists, as well as the draw, for people broadly interested in emerging art, of seeing many artists they would never otherwise encounter in person. (The gallery’s regional representation is especially impressive. Artists attending schools in Arkansas, Wisconsin and Kansas appear next to people living in artistic epicenters such as California and New York. Although place may matter to these artists creatively, the geographic isolation that ordinarily limits their viewership is annihilated online.)
As in the physical world, quality varies. Unlike in the real world, quality is difficult to assess on Instagram because the experience of physical artifacts is mediated; and unlike a Lisa Yuskavage painting, which bears some resemblance to canvases hanging in museums, most of these works offer no physical point of reference to anyone beyond the artist’s own family.
Cook tacitly acknowledges this challenge in his own artwork. A recent series called Some Walls Are Made Of Bricks is designed specifically for Instagram. High-resolution scans of his paintings can be purchased to “hang” (or post) on a personal Instagram page, decorating it in a manner equivalent to how physical artwork has traditionally decorated people’s homes. Each work is limited to an edition of ten, and priced at three dollars (a fraction of the price of a Koons or Yuskavage). Cook sees his project as a way to undermine binary distinctions between the physical and online worlds. “Rather than using catch-all ideas as scapegoats,” he writes on his website, “we should begin to look at how specific protocols, policies, ideas of data ownership, and threats that manifest online, can be approached with the same level of seriousness that we already offer problems of a physical nature.”
What Deals Can I Make In The Dark? (2019), from the series Some Walls Are Made Of Brick, by Benjamin Cook. Benjamin Cook
Although Cook certainly doesn’t solve the problem of making a living as an artist without access to a physical market, Some Walls Are Made Of Bricks does nominally show that conventional economic models can be adapted to unconventional circumstances provided that the unconventional gambit is internally coherent. There’s no reason why prices for Instagram art can’t be increased. (Matthew Barney’s limited-edition videos are worth a pretty penny.) The question is whether the legacy systems of the art market deserve to be preserved even if appropriately repackaged. Might the online-only mandate of life under COVID-19 instead be seen as a challenge to reconsider all parameters and even to reset the boundary conditions of artistic production?
There is a long and impressive history of art made specifically for the internet. In fact, for all of the admirable effort that major museums have put into going virtual, the most successful online exhibition right now is Rhizome’s retrospective of net art. Much of this work is interactive, using the tools of the period in which it was created, including browsers that must now be digitally emulated. This work satisfies one desideratum, namely that it is native to its online context. However another desirable quality is also suggested by the circumstances of social distancing, which are unlikely to end with this pandemic given the probability of future epidemics: the desideratum that the experience of online art also have an offline dimension, some sort of presence in our flesh-and-blood world. Ideally this will take place through interactivity, counteracting isolation by facilitating connection with one another in relation to the planet we all share.
Reconstruction of Eduardo Kac, Reabracadabra, 1985. Animated poem for Videotexto. Photo courtesy the artist. Eduardo Kac
A website called the Concept Bank, curated by the artist Frans van Lent, shows one way in which this can be achieved. The Concept Bank is essentially an online clearinghouse of instructions for performance artworks that can be carried out by one or more people, where the performers may be the only audience. In some cases, the works for multiple performers do not require physical proximity. Collective performance at a distance creates and reinforces liminal connection. And because the Concept Bank is online, it’s equally accessible to everyone.
Of course the Concept Bank isn’t making a living for any of the contributing artists or performers. (I’ve contributed work myself, and can vouch from personal experience.) That’s why systemic change must go deeper. The Metropolitan Museum of Art has just endorsed a proposal from the American Alliance of Museums for Congress to allot $4 billion to support nonprofit arts organizations as part of the anticipated $2 trillion COVID-19 stimulus package, promoted on social media with the hashtag #CongressSaveCulture. The money is desperately needed. The Met expects to lose $100 million during its closure, and AAM predicts that 30 percent of museums may not have the resources to reopen.
Congress should provide the cash. (Culture is not a luxury but a necessity, a humanizing force all the more crucial in times of crises.) The funding should however also serve as a precedent for permanent large-scale governmental support of artistic creation and dissemination as part of a larger political shift toward greater support of public services. (Culture is especially necessary in a democratic society as a basis for collective decision-making and protection of minority interests. If the government pays politicians and spends money on elections, it needs equally to pay for the operating system that brings about consensus. Artists are at least as indispensable as governors and senators, and arguably more so as society emerges from the trauma of a global pandemic.)
There is nothing wrong with Art Basel per se, nor with the high-concept commodities manufactured by Jeff Koons. And artists such as those on the Social Distance Gallery Instagram page should certainly have access to the market, as Benjamin Cook is generously providing them, in order to sell their paintings and photographs. Tangible art will always have a place in our world, as will physical galleries and museums.
But at this moment there is a convergence of opportunity and necessity that can be met by approaches to art that have heretofore been marginal. The virtue of net art is in the network, and the most valuable concept presented by the Concept Bank is the concept of a concept bank, an electronic commons for conceptual connection with the potential for physical manifestation. The radical reappraisal of government spending and public needs under COVID-19 can bankroll these art forms. The future health of our republic depends on it.
Read Part I, Part II, and Part IV in this special coronavirus series on art in the age of COVID-19.
|
f6aa8ab7ed319f7fe836ae1765d0fa29 | https://www.forbes.com/sites/jonathonkeats/2021/02/11/is-twitter-really-offering-verified-badges-for-san-francisco-homes-an-artists-satire-nearly-starts-a-civil-war/?sh=7b0085772ad1 | Is Twitter Really Offering Verified Badges For San Francisco Homes? An Artist’s Satire Nearly Starts A Local Civil War | Is Twitter Really Offering Verified Badges For San Francisco Homes? An Artist’s Satire Nearly Starts A Local Civil War
When a San Francisco neighborhood placed boulders on the sidewalk to obstruct homeless encampments, a young artist named Danielle Baskin turned to Craigslist in protest. “Hi! We’re getting rid of our beautiful collection of landscaping rocks,” she wrote in a faux Free Stuff listing. “We realized we don’t have enough space for them in our own home.”
Baskin’s satirical offering was flagged almost immediately, so she put a screenshot on Twitter, where her posting swiftly went viral. A couple nights later, six of the boulders were found to have been rolled into the street. Residents put them back. The cycle repeated until finally the people who placed the “anti-homeless boulders” asked the city to remove them permanently: free stuff for the SF Department of Public Works.
Although there may not have been a direct connection between Baskin’s listing and the guerrilla action leading to the neighborhood’s surrender, an interview with Baskin in The Washington Post suggests that her prank was a major inspiration for it. And a new hoax by Baskin shows that the virality was no fluke. Last week Baskin created a website for Blue Check Homes, a new service that purports to offer custom-built “verified” badges for San Francisco houses occupied by “an authentic public figure”. More than five hundred people have applied for the crests, which mimic the coveted checkmarks that Twitter applies to verified accounts. At least as many people have responded to the Blue Check service with vitriol.
Blue Check Homes. Image courtesy of Danielle Baskin. @djbaskin.
Baskin’s artistic practice is multifaceted. She produces oil paintings and sculptures and consumer products and companies. Many of her works are not exclusively or even primarily art. For instance, Dialup is a viable business with a marketable product: a “voice-chat app that connects friends or people you want to meet in recurring topic-based calls”. Even her most overt satires have a way of finding customers. For example, she’s sold fruit printed with logos as edible “eco-friendly swag” to corporations including Salesforce.
As Baskin suggests on her website, this ambiguity is inherent to her creative process. “I make viral art, companies, and delightfully weird events,” she writes. “Most began as jokes or art projects (and some remained that way). But many of them turned into businesses.” This fluidity is perhaps best understood in terms of her background in immersive theater, which occupies the twilight zone between make-believe and reality. “I like figuring out new ways to facilitate conversations through delightful and strange experiences,” she explains. “I'm excited about building stuff that makes people interact with the world differently.”
Not all immersive theater is a prank, nor is every prank a work of immersive theater. What makes Baskin’s internet pranks interesting in artistic terms is their way of activating society, prompting people to reflect and interact on the basis of semi-plausible alternate realities. Like a social media company for the telephone or even a manufacturer of edible corporate swag, they present scenarios that are adjacent to everyday experience in the interest of questioning common assumptions – and compel people to take her fabrications seriously because they might just be reality.
Branded Fruit. Image courtesy of Danielle Baskin. @djbaskin.
The proposition advanced by listing anti-homeless boulders on Craigslist as free stuff is relatively straightforward in its absurdity: Any sensible person would look at boulders strewn across a sidewalk as unwanted property like an abandoned sofa or dresser. Putting boulders on the sidewalk to block the presence of unwanted people – and treating fellow humans as unwanted – is freakish by comparison.
The significance of Blue Check Homes is more complex, because meaning is made through the ways people respond to her offering. Status symbols have been around for millennia. (Indeed, Baskin’s concept was inspired by the faux-aristocratic crests on Victorian houses in San Francisco, which a friend of hers flippantly referred to as “the blue check before Twitter”.) The internet has turned out to be the natural habitat of status symbols because symbols and status are both inherently virtual, and they have therefore proliferated online at an ever-accelerating pace. Porting a status symbol from the internet to the physical world is awkward, and accentuates all of the unnoticed weirdness of prized emblems such as blue checks. Baskin enriches this awkwardness by also porting all of Twitter’s rules for recognition of “authentic public figures”.
The push and pull between people who desperately want blue checks on their houses and those who find the putative crests deplorable sets up a valuable conversation about values and value in a city that is itself an emblem of privilege gone awry (and homeless people forced to vanish in order to make room for beneficiaries of a virtual economy). There is enough suppressed irony in San Francisco to make a satirist cry. Instead, Baskin has surfaced a latent desire in all its grotesqueness, sending her boulder-free city into self-directed therapy.
|
80c906d85b0b3fbc8b6b96b5a2788dff | https://www.forbes.com/sites/jonathonkeats/2021/02/26/an-epic-exhibit-of-historic-magazines-gives-a-new-reading-on-todays-troubling-social-media/ | An Epic Exhibit Of Historic Magazines - From The Saturday Evening Post To Mac World - Gives A New Reading On Today’s Troubled Social Media | An Epic Exhibit Of Historic Magazines - From The Saturday Evening Post To Mac World - Gives A New Reading On Today’s Troubled Social Media
Decades before he authored the dictionary for which he is now most famous, Noah Webster founded a periodical known as the American Magazine. Although the publication survived for scarcely a year, this “miscellaneous collection of original and other valuable essays” was intended to foster nothing less than the birth of a nation. “America must be as independent in literature as she is in Politics, as famous for arts as for arms,” Webster wrote to a friend in 1783, holding that writing was “the principal bulwark against the encroachments of civil and ecclesiastical tyrants”. Launched four years later, his periodical sought to build that bulwark in prose and verse.
For more than two centuries, American magazines have played the role that Webster envisioned, albeit not always in ways that he could have foreseen or would have been likely to condone. A remarkable exhibition at the Grolier Club in Manhattan – which can also be viewed online and more carefully examined in an accompanying catalogue – provides an impressively comprehensive view of this important fold of U.S. history.
The Masses. New York: The Masses Publishing Company. August 1917. Collection of Steven Lomazow, M.D. Image courtesy of The Grolier Club.
The fact that these magazines were all collected by one person makes the exhibition all the more remarkable. Over the past five decades, the neurologist Steven Lomazow has amassed more than eighty thousand issues of seven thousand different publications ranging from the General Magazine published by Benjamin Franklin in 1741 to more recent (and successful) gambits by Martha Stewart and Oprah Winfrey. Included in the collection are the first issues of legendary periodicals such as Time and Life and Playboy and Rolling Stone and Ms. The collection is equally committed to lesser-known domains, including the so-called little magazines that published the literary avant-garde in the early 20th century, and periodicals dedicated to abolition, prohibition, and other political causes.
There is ample entertainment value in looking at old magazines with the benefit of hindsight, which can register as awe or smugness depending on whether the editors were prescient or foolhardy. (In the former category is the September 1983 issue of Island, which was the first magazine to feature Madonna on its cover. In the latter is the January 15, 1929 issue of Forbes, which featured “A Forecast of Business and Finance” illustrated with icons of prosperity including a loaded moneybag.) Nostalgia is another enticement here, whether induced by a Saturday Evening Post cover illustrated by Norman Rockwell or the premier issue of Mac World with a photo of Steve Jobs leaning over his new products with the smirk of an appliance salesman. And then there is the aesthetic pleasure of looking at stunningly designed periodicals such as The Lark and Flair, not to mention the art published in Alfred Stieglitz’s legendary Camera Work.
The Saturday Evening Post. Philadelphia: Curtis Publishing Company. May 29, 1943. Collection of Steven Lomazow, M.D. Image courtesy of The Grolier Club. Brandon Rodkewitz / Collection of Steven Lomazow, M.D.
However the most enduring interest of the Grolier exhibition, and the greatest value of Lomazow’s collection, can be found through comparison of publications, and their relationship to the arc of American history. For instance, to see how 19th century agriculture magazines pioneered the inclusion of contributions from readers is to appreciate how farmers built community across vast distances. Or to observe the antithetical vernaculars of ‘50s literary and pulp magazines – and to look at them in relation to the eclecticism of general interest magazines a century earlier – is to witness a process of cultural balkanization that still has ramifications today.
“American magazines sustained both a centripetal movement toward a common center and a centrifugal movement toward many distinct, often intersecting, sometimes opposing communities,” observes UC Berkeley sociologist Heather A. Haveman in the introduction to the exhibition catalogue. Haveman’s essay focuses on the early years of American magazines, but Lomazow’s collection shows that her point is applicable to any period.
This dynamic between the centripetal and the centrifugal forces continues to play out in online periodicals, as well as the social media platforms that might be seen as accelerated modern permutations on reader-authored 19th century agricultural magazines. The vitality of this interplay validates Noah Webster’s vision. The viciousness of it puts Webster’s faith in doubt. To the extent that magazines made America, they have made America precarious.
But their enduring presence as physical artifacts invites rereading. And rereading will make us better writers of a future in which America might finally become as famous for arts as for arms.
|
4da0fe8efa1bfb8471bf33e493db439b | https://www.forbes.com/sites/jonathonkeats/2021/04/06/this-exquisite-exhibit-of-traditional-japanese-carpentry-can-teach-america-how-to-build-back-better/ | This Exquisite Exhibit Of Traditional Japanese Carpentry Can Teach America How To Build Back Better | This Exquisite Exhibit Of Traditional Japanese Carpentry Can Teach America How To Build Back Better
As a prayer for recovery from illness, the Japanese emperor Yōmei once vowed to erect a temple in honor of the Buddha. Although Yōmei did not live to see it built, the Hōryū Temple survives to this day, one of the oldest wooden structures in the world, sustaining more than 1,300 years of continuous worship, supported by 7th century Japanese craftsmanship.
The cultural significance of Hōryū-ji is widely recognized. The larger complex of Buddhist monuments to which it belongs has been inscribed as a World Heritage site, and is a prime destination for tourism. However appreciation of Hōryū-ji does not require travel to the Nara Prefecture of Japan. An arresting take on the extraordinary architecture of Hōryū and other traditional Japanese buildings is currently on view at the Japan Society in New York, where it’s optimally positioned to address a prayer for recovery that makes Yōmei’s vow seem modest: the Biden Administration’s $2 trillion bill to rebuild American infrastructure.
Wide-blade rip saw (maebiki-oga), angle type. Courtesy of Takenaka Carpentry Tools Museum.
The timeliness of the Japan Society exhibit may not be immediately apparent. Displaying antique Japanese saws and chisels, as well as examples of traditional wood joinery newly crafted using historical tools and techniques, When Practice Becomes Form offers a behind-the-scenes look at the tōryō – the master carpenter – and what the tōryō did to make structures such as Hōryū-ji possible.
The carpentry tools and joints are exquisitely beautiful, qualities all the more notable because the joints are typically hidden in Japanese buildings and the tools are carried away when the work is completed. The joints show the care of Japanese construction, ingeniously designed for material strength and future maintenance. The tools evoke the precision of workmanship in their specificity and their diversity. (A full set, handmade by the tōryō to fit the shape of his own body, might number 180 pieces.)
It’s easy to romanticize this vision of the past, and to overlook the fact that most Japanese carpentry today utilizes power tools and prefabricated metal fittings similar to the inventory at Home Depot. Neither the Hikimawashi-noko nor the Ōsakajyō-Ōtemon-Tsugite is likely to play a decisive role in the construction of future airports and bridges. The relevance of the Japan Society exhibition to American infrastructure plans is not so much literal as philosophical.
Lapped gooseneck mortise (koshikake-kamatsugi). Courtesy of Takenaka Carpentry Tools Museum.
The disintegration of American infrastructure is a consequence of many shortcomings, both political and economic, but one of the most perniciously pertinent is that Americans deem infrastructure to be expendable. Too many people believe that old building stock should be bulldozed to make way for new developments. As a result, too few structures are built to last, and too little effort is expended to maintain them. The enormous environmental impact of new construction makes this position literally and figuratively unsustainable. If the Biden administration is to be ecologically responsible while fulfilling its commitment to good jobs and social justice, the President will need to consider the full lifespan of new infrastructure: to make structures that are both enduring and adaptable.
The long-term approach exemplified by the Hōryū Temple – and other Japanese masterworks such as the 17th century Kintai Bridge in Yamaguchi Prefecture – can guide the values of the impending American infrastructure bonanza, even if the forms and materials are different. And the Japan Society exhibition can help to guide those values through careful attention to the traditional practice of Japanese carpentry.
What stands out especially is the role of the tōryō in all aspects of a project. From the design of a temple to the selection of trees from the forest to the physical construction of individual joints to the repair work needed after a storm, carpenters were masters of the entire process, and therefore capable of understanding the consequences of every decision. The specialized knowledge of modern materials science and industrial engineering may not allow for that level of hands-on engagement – let alone encourage personal fabrication of the tools of construction – but the narrowing of expertise and experience comes at a cost that must be considered before we start building. The architecture of Hōryū-ji may belong to the past, but its unsurpassed endurance should give us pause before we thrust modern construction techniques into the future.
|
dab0a0e4286837aea831cd4e8a4e17ab | https://www.forbes.com/sites/jonbittner/2011/12/07/do-you-vacation-like-a-capitalist-or-a-socialist/ | Should You Vacation Like A Capitalist Or A Socialist? | Should You Vacation Like A Capitalist Or A Socialist?
How Do We Share The Cost?
"Should we split the check?" This dinner table cliché is just one part of an age-old sharing conundrum: should group costs be divided evenly among all parties, or allocated based on actual usage?
On holiday vacations with friends or other families, we are frequently confronted with these anxiety-inducing decisions. Items like lodging and transportation are often paid in advance, and a single person will often purchase groceries or supplies for the whole group. To add to the confusion, people will occasionally lend each other money ("Don't worry, I'll cover you and you can pay me back").
It’s never pleasant to deal with these questions after the fact, so allow me to present some original research and a helpful tool that should take the stress out of settling the bills. While some cost-sharing choices remain a matter of personal taste, there is broad consensus on how to share the most significant vacation expenses.
I will detail my findings in a two-part series. This research is based on a detailed questionnaire answered by 105 users of Splitwise (a group expense tracker, of which I am a co-founder).
Some Expenses Are More Equal Than Others
Nearly everyone surveyed agrees that groceries, cleaning supplies, gas and rental cars should be divided and shared equally, while restaurants, takeout meals, and activity costs should be paid for individually (more on groceries in the next post).
The "socialist" items at the top of the chart are less expensive, and individual taste doesn’t play much of a role in what is purchased. The "capitalist" items lower on the list are more expensive and individuals have very different tastes and opinions about what should be purchased. This seems like a good general principle for categorizing group expenses.
Sharing alcohol is a potential source of conflict. People who don’t drink much will not appreciate being billed for a lot of alcohol, but no one wants to keep track of exactly what they drank over a multi-day vacation. Our survey shows it is best to discuss this ahead of time and set expectations. A nice compromise between usage-based and communal approaches might be for everybody to buy or bring a little more than what they expect to drink themselves, and then share all alcohol communally so as not to worry about it in the moment.
How To Share A Vacation House
Shared lodging costs (a beach house, a ski lodge, a motel, etc.) may be your single biggest expense. Some properties are rented by the day, while others are rented by the week, or by the month. What factors should be taken into account?
While other factors may play a minor role, over 85% of our survey respondents think that the length of stay should be factored into how a house should be split. However, what if the minimum rental period is a week, and you can only stay four days? Is it fair for you to pay less, since everyone had to agree to pay for the week? If you stay for four days while others stay for six, how do you calculate the price for each person?
We studied this in a separate question, and built a shared travel calculator that allows you to calculate the cost of lodging under any scenario.
The answer is that, typically, each person should pay in proportion to the number of nights they were there. This system is robust, because anytime someone stays in the shared accommodations for an additional night, the per-night price goes down for everyone. To calculate cost this way, we total how many people stayed over on each night, and create a grand total for the whole vacation. We then divide the total rental price by that sum to get the per-night price for each person (if this is confusing, go to the travel calculator and play around with it).
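For readers who prefer code to prose, here is a minimal Python sketch of that proportional method. The function name and the dollar figures are mine for illustration; this is not the travel calculator's actual implementation.

```python
def split_lodging(total_cost, nights_stayed):
    """Split a rental in proportion to nights stayed.

    nights_stayed maps each person to how many nights they slept over.
    """
    # Grand total of person-nights across the whole vacation
    person_nights = sum(nights_stayed.values())
    per_night = total_cost / person_nights  # price of one person-night
    return {person: round(per_night * n, 2)
            for person, n in nights_stayed.items()}

# Example: a $1,200 rental; Ana and Ben stay six nights, Cam stays four.
print(split_lodging(1200, {"Ana": 6, "Ben": 6, "Cam": 4}))
# {'Ana': 450.0, 'Ben': 450.0, 'Cam': 300.0}
```

Note the robustness described above: if Cam stays a fifth night, the pool grows from 16 to 17 person-nights and the per-night price drops from $75 to about $70.59 for everyone.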
The alternative is setting a fixed nightly rate, based on the total cost divided by the number of nights, and then divvying that number up based on how many people are there on a particular night. This method ends up making some nights more expensive than others, which works well for hotels and other places that are rented by the night. However, it is problematic for accommodations that are rented by the week. Divvying up each night separately punishes people for using the house during the off-peak period, since those people will be staying in a house that is bigger than they need. In short, it creates an incentive to avoid being in the house when you are the only one there, which is wasteful. Most of our survey respondents agreed.
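To see the difference, here is a sketch of that night-by-night alternative (again an illustrative toy, not our calculator's code), applied to a pattern where one hypothetical traveler arrives two nights before the others and has the house to himself.

```python
def split_by_night(total_cost, presence):
    """Charge a fixed nightly rate, divided among whoever is present each night.

    presence holds one list of occupants per night of the rental period.
    """
    nightly_rate = total_cost / len(presence)
    shares = {}
    for occupants in presence:
        for person in occupants:
            shares[person] = shares.get(person, 0) + nightly_rate / len(occupants)
    return {person: round(share, 2) for person, share in shares.items()}

# A $1,200, six-night rental; Cam arrives two nights before Ana and Ben.
presence = [["Cam"], ["Cam"],
            ["Ana", "Ben", "Cam"], ["Ana", "Ben", "Cam"],
            ["Ana", "Ben"], ["Ana", "Ben"]]
print(split_by_night(1200, presence))
# {'Cam': 533.33, 'Ana': 333.33, 'Ben': 333.33}
```

Under the proportional method above, this same occupancy pattern (12 person-nights) would cost each person $400; charging night-by-night shifts an extra $133 onto Cam for using the house when it was bigger than he needed.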
In some rare cases, sharing a vacation house equally might be the way to go. If you are sharing a house for a weekend and some people arrive Friday night while others arrive Saturday morning, there isn’t much of a difference in value for anyone. Or, if your group trip is over-subscribed, and someone decides they can't stay a full week after their RSVP prevented another person from taking that spot, they should pay for the length of time they originally committed to.
In the next post, we’ll cover how to fairly share grocery expenses and bedrooms while on vacation.
For an easy way to keep track of group travel expenses, check out Splitwise, a web and mobile app designed to help people keep track of their IOUs and shared expenses while on vacation.
|
d7448eba0e3c5df639dc342a39537281 | https://www.forbes.com/sites/jonbruner/2011/01/27/mathematical-proof-macbook-pro-is-useless/ | Mathematical Proof the MacBook Pro is Useless on an Airplane | Mathematical Proof the MacBook Pro is Useless on an Airplane
Proof that the 15" MacBook Pro won't fit on your economy tray table: taking into account a 31" seat pitch, 113-degree seat recline, and a screen tilted 10 degrees beyond vertical, you're left with just over 7 inches for your body.
When I set off for Bucharest on Monday to review a suitcase full of laptops, tablets, and other business travel essentials, I made sure to include the Apple 15" MacBook Pro that I know and love. Forbes gave it to me at the end of last summer, and I rely on it completely when I crack my knuckles and churn out code, infographics, and articles. It runs Flash CS5 and Eclipse with aplomb, and switching to it from the three-year-old brick I'd been using was like walking out of Beijing's smog and into the Alps.
I figured it was likely that I'd gravitate toward my MacBook if things got tough, but it turned out that the new laptops that Dell and HP lent me worked perfectly fine for everything I needed to do. It wasn't until I was sitting down for the final leg of my trip home that I pulled out the trusty MacBook and observed the first failure:
FAILURE 1: The MacBook Pro doesn't fit on any airline tray table that I can afford. The screen, when open, is tall enough that I have to push the keyboard into my stomach to fit it under the seat in front of me. Not a comfortable way to type, and I'm skinny from having had a stomach virus for the first two days of the trip.
To illustrate this problem, I've made the calculations you see above. Lufthansa's Boeing 747-400 economy seats have 31 inches of pitch. Subtract about 8 inches (a charitable guess) for seatback depth, and you've got 23 inches between the seatback in front of you and the cushion behind you. Lufthansa's economy seats also recline to 113 degrees, eating up depth that the laptop can occupy. Assuming you want to tilt your laptop's screen back 10 degrees beyond vertical, you're left with a little over 7 inches between the front edge of the MacBook and the back of your seat.
You should check my trigonometry, but that's pretty consistent with how it feels to use a 15" laptop in an economy seat.
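For anyone who wants to check it with code, here's a back-of-envelope reconstruction. The pitch, recline, and screen-tilt figures come from the article; the laptop and seat dimensions are my own guesses, so treat this as a rough sketch rather than the original calculation:
import math

pitch = 31.0          # seat pitch, inches (Lufthansa 747-400 economy)
seatback = 8.0        # the article's "charitable guess" at seatback depth
recline = 113.0       # seatback angle from horizontal, degrees
screen_tilt = 10.0    # screen tilted this far beyond vertical, degrees

base_depth = 9.8      # assumed depth of a 15" MacBook Pro's base, inches
lid_height = 9.8      # assumed height of the open lid, inches
top_height = 11.0     # assumed height of the seatback's top over the tray

# The seat in front leans (113 - 90) = 23 degrees toward you, so its top
# intrudes roughly top_height * sin(23 deg) into the space over the tray.
recline_intrusion = top_height * math.sin(math.radians(recline - 90))

# A lid tilted 10 degrees past vertical reaches lid_height * sin(10 deg)
# beyond its hinge, pushing the whole laptop that much closer to you.
lid_reach = lid_height * math.sin(math.radians(screen_tilt))

body_room = pitch - seatback - recline_intrusion - lid_reach - base_depth
print("room left for your body: %.1f inches" % body_room)  # ~7.2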
And I was able to check it despite:
FAILURE 2: It wouldn't turn on. When I hit the power button, it churned, the screen lit up briefly, and then it went black again. More hits to the power button got the same result. The battery was fully charged, as the exterior charge lights indicated, and there was no visible damage to the machine. Sure, this is just one trial and can't represent the overall reliability of the product, but my MacBook Pro has now failed on 100% of my business trips to Romania, and that's too much failure when critical business is at hand [even if it did end up switching on more or less normally in my office this morning].
Resiliency and footprint aside, the MacBook remains a member of a different class of computer from the HP and Dell machines that I reviewed, as its price suggests. The extra thousand dollars that you'd pay for my MacBook buys a noticeably better screen, a multitouch trackpad that's vastly superior to every other offering, and an outstanding operating system that works for novices and Unix-hacking geeks alike. But a computer is less than worthless if it stops working 5,000 miles from home, and it's also not great for business trips if you can't use it during a flight. I'll still travel with my MacBook (and will still love and appreciate it once I get over its failure), but will turn to other options when I need to get stuff done in the air.
Ed. note: I wrote this as I flew from Frankfurt to New York on my way back from Bucharest to review laptops, tablets, smartphones and other gizmos. More articles in the series:
Part 1: getting sick and leaving for Bucharest
Part 2: passing through Brussels and flying on TAROM Romanian Air Transport with the Dell Vostro
Part 3: Bucharest turns out to be delightful; neither bears nor dogs attack me; and time with the Dell Streak
Part 4: I miss my flight from Bucharest and rebook in two minutes with the Motorola Droid
Part 5: I get inspected in Bucharest and get to know the HP EliteBook
Interlude: Frankfurt International, small-scale European exoticism, and my feeble attempts at German
Part 6: occupying myself with the iPad plus ZaggMate
Part 7: noise-cancelling showdown
|
fe8dc4b994a0185f26e8279dd959d8f1 | https://www.forbes.com/sites/jonbruner/2011/02/03/how-to-build-your-own-political-contribution-database/ | How To Build Your Own Political Contribution Database | How To Build Your Own Political Contribution Database
Image via Wikipedia
Last year when we redesigned the Forbes 400, we included among many nifty widgets one that tells you how much a billionaire has given to each political party. It lets you see that Bill Gates (for instance) has given $40,300 to Republicans since 2007 and $44,400 to Democrats.
Compiling this data was trickier than I expected going into it, and how we compiled it is the subject of a talk I'm giving this afternoon at O'Reilly's Strata Conference. You'll have to either come to Santa Clara, Calif. or watch me online to get the full explanation, but you don't have to watch the talk to try your hand at the FEC's data.
I'm releasing a set of scripts that will build a copy of the FEC's files in an easy-to-use MySQL database on any compatible computer. Run them to download the FEC's flat text files, parse them, and insert them into a database that you can build things on top of. These are similar to scripts I wrote last summer to accomplish the first step in our FEC data-cleansing process; they leave out most of the stuff that I'm talking about at Strata today (those were written by my talented colleagues led by Dmitri Slavinsky and Louie Torres). And, of course, they come with no warranty whatsoever.
The scripts are meant for a semi-technical audience; they're written to be run on a Unix/Linux setup (including Apple OSX) with MySQL and Python installed. If you've got those set up, you can have your own well-organized political-contribution database in under an hour. Here's how to run them.
Prerequisites
You'll need to download and install MySQL Community Server 5.5; Python 2.6 (2.7 is not yet supported); and the MySQL-Python library.
Downloading and Running
Next, create the database that you'll populate (call it "FEC"), and create a user called "FECUser" with password "FECPass" to use the database.
mysql -u myusername -p
CREATE DATABASE FEC;
GRANT ALL PRIVILEGES ON FEC.* TO "FECUser"@"localhost" IDENTIFIED BY "FECPass";
FLUSH PRIVILEGES;
EXIT;
Next, download the tarball and unzip it to whatever directory you want. Here's how you would put it in your home directory.
cd ~
curl http://www.forbes.com/jb/forbes-fec.tar > forbes-fec.tar
tar -xf forbes-fec.tar
Go into the new forbes-fec directory and run setup.py. It'll check that it can make a connection to your MySQL setup using the credentials above, create the tables, and prompt you to choose what year you want to start downloading FEC data. The data comes in two-year packages, and files that include presidential election years run about half a gigabyte. Midterm years are smaller.
cd forbes-fec
python setup.py
Once the setup file has run successfully, you're ready to populate your database by running buildAll.py. I recommend running it in the background so that you can check on progress by looking at the log file it creates.
python buildAll.py &
The log is called process_log; it looks a little cryptic, but will at least reassure you that things are happening. And when it's working with the really big individual tables, it tries to guess when it'll be finished with each file.
nano process_log
Once the process is finished, you're ready to start exploring your data and building apps on top of it. You'll have a classic relational database that stores donation records in the individual table, committee details in the committee table, and candidate details in the candidate table. The filerID field in the individual table refers to the commID field in the committee table, and the candID field in the committee table refers to candID in the candidate table.
To retrieve the first 50 contributions, with donor names, employers, occupations, donation amounts, and receiving committees, you could use this:
SELECT nameFirst, nameLast, employer, occupation, amount, commName
FROM individual, committee WHERE filerID = commID LIMIT 50;
To get names of committees paired to the candidates they support, you could use this:
SELECT nameFirst, nameLast, party1, commName
FROM candidate, committee
WHERE candidate.candID = committee.candID;
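If you'd rather poke at the data from Python than from the MySQL prompt, here's a minimal sketch using the same MySQL-Python library and credentials from the setup above. The joins follow the key relationships just described; the query itself is only an example:
import MySQLdb  # the MySQL-Python library from the prerequisites

# Connect with the credentials created during setup.
db = MySQLdb.connect(host="localhost", user="FECUser",
                     passwd="FECPass", db="FEC")
cursor = db.cursor()

# Example: total itemized contributions routed to each candidate, via
# individual.filerID -> committee.commID and committee.candID -> candidate.
cursor.execute("""
    SELECT candidate.nameFirst, candidate.nameLast, SUM(individual.amount)
    FROM individual
    JOIN committee ON individual.filerID = committee.commID
    JOIN candidate ON committee.candID = candidate.candID
    GROUP BY candidate.candID
    ORDER BY SUM(individual.amount) DESC
    LIMIT 10
""")
for first, last, total in cursor.fetchall():
    print("%s %s: $%s" % (first, last, total))

db.close()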
These scripts only accomplish the first step of a long multi-step process that we put this data through to publish it on Forbes.com; I'm essentially just saving you the trouble of parsing the FEC's flat files and organizing them in a way that lets you do cool things with them. And, again, they come with no warranty whatsoever.
I'd like to hear from you if you do something neat with these--please let me know what you're working on!
|
8380116a9fd3e397e92a13f814dcd5b1 | https://www.forbes.com/sites/jonbruner/2011/05/03/trump-beats-palin-on-facebook/ | Trump beats Palin on Facebook | Trump beats Palin on Facebook
Donald Trump overtook Barack Obama and Sarah Palin in Facebook activity last week. Image by AFP via @daylife
Donald Trump’s birther antics in the closing days of April won him the biggest week on Facebook that any politician (or would-be politician) has ever registered. Between April 24 and April 30, 2011, his fans posted 71,622 comments on his wall. That’s more than twice as many comments as Sarah Palin registered in 2009 during her most active week, and 25% more than President Obama’s Facebook page took in during its most active week at the beginning of April.
The jump highlights the speed with which Trump has made himself into a political character with a substantial following: in one week, his page saw nearly seven times the 10,724 comments that it logged in all of 2010. And his Facebook page remained enormously active through last weekend, even after Obama settled the birth certificate controversy by releasing his long-form birth certificate on April 27.
Trump's rise comes just as Barack Obama's campaign has dusted off his Facebook page, which was dormant for half of 2009 and most of 2010. Judging by recent commenting levels, Obama's page, which still holds the record for most activity in one day for a politician (20,335 comments on November 5, 2008), looks like it will again be a formidable tool for Obama's 2012 campaign.
I came across these figures through a data-gathering exercise I’ve been working on lately. Inspired by a Slate article that discovered Palin’s staff deleting 1 in 10 wall posts, I’ve collected every comment and “like” from the Facebook pages of a handful of prominent political voices to see what I could find. I’ve now got a database with 907,000 comments from Barack Obama’s Facebook wall; 515,000 from Sarah Palin’s; 223,000 from Donald Trump’s; and a few thousand from other, lesser, Facebook presences. I’ll be parsing them in detail in the coming weeks.
Trump’s quick rise aside, Palin’s page is perhaps the most interesting of the bunch. Her Facebook wall has been an extraordinary political spectacle: with close to 3 million fans and more than half a million posts, it has served as an echo chamber for Palin’s every utterance and a demonstration of her drawing power. Palin’s page has become the former governor’s most regular conduit for official statements and has captivated and entertained the country’s ranks of political watchers.
But over the last two years, activity on Palin’s page, which permits anyone who “likes” her to post on her wall, has stagnated. Six of her wall’s ten busiest days were in 2009, in the months after she resigned from the governorship of Alaska a little over halfway through her term. (Palin’s most active day, though, was January 12, 2011, when she posted a video response to the Gabrielle Giffords shooting.) And although her Facebook page is more active now than it was in 2010, it still averages just over half the monthly postings that it did in the second half of 2009.
The number of people who have “liked” Sarah Palin’s Facebook page—2,931,786 as of this writing—is an oft-quoted figure on Palin’s wall, held up as evidence that Palin’s movement is huge and growing unstoppably. But the commenters who echo her statements make up a vastly smaller group of avid fans. Of Palin’s 3 million “likers,” only 175,716—or 6%—have ever commented on her wall. And just 10% of those who have commented account for 61% of the postings on Palin’s wall, according to figures drawn from my database.
And these fans are avid indeed. In a typical active month, Palin’s most frequent poster comments on her wall between 100 and 400 times. In September 2009, Palin fan Tristan Jameson left 903 comments on her wall. But these periods of manic posting don’t last long: in just 8 of the 33 months since August 2008 has the top poster one month made it into the top 10 posters the next month.
Palin’s biggest fan, if you will, is Greg Maziarz of Meadville, Pa., who has posted 2,064 comments on Palin’s wall and has “liked” 4,816 items on her page. But, like many of Palin’s most frequent users, his interaction with Palin’s page didn’t last long. Maziarz started posting on her page on March 15, 2009. He left 25 comments that month, ramped up to 560 comments in October 2009, and then apparently tired of his correspondence with Palin and her followers, leaving his last comment on Palin’s wall on January 30, 2010. Maziarz did not respond to two requests for comment.
In addition to grabbing comments and user names through Facebook’s API, I also used a bit of screen scraping to find the cities where most of Palin’s commenters live. The result is the series of maps to the right (click on the image to enlarge it). The first thing that strikes me is that Palin’s fan base is pretty urban (as is the population as a whole). For a candidate who plays up her hunting and fishing skills, she’s got a lot of city dwellers and suburbanites among her followers. She’s particularly popular in the corridor that stretches from Winston-Salem, N.C., through Spartanburg, S.C., and into Atlanta; in Florida; and in Ohio, which could serve her well in an election.
|
fbed7385b1449a3dd0cb48001b1cf659 | https://www.forbes.com/sites/jonbruner/2011/07/20/whats-on-justin-biebers-facebook-wall-taking-requests/ | What's on Justin Bieber's Facebook Wall? [Graph] [Taking Requests] | What's on Justin Bieber's Facebook Wall? [Graph] [Taking Requests]
As part of my research for my magazine article on Facebook swallowing up the Web's data, I built a script that scanned the walls of some prominent Facebook users: Barack Obama, Sarah Palin, Donald Trump, Justin Bieber, Lady Gaga, and a handful of others. I was interested in seeing how Facebook activity correlated to real-world events and also in looking for thematic trends: Tea Partiers talking about taxes on Palin's page as their movement rises, birthers on Trump's.
It turns out that the noise on big-time Facebook walls almost entirely drowns out trends like those, but posting volume does spike when wall owners make it into the news. The chart below compares Facebook wall posting volume for Sarah Palin, Donald Trump and Barack Obama. Donald Trump's line is an excellent illustration of modern celebrity: he went from practically no traffic at the beginning of the year to having comment volume that rivaled Barack Obama's in a matter of a couple of months. And once he finished saying what he wanted to say, he returned to the background. Click the image for a larger version.
I also pulled out a few representative stats for these profiles. It turns out that 8.9% of posts on Justin Bieber's wall say some variation of "I love you" (0.5% say some variation of "I hate you"). Only 2.1% of the posts on Obama's wall and 3.5% on Palin's wall express the same sentiment. Want to see the results for another phrase search? Leave your suggestion in the comments below and I'll run your search this afternoon. Here's the breakdown:
                                 Barack Obama   Sarah Palin   Lady Gaga   Justin Bieber   Coca-Cola
Wall posts since June 1, 2011    190,831        54,113        560,453     1,265,928       72,255
Posts that include "love you"    2.1%           3.5%          8.7%        8.9%            0.7%
Posts that include "hate you"    0.5%           0.6%          0.2%        0.5%            0%
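For the curious, tallies like these boil down to a regular-expression count over the collected posts. Here's a minimal sketch; the pattern is only my guess at what counts as a "variation," and the sample posts are invented:
import re

posts = [
    "i loooove you justin!!!",
    "I LOVE YOU SO MUCH",
    "first!!! follow me back",
    "i hate you",
]

# Case-insensitive and tolerant of stretched letters ("looove") and "u".
love = re.compile(r"l+o+v+e+\s+(y+o+u+|u+)", re.IGNORECASE)

hits = sum(1 for post in posts if love.search(post))
print("%.1f%% say some variation of 'love you'" % (100.0 * hits / len(posts)))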
So public figures on Facebook find their walls overwhelmed by noise and, as I've written before, by people with short attention spans. How useful is Facebook, then, as a public forum?
|
dd48254d745d088879d8dbffd6e7087b | https://www.forbes.com/sites/jonbruner/2011/08/02/zeiss-a-tool-bereft-of-weakness/ | Zeiss: A Tool Bereft of Weakness | Zeiss: A Tool Bereft of Weakness
Zeiss Victory FL 8 x 42 T* binoculars come in green and black and are said to be bereft of weakness.
If you've always wanted to own one of Carl Zeiss's famously well-engineered bits of optics, but you're not in the market for a planetarium projector or electron microscope, worry not! The German Optik-Hersteller has a delightful line of binoculars that I've been trying out for a few weeks in anticipation of my trip to Arizona, and it's accessible to mortals--or, at least, well-off mortals.
I've been intrigued ever since I stumbled across a blurb that appears repeatedly on sites that sell these binoculars (emphasis is mine):
Zeiss' 8x42 Victory T* FL binocular, the classic birder's configuration, is available in two colors and is the epitome of optical excellence. From any measurement of greatness it succeeds admirably, a tool bereft of weakness which becomes so intuitive with use as to functionally disappear from the user's mindset. This should be the goal of any binocular--to be the invisible conduit to great viewing of the subject. The optics within the 8x42 Victory T* FL binocular were created with elements of abnormal partial dispersion, a chunk of gobbledygook easily translating to the observer as the theoretical limit of razor sharp imagery free of color shift or chromatic aberration, the dreaded bane of bargain binoculars.
"A tool bereft of weakness?" That's the most Teutonic product claim I've ever seen; I can practically hear the company song wafting across the Autobahn, punctuated by the rhythmic pounding of huge glass beer mugs on long wooden tables. (To be fair, that phrase appears nowhere on Zeiss's own website--only on lots of retail sites.) To help us come to a fuller appreciation of this marvelous marketing copy, I've enlisted the help of Forbes art director Kai Hecker, an authentic German, in recording a reading. Listen below.
How can I resist trying out some high-quality German optics when they're promoted this way? I had Zeiss send me two samples to try out on my upcoming camping trip (these will be returned to Zeiss when my trip is over): a pair of full-size Victory FL 8 x 42 T* binoculars and a pair of much smaller Victory Compact 10 x 25 T* binoculars.
So, is Zeiss' Victory binocular actually bereft of weakness? I'd say it's pretty close, if not actually so dramatic as to be bereft. Images through the full-size 8 x 42 binoculars are crisp and bright, with a pleasingly wide angle of view. Colors are very accurate, and focus remains true across the field of view. Everyone who picked these up from my desk, stepped into the hallway, and put them to his eyes immediately said something along the lines of "whoa." If you've only ever used inexpensive binoculars, you'll be truly startled when you look through optics like Zeiss's.
These binoculars are made of fiberglass and armored with thick, knobby rubber, unlike Leica's Duovid binoculars, which are made of aluminum and coated with smooth rubber. Both are undoubtedly durable, but there's something about the Zeiss binoculars that makes them look like they'd perform well as part of some sort of grimy hunting tableau: a wet day spent tromping through the woods in pursuit of whatever it is we tromp through the woods for. The Leica Duovids come off by comparison as something that a secret agent in a BMW would use to gaze across a canyon at an evil lair.
The Zeiss binoculars, owing to their fiberglass construction, are also lighter than the Leicas, which means they'll be easier on the neck of anyone who spends a day tromping through the woods. But with that lightness comes the occasional sensation of plastic: a little bit of extra give here and there where lightweight components move against each other. Whether that's the right tradeoff to make is a matter of personal preference, but the Zeiss binoculars also have price on their side: the Victory FL 8x42 binoculars retail for $1,950, against Leica's Ultravid 8x42 binoculars for $2,100 (Leica's Duovid, which I reviewed, cost somewhat more because they can magnify at either 8x or 12x).
Zeiss Victory Compact 10 x 25 binoculars are genial and pleasant to use.
Zeiss also sent along the little brother to these full-size binoculars: the Victory Compact 10 x 25 T* are a genial pair of binoculars--they're very light and well-made, and as compact binoculars go, they're a pleasure to use. At $700, they're much less expensive than their larger counterparts, but also much more expensive than the typical pair of pocket binoculars. Here, though, the difference between budget binoculars and premium binoculars is particularly apparent: although not as striking as the full-size Victories, the image that these produce is dramatically better than what you'd see through a $50 pair of compact binoculars--usually, these are nearly unusable.
The Victory Compact binoculars are light and well balanced, and are made in Hungary rather than Germany, where the full-size binoculars are made. Like their big brothers, these have an occasional plastic feel to some moving parts, but the reward for tolerating that is the pleasure of carrying around very light binoculars.
|
32af9bc1426360276642e93ca287e0bd | https://www.forbes.com/sites/jonbruner/2011/09/14/virgin-atlantic-will-join-an-alliance-soon-says-richard-branson/ | Virgin Atlantic Will Join an Alliance Soon, Says Richard Branson | Virgin Atlantic Will Join an Alliance Soon, Says Richard Branson
Richard Branson in New York on Tuesday night, unveiling the Bulova Accutron Sir Richard Branson Limited Edition watch. Proceeds from Branson's endorsement will go to his Virgin Unite foundation.
Virgin Atlantic Airways, the last big trans-Atlantic carrier to remain independent of the three airline alliances that define the industry, will join one in the near future, Virgin founder Richard Branson tells Forbes. "There is likely to be an announcement of an alliance coming very soon," he says.
Virgin has stayed out of the alliance system until now, preferring to write bilateral codesharing agreements with other airlines. Branson declined to say which alliance Virgin would join, but it is unlikely to be OneWorld, the group that includes Virgin arch-competitor British Airways. That leaves Star Alliance, anchored by United Continental Holdings and Lufthansa, or SkyTeam, anchored by Delta Air Lines and Air France-KLM, as possibilities. Of those two, Virgin Atlantic's operations are more closely aligned with Star Alliance: the airline is 49% owned by Star Alliance carrier Singapore Airlines, has expressed interest in buying BMI from Lufthansa, and includes nine Star Alliance members among its partners.
In late 2010, Virgin Atlantic hired Deutsche Bank to advise it on merger possibilities. Sky News reported that Delta had hired Goldman Sachs to help it woo Virgin, but that attempt, along with efforts by some other airlines, appears to have fizzled. Yesterday Virgin Atlantic confirmed that it was interested in merging with BMI, another airline with a hub at Heathrow.
A Virgin Atlantic spokeswoman declined to comment on the airline's alliance talks, writing in a statement that "Deutsche Bank is looking at all our options, which includes the world of alliances. We are one of the very last independent carriers with an unrivaled position in the industry which is why we are carrying out this strategic review."
Virgin Atlantic was one of four airlines allowed to fly between London's Heathrow airport and the United States until the EU-US Open Skies agreement opened those routes to competition in 2008. Its market position has been further challenged by a new round of consolidation in the airline industry--particularly a 2010 agreement between American Airlines, British Airways and Iberia that coordinates those carriers' trans-Atlantic service. Membership in an alliance could feed traffic into Virgin's network and give it a boost with business travelers who try to keep their frequent-flier earnings consolidated within an alliance to enjoy perks like free upgrades to first class and access to departure lounges.
Branson spoke to Forbes on Tuesday night in New York, where he unveiled a new Bulova Accutron watch of his own design. Proceeds from Branson's Bulova endorsement will go to his foundation, Virgin Unite.
|
8260786a4c4e939cb460164e145ec529 | https://www.forbes.com/sites/jonbruner/2011/09/27/billionaires-hedge-their-bets-on-politicians-infographic/ | Billionaires Hedge Their Bets on Politicians [Infographic] | Billionaires Hedge Their Bets on Politicians [Infographic]
Robert Kraft, owner of the New England Patriots, gave $25,000 to Barack Obama and $25,000 to John McCain in 2008. Photo: Getty Images.
The Forbes 400 list includes plenty of prominent ideologues like David Koch and George Soros who use their wealth to change the direction of political discourse by, for instance, underwriting MoveOn.org or the Tea Party movement. But for evidence of money's role in national politics, take a look at the non-ideologues on our list: billionaires who throw cash at both parties in an effort to win influence or advance their interests in Washington.
129 of the billionaires on our list have given to at least one of six partisan committees that funnel cash into election efforts: the Democratic and Republican national committees, congressional committees and senatorial committees. Their donations to these groups since 2005 total $8.6 million, of which $4.2 million has gone to the Republican groups and $4.4 million has gone to the Democratic groups.
But 18 members of the Forbes 400 have given to both parties' committees. That list includes Donald Trump, who has made donations to all four partisan congressional committees: $92,400 in total to the Democratic Congressional Campaign Committee; the National Republican Congressional Committee; the Democratic Senatorial Campaign Committee and the National Republican Senatorial Committee since 2005.
A donor who honestly thinks a candidate is better equipped to serve the country than his opponent would presumably only support that candidate. But even at the individual race level, we find contributors who give money to both sides--perhaps to ensure access to the eventual winner, whoever he might be, or to pander to friends on each candidate's fundraising committee.
194 members of the Forbes 400 donated to either John McCain or Barack Obama, giving $2.8 million to McCain and $1.3 million to Obama. But 29 billionaires donated to both McCain and Obama. Some of them, like Robert Kraft, Thomas Pritzker and Marilyn Carlson Nelson, gave exactly the same amount of money to both campaigns (Kraft doled out $25,000 to each side). Others left any explanation for puzzling differentials out of their filings: Leon Cooperman bested his own $2,100 contribution to McCain with a $2,300 check to Obama; David Rockefeller gave $19,600 to McCain and $18,850 to Obama, effectively making a net $750 contribution to McCain. Despite finding McCain worthy of $42,300, Sam Zell still thought it worthwhile to give $2,300 to Obama; Amos Hostetter made a similar calculation in reverse, giving $33,100 to Obama and $2,300 to McCain.
My graphic above illustrates some of this aisle-crossing. Included are the ten biggest donors among the Forbes 400; lines between them and political committees indicate donations they've made since 2005, and circle size corresponds to total donations. I've computed partisanship for each donor and each committee by analyzing donation patterns; circles to the left on my partisanship scale represent liberal donors and committees; circles to the right represent conservative donors and committees. These heavyweight donors generally stick within their ideological groups, but even they routinely cross party lines to donate to a partisan fund or two that might help secure a phone call or raise congressional awareness of a pet issue.
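The piece doesn't spell out the partisanship math, but one plausible version scores each donor as the dollar-weighted average of the leanings of the committees he gives to. Here's a sketch under that assumption, with -1 as fully Democratic and +1 as fully Republican:
# Hypothetical leanings for the six partisan committees named above.
COMMITTEE_LEAN = {"DCCC": -1.0, "DSCC": -1.0, "DNC": -1.0,
                  "NRCC": +1.0, "NRSC": +1.0, "RNC": +1.0}

def partisanship(donations):
    """donations: list of (committee, dollars) pairs."""
    total = sum(amount for _, amount in donations)
    return sum(COMMITTEE_LEAN[c] * amount for c, amount in donations) / total

# A donor who hedges evenly across the aisle lands at the center of the
# scale; a lopsided giver drifts toward one end.
print(partisanship([("DCCC", 25000), ("NRCC", 25000)]))  # 0.0
print(partisanship([("NRSC", 30000), ("DSCC", 10000)]))  # 0.5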
These donations amount to revealed preference of sorts: why make political donations that essentially cancel each other out if you're not going to see some return on those donations?
My colleague Clare O'Connor and I assembled a couple of data pages on political donations from the Forbes 400 for the October 10 issue. The issue is terrific, and very much worth running down to your local newsstand for (it's on the shelves now!), but you can peruse the pages on political contributions below.
Follow me on Twitter: @JonBruner
|
c959a8563b050f86c357d63e6fcf82e4 | https://www.forbes.com/sites/jonbruner/2012/01/25/how-los-angeles-keeps-traffic-moving-through-4114-stoplights/ | 4,114 Stoplights in Los Angeles and the Intricate Network that Keeps Traffic Moving | 4,114 Stoplights in Los Angeles and the Intricate Network that Keeps Traffic Moving
This is an extended version of an article that appears in the February 13, 2012 issue of Forbes Magazine.
A wry smile comes across Edward Yu’s face as he gingerly threads a city-owned Prius through a maze of cones in front of Los Angeles’ Nokia Theater. Traffic has been diverted so that workers can install a carpet for the next day’s People’s Choice Awards. “They put in a red carpet every day in this city,” says Yu. A bicycle zips by on the right, cars switch lanes to avoid the construction, and a passing light rail train blocks a line of drivers hoping to make a left turn. “Everyone wants a piece of the streets,” he says.
Yu is a soft-spoken engineer with great power: He sets the timing for all of L.A.’s stoplights. His department has to take it all in: bikes, trains, big events and, of course, lots and lots of cars. Los Angeles has one of the nation’s worst reputations for automobile congestion, but that’s a simplistic way of looking at things. Its freeways are still the most congested in the nation, but L.A. has 36 times as many miles of surface streets as it does freeways. Those streets, ranging from narrow roads winding around the Hollywood Hills to ten-lane boulevards that cut through canyons of office towers, are heavily traveled—the intersection of Sepulveda Boulevard and Venice Boulevard routinely sees 79,000 cars per day, more than many expressways—but rarely gridlocked.
This is thanks to a citywide data-gathering system that’s the largest of its type, run by Yu and his team of 35 engineers and 20 operators. The system—due to be complete early next year—is becoming ever more complicated as a city built for cars rededicates its streets to buses, trains, bicyclists and pedestrians.
Every second, 18,000 magnetic sensors embedded in Los Angeles’ roadways send traffic speed and congestion levels to a control room in a former emergency bunker four stories beneath a City Hall annex in downtown Los Angeles. A computer evaluates traffic-light timings at each of 4,114 intersections and, by the next second, sends out minuscule adjustments to keep cars moving. An operator at an elevated desk in the center of the room can summon up a real-time diagram of almost any intersection in the city.
A large digital leaderboard mounted on a wall shows intersections with unusually high congestion. If an intersection is under observation by one of the city’s 400 live cameras (installed at the most troublesome spots), the operator can pull up video to see why things have gotten so bad. “We figured out long ago that an engineer can’t scan 18,000 detectors continuously to see if a flashing red light is a problem,” says Verej Janoyan, bureau chief in charge of the design and operation of the city’s intersections. “This way, we’ve got one engineer overseeing 4,100 intersections.”
The genius of the system—which the city calls Automated Traffic Surveillance & Control, or ATSAC—is that it’s both automated and adaptive. As congestion builds on one street relative to another, it adjusts traffic-light cycles to give more green time to the congested lanes. At the same time ATSAC builds a rich database of historical traffic statistics used to adjust the timing of signals and to tweak intersection configurations across the system. ATSAC is smart enough to avoid overreacting to a momentary crush of cars. “Our system is designed around patterns,” says Yu. “We need to have lots of data to evaluate so that we don’t make changes that eventually worsen traffic.”
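As a toy illustration of that adaptive idea (emphatically not ATSAC's actual algorithm), consider splitting a cycle's green time in proportion to smoothed demand, where the smoothing keeps a momentary crush from whipsawing the signal:
def smooth(history, sample, alpha=0.1):
    # Exponential smoothing: mostly established pattern, a little fresh data.
    return (1 - alpha) * history + alpha * sample

def split_green(cycle_seconds, demand, min_green=10.0):
    # Allocate green time in proportion to smoothed demand per approach.
    total = sum(demand.values())
    return {approach: max(min_green, cycle_seconds * count / total)
            for approach, count in demand.items()}

demand = {"north-south": 40.0, "east-west": 20.0}      # smoothed cars/cycle
demand["east-west"] = smooth(demand["east-west"], 90)  # a sudden surge
print(split_green(90, demand))
# {'north-south': ~53.7, 'east-west': ~36.3}: the surge nudges, not flips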
Los Angeles was the first city in America to control its traffic lights centrally when it rewired a handful of intersections near the Coliseum in 1984 in anticipation of a crush of Olympics traffic. The system proved itself quickly: Repeated studies since the 1990s have found that travel times fall by 15% near connected signals and motorists make 20% to 30% fewer stops, massive improvements for a cost of about $150,000 per intersection. Since then the city has built the system outward along fiber-optic lines.
The city has investigated wireless approaches but to no avail. Hughes Aircraft tried for two years to get a wireless system to work. “They told us, ‘This is no problem; we can talk to a robot on the moon,’” says Janoyan. “But this turned out to be harder.” In one case a motion sensor over a grocery store’s automatic door disrupted the signal.
The network has also had to absorb the 187 miles of bike lanes and 18 miles of bus-only lanes, which get special priority, added since 2010. ATSAC has had to learn when buses are running late so it can extend green lights for them, and to make the buses wait in traffic with everyone else when they're ahead of schedule. New sensors in bike lanes can flip signals to green when a cyclist approaches, although they still have trouble detecting the most expensive carbon-fiber bikes, which have no magnetic signature.
Walk lights across the city have become more sophisticated, with extended cycles around schools at dismissal time and stadiums after games to accommodate shambling football fans. Special cycles in predominantly Jewish neighborhoods cater to Sabbath rules by operating without requiring a button-push between sundown on Friday and sundown on Saturday. “We’ve got sunset times for the whole year programmed into the system,” says Yu. Los Angeles even has equestrian walk buttons mounted high on poles near riding trails in Sunland-Tujunga.
Operators collaborate with event producers, too. Signals go into special cycles as Lakers games approach, and ATSAC guides limos to the Academy Awards. Yu says it was partly at the city’s insistence that the Oscars were moved from Monday night to Sunday night, so they would not take place in the middle of rush hour.
The city developed the software in-house, but it’s flexible enough that Los Angeles has sold it to other California cities, including Long Beach and Gilroy, and it is pitching it to Washington, D.C. A check for $75,000 buys a license for the complete package of prediction and control software—a small fraction of what an outside consultant might cost. Thanks to data, L.A. has turned its liability into an asset.
|
edd1c630d76e9af04cf4229f55eb9b9b | https://www.forbes.com/sites/jonbruner/2012/02/08/hey-wanna-buy-some-influence/ | Hey! Wanna Buy Some Influence? | Hey! Wanna Buy Some Influence?
The amount of money involved in politics—already staggering well before Citizens United v. Federal Election Commission opened the floodgates—is about to surge even higher, making our already-undemocratic campaign system even less accessible. Until 2010, the cost to become politically influential was artificially constrained at what you might call a below-market price: a $2,500 contribution to a campaign for the primary phase of an election and another $2,500 for the general phase (limits that rise every other year to track inflation). That meant getting a phone call returned by a campaign cost $5,000—a big number that many Americans wouldn't be able to approach casually, but one that's nevertheless accessible to, say, the owner of a car dealership or a particularly passionate activist with a small following.
Now that Citizens United has swept away those constraints, the market in political influence is clearing, with dramatic implications for Washington's accessibility. Daniel Fisher and I write about it in the upcoming issue of Forbes Magazine:
Put simply: Political influence is currently a bargain. “The most undervalued asset in America,” as one influential billionaire told FORBES. [A quarter-million dollars, for which a donor wouldn't get a meeting with the president of Harvard] gets a presidential sitting because campaign laws still cap direct campaign contributions in federal elections at $2,500 per candidate, magnifying the influence of those who are sidestepping the traditional party structure entirely. Until recently those who wanted to curry influence on an industrial scale were stuck funding a byzantine collection of lobbying groups, think tanks and “grassroots” organizations. Witness the Koch brothers—Rube Goldberg would likely be needed to construct the organization chart of their various political efforts. But such inefficiency is no longer necessary, thanks to the Supreme Court’s 2010 decision in Citizens United v. Federal Election Commission, which cemented the idea of political spending as free speech, eliminating a longstanding ban on corporate political spending and effectively ending limits on how much individuals can give to PACs as well. [Continue reading...]
In case you doubt the shift in scale that Citizens United represents, here's my graphic from the magazine that illustrates the order-of-magnitude change in money available to political campaigns now that giving limits have been essentially removed.
With the removal of contribution limits will come greater stratification between campaigns and even greater advantages for rich campaigns against their poorer counterparts. Restore Our Future, the super PAC supporting Mitt Romney, raised $30.2 million in 2011. Newt Gingrich's super PAC, Winning Our Future, raised just $2.1 million (that figure doesn't include contributions totaling $11 million from Sheldon Adelson's family, which came in January 2012). Romney's official campaign committee raised $57.1 million during the same period and Gingrich's raised $12.7 million. Romney still has a big advantage in the realm constrained by conventional giving limits, but only by a factor of 4.5; in his unconstrained super PAC receipts, his advantage over Gingrich is close to 15 to 1.
Romney's super PAC has used its fundraising advantage effectively. In early December, it was the first super PAC to start buying attack ads: when everyone else was shooting each other with BB guns, Romney pulled out a bazooka and fired it directly at Gingrich, who stumbled for two weeks before his super PAC found its own bazooka and fired back.
As a result of Citizens United, politics is now firmly out of reach for most Americans. We asked political strategist Douglas Schoen, Bill Clinton's chief White House pollster, how a wealthy person newly empowered by the ruling might get heard in Washington.
$35,000 "is the minimum ante to be on the radar," says Schoen. That's the legal maximum contribution to an election effort—$2,500 each for the primary and general phases to the campaign committee, plus $30,800 to the national party committee. "You get invited to events and somebody will return your call. It used to be that that had a bigger impact, but you’re not going to be ignored."
$100,000 to $150,000 "makes an impact somewhere," says Schoen. Approach a super PAC with a single issue and a contribution on that scale, and your concerns will make it to party leadership. "You can make a determinative impact on a particular issue."
$2 million to $3 million is enough to "have a huge impact on the party committees. You can influence a couple of races. You can be a serious player and be seen as one of the more important people in the process," says Schoen.
$5 million to $10 million makes you an influential figure in five or ten Senate races. "You will be taken seriously in Washington by every player. In traditional philanthropic giving, that’s a valuable contribution, but the president of the university doesn’t have time for you. The President of the United States does have time for you."
$70 million makes you "as important as anyone in America. You set the agenda. You control the action."
|
05cb37a40984f7945d9f058c3aacfed4 | https://www.forbes.com/sites/jonbruner/2012/02/22/is-sheldon-adelson-funding-newt-gingrichs-attack-ads-graph/ | Is Sheldon Adelson Funding Newt Gingrich's Attack Ads? [Graph] | Is Sheldon Adelson Funding Newt Gingrich's Attack Ads? [Graph]
Casino mogul Sheldon Adelson—the man who singlehandedly saved Newt Gingrich's tanking candidacy with a well-timed contribution to Gingrich's super PAC, Winning Our Future—tells my colleague Steven Bertoni that he insists his money be used for positive campaigning. Writes Steve:
Whomever he supports, Adelson claims he won’t pay for mudslinging. “I don’t believe in negative campaigning. I believe in saying that my opponents are very good people and I’m confident a lot of them would do a good job, but I would do a better job, and here’s why,” says Adelson. “Money is fungible, but you can’t take my money out of the total money you have and use it for negative campaigning.” Of course, that stance ignores the fact that an avalanche of negative ads against Romney won Gingrich South Carolina, and that Adelson’s $5 million injection was the dominant source of his funding. “That’s what everybody says, but that doesn’t mean it’s true,” the billionaire says, waving his hands dismissively. “Most of what’s been written about me in this is untrue.”
Adelson's claim is dubious. He and his wife have given $10 million to an organization that produces both attack and support ads, and it's meaningless to try to trace a dollar from one donor to a particular expenditure. Even if Winning Our Future were earmarking his contributions for support ads, his money still enables the super PAC to spend other donations on attack ads.
In any case, a look at the numbers suggests that Adelson's money has been used on at least some negative campaigning. Winning Our Future took in $13.1 million through Jan. 31, including $10 million from Sheldon and Miriam Adelson. Winning Our Future's filings show that the organization spent $3.9 million on ads opposing other candidates during the same period. Assuming the super PAC spent every penny of every other contribution on those ads, about $800,000 in negative spending must have come from Adelson's contributions. Including the $1 million that Adelson's daughters and son-in-law have given to Winning Our Future, the super PAC has spent $1.8 million of Adelson money on communications that attack Gingrich's opponents.
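The arithmetic behind those floors is worth making explicit; here's the same calculation as a short script, using the filing figures cited above (in millions of dollars):
total_raised = 13.1     # Winning Our Future receipts through Jan. 31
negative_spend = 3.9    # spent on ads opposing other candidates

for label, adelson in [("Sheldon and Miriam Adelson", 10.0),
                       ("the wider Adelson family", 11.0)]:
    other_money = total_raised - adelson
    # Even if every non-Adelson dollar went to attack ads, the rest of
    # the negative spending had to come out of Adelson contributions.
    floor = negative_spend - other_money
    print("%s: at least $%.1fM of attack ads" % (label, floor))
# -> $0.8M and $1.8M respectively, matching the figures above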
The catch is that the filings aren't very precise. An advertisement that mostly supports a particular candidate might include a jab at an adversary and still be reported as a supporting expenditure. And a cautious accountant might classify as "opposing" an ad that's mostly positive but mentions an unflattering element of an opponent's record.
Despite its huge outlays on attack ads, Winning Our Future is still much less negative than Mitt Romney's super PAC, Restore Our Future, and somewhat less negative than the average super PAC. Through Feb. 22, Winning Our Future has spent $7.4 million on support ads and $3.9 million on opposition ads: that makes its spending 35% negative. Restore Our Future's spending has been 96% negative, and spending by all super PACs since the beginning of 2011 has been 54% negative. Here's a look at that breakdown:
|
6f5aff695b6cfaa8a48a89eecebe8c75 | https://www.forbes.com/sites/jonbruner/2012/03/22/forbes-interactive-media-map/ | The Interactive Media Map: America's Most Influential News Outlets | The Interactive Media Map: America's Most Influential News Outlets
Click the image above to visit the Forbes interactive media map
Oregonians love NPR; Wisconsinites adore the Onion; the Huffington Post is widely read in Appalachia. These are a few of the favorites that the data team at Bitly uncovered when they parsed data from millions of clicks on their shortened links. We’ve turned their data into an interactive map and an illustration in the April 9, 2012 issue of Forbes.
Visit the interactive media map >>
Bitly's dataset, wrangled by data scientists Hilary Mason and Anna Smith, consists of every click on every Bitly link on the Web. Bitly makes its data available publicly—just add '+' to the end of any Bitly link to see how many clicks it's gotten. For Bitly's collaboration with Forbes, Smith and Mason looked for news sources and individual articles that were unusually popular in certain states compared to national averages. The interactive map starts by showing which news source dominates in each state by this measure: the Washington Post in Virginia and Maryland, the Chicago Tribune in Illinois, and so on. Click on a source to see a heat map that shows where its links are particularly popular, then click on a headline to see where that story did well.
Bitly's research reveals some obvious interest in local issues—a Forbes story about Wisconsin's pensions was widely read in Wisconsin and an Onion article about President Obama was popular in Washington, D.C.—and it confirms some dearly-held stereotypes about media consumption: NPR is popular in Oregon and Minnesota; Fox News is popular in Mississippi.
But the same data points to a sharp division between fully national news publishers that are widely read across the country, like the New York Times, and the largest regional papers—some of which, like the Washington Post, have national aspirations that they've had trouble realizing. The latter remain sharply contained to their traditional markets.
Our friends at Bitly write:
When you share or click a link on a social network like Facebook or Twitter, you’re most likely using a Bitly link. Bitly provides the infrastructure for social sharing across networks and, in the middle, collects a huge amount of data on how real people share ideas. Given the right tools, and by asking the right questions, this mass of clicks can be transformed into useful knowledge about the social web, helping us understand how people use the Internet. For Forbes, Bitly has investigated how people consume news by looking at how people in different states differ in their preference for news sites. Through the clicks of millions of people in each state visiting different news sources, Bitly is able to uncover relationships between geography and media.
This map will become a regular feature: we'll update it at the beginning of every month to include the previous month's hits.
Visit the interactive media map >>
|
c78828596322fd4e00234d78a158fd44 | https://www.forbes.com/sites/jonentine/2012/03/02/new-york-times-reversal-cornell-university-research-undermines-hysteria-contention-that-shale-gas-is-dirty/ | New York Times Reversal: Cornell University Research Undermines Hysteria Contention that Shale Gas is "Dirty" | New York Times Reversal: Cornell University Research Undermines Hysteria Contention that Shale Gas is "Dirty"
Jon Entine is senior research fellow at the Center for Health & Risk Communication at STATS/George Mason University.
There are new twists in the ever-entertaining faux debate over the dangers of shale gas. The New York Times, which turned obscure Cornell University marine ecologist Robert Howarth into an anti-fracking rock star in its questionable spring series on shale gas, and got hammered for it by its own public editor—I'll take some of the credit—is finally getting on the science bandwagon.
Last April, the Times ran two articles in a week heavily promoting Howarth’s bizarre claim that shale gas generates more greenhouse gas emissions than the production and use of coal. It would be difficult to overstate the influence of this paper, which ricocheted through the media echo chamber and was even debated in the British parliament and the European Union.
What the Times didn’t report then, and until now has almost systematically ignored, is that almost every independent researcher — at the Environmental Defense Fund, the Natural Resources Defense Council, the Council on Foreign Relations, the Energy Department and numerous independent university teams, including a Carnegie Mellon study partly financed by the Sierra Club — has slammed Howarth’s conclusions. Within the field, Howarth is considered an activist, not an independent scientist. But you’d never know that reading the Times’ fracking coverage, with independent lefty columnist Joe Nocera as the notable, and refreshing, exception.
Maybe a little fresh air is finally leaking into the Times’ insular chambers. Dot Earth’s Andrew Revkin cites a “fresh rebuttal” of Howarth’s much-maligned study from the latest researcher to diss Howarth’s shaky science: a colleague at Cornell, Earth and Atmospheric Sciences professor Lawrence Cathles, who is an expert in this field, unlike Howarth.
Cathles convincingly demolishes Howarth’s four major claims, two of which we’ll highlight here:
Howarth et al. claimed that shale gas wells are virtual methane sieves. But as Cathles shows, Howarth appears to have deliberately used 2007 data, a century ago by shale gas technology standards. He’s off by 10-20 times—at least.
Howarth also claimed that emissions during well completions are far greater than for other gas wells. Among other things, Howarth used decades-old data from the Soviet Union to make this bogus case.
Cathles’ conclusion is critical but unremarkable in that it reflects the conclusions of almost every major researcher in the field, except the favorite of the Times and hard-left advocacy magazines such as Mother Jones: “The data clearly shows that substituting natural gas for coal will have a substantial greenhouse benefit under almost any set of reasonable assumptions.”
“[T]he notion that gas holds no advantage over coal, in weighing the climate implications of energy choices, is fading fast (to my reading of the science and that of many others),” Revkin wrote. In fact, the farcical “shale gas is dirtier than coal” claim was never scientifically serious enough to fade; it is and was a fiction of activists, including Howarth, whose goal is to undermine a balanced scientific debate on shale gas and climate change.
Although Dot Earth writer Revkin may understand the nuances of the shale gas debate, there are no signs the reporters on the print edition of the paper are opening their minds. The questions for the mothership, Mother Jones and other publications whose reporting so far appears to echo hard-left talking points:
Will you report this return to science in your pages or continue to bury it on the web?
When will we see the investigative piece airing out the dirty linen that led to Howarth's rigged study, including the funding stream from the Park Foundation, which yearly gives millions of dollars to media organizations and community groups targeted specifically to undermine America's goal of reaching a balanced energy future?
Tip to the Times: follow the science.
Jon Entine, senior research fellow, Center for Health & Risk Communication, George Mason University, is founder of ESGMediaMetrics, a green consultancy.
|
9be4b3102dbd7a1b8dc056e798c7c651 | https://www.forbes.com/sites/jonentine/2012/08/20/fda-spygate-new-revelations-challenge-the-new-york-times-investigation-of-agency-enemies-list-raise-more-questions-about-the-governments-most-dysfunctional-agency/ | FDA SpyGate -- New Revelations Challenge The New York Times Investigation of Agency "Enemies List," Raise More Questions About the 'Government's Most Dysfunctional Agency' | FDA SpyGate -- New Revelations Challenge The New York Times Investigation of Agency "Enemies List," Raise More Questions About the 'Government's Most Dysfunctional Agency'
According to Jon Entine, after a series of stumbles and scandals, the Food and Drug Administration’s ability to oversee the most cutting-edge sectors of the medical industry, medical devices and genetic screening tests, is under increasing scrutiny.
The Food and Drug Administration and its Center for Devices and Radiological Health (CDRH) are reeling. In mid-July, the New York Times accused the FDA of creating a massive e-mail surveillance program designed to net junior scientists and other critics who complained the agency was too quick to approve medical devices that the employees maintained posed unacceptable health risks.
The Times’ story generated national headlines with its sympathetic portrayal of harassed scientists risking their careers to protect the public interest. But new revelations suggest the Times slanted the story by leaving out critical context. It appears that the dissident employees are involved in what could be seen as an ambulance-chasing shakedown scheme to profit from their allegations. In December 2009, while these “aggrieved” reviewers were publicly lobbying the FDA and Congress to crack down on scanning devices, they had secretly filed a whistleblower lawsuit against these very same manufacturers that, if successful, could make them multimillionaires.
The case has exposed the underbelly of what some critics believe is one of the more dysfunctional regulatory agencies in the federal government.
The July New York Times story was a follow up to a 2010 Times report, based on leaked confidential documents supplied by the junior staffers, accusing the agency’s senior officials of ‘brushing aside’ the potential dangers of mammography and colonoscopy devices in a rush to approve a CT scanning device made by General Electric.
The dispute began less publicly in 2006, when a consultant reviewer, Robert Smith, a controversial radiologist formerly at hospitals operated by Yale and Cornell, and several center employees raised concerns that the agency was overlooking safety concerns in approving substandard medical imaging devices for mammograms and colonoscopies. The agency reviewed and rejected their concerns in 2006 and 2008.
Angered when their recommendations were overruled, and just months after quietly filing their whistleblower suit, the dissident group took the matter into their own hands. In early 2010, they began leaking confidential documents to various media outlets, most notably the Times, which came out with its first exposé in March of that year under the headline “Scientists Say F.D.A. Ignored Radiation Warnings”.
FDA employees who review confidential trade secrets submitted by drug or device makers are prohibited from discussing any data before a regulatory decision has been made. But some junior scientists, convinced that current laws make it near impossible to block products they deem ineffective or dangerous, took the matter into their own hands.
Later that year, responding to appeals by Dr. Smith and by Congressmen whom the former reviewer had lobbied, the FDA, then under the supervision of the Obama Administration, again investigated the employees’ claims and found them wanting.
Concluding that the workers had violated agency confidentiality rules outlined in the Federal Food, Drug and Cosmetic Act, the FDA’s Office of the Inspector General (OIG) recommended that the agency take “administrative action” against the leakers for talking to the Times. Four employees were eventually dismissed. A fifth scientist was suspended, rehired on appeal, and then left the agency last month.
FDA, culture of dysfunction?
The ‘national paper of record’ and many activist groups are now portraying the self-proclaimed whistleblowers as beleaguered heroes and victims of an agency “enemies list” designed to muzzle public minded employees. They hint at a corrosive and corrupt culture inside the FDA that is captured by big business and limits the agency from encouraging dissent.
But now, as the backstory is coming into sharper focus, it appears that culture is far more nuanced. The FDA appears not so much closed as split between various factions, with a minority of junior scientists determined to push an ultra-aggressive regulation strategy even after higher level science reviews consider but reject their input. This grueling internal battle appears to have left the CDRH, the FDA office empowered to oversee innovative medical technology devices, including genetic tests, in disarray.
The latest Times piece coincided with a revised lawsuit filed last month by the dismissed employees pressing claims that the agency had violated their rights to free speech. The story claims that the FDA put spyware in place over the opposition of the inspector general at the Department of Health and Human Services, who, the article claimed, had found that there was no evidence of a crime, noting that “matters of public safety” can legally be released to the news media.
That appears to be a simplistic characterization of what looks like an internal war inside the agency. The Washington Post has since reported that at some point, the Office of General Counsel (OGC) at FDA became involved and authorized the computer surveillance.
The revelations have rocked the FDA, which has struggled in its ambition to extend its regulatory oversight. It’s been under almost constant fire since FDA Commissioner Margaret Hamburg pledged three years ago to aggressively expand the FDA’s manufacturing standards. Most recently, in June, the House Oversight Committee released an in-depth report detailing a dramatic drop in the production of generic injectable drugs. Since the campaign began, production of generic injectables has declined by 30 percent, contributing to a massive shortage.
Hamburg and Jeffrey Shuren, the director of the Center for Devices and Radiological Health, have also stumbled in a ham-handed attempt, announced two years ago, to expand regulatory control over the genetic testing industry, which likewise falls under CDRH. That decision was a dramatic reversal for the agency, which until then had refrained from regulating laboratory-developed tests (LDTs).
LDTs encompass thousands of tests manufactured by many hundreds of laboratories, and that does not even include direct-to-consumer (DTC) identity and ancestry tests. According to venture capitalists, medical technology startups are going out of business as they await FDA guidance that may take years to be announced, if it is ever instituted.
The latest FDA black eye highlights an internal battle inside an agency struggling to balance a commitment to science with an intensely politicized decision-making process. As Steve Usdin reports in a superb deconstruction of the scandal in BioCentury, the origin of the dispute seems to reflect internal conflicts that often occur in agencies with many employees holding differing scientific and ideological agendas.
Science is rarely black and white, Usdin notes, and there will always be sharp disagreements among staffers on any decision. The real issue, Usdin suggests, is the tension between whistleblowers who can play a critical role when there is genuine malfeasance and the possibility of junior scientists, motivated by ideology, money or both, trying to circumvent the legitimate responsibility of senior management to make final decisions.
“If junior staff can always go around senior management when there is disagreement, and successfully use politicians and the media to have decisions overturned, then junior staff are running the organization by default,” he writes.
The controversial Dr. Smith connection
The Times accounts ignored the key and controversial role played by Dr. Smith, the former CDRH reviewer, who has a long and notorious reputation for filing high-profile medical device whistleblower suits. From 2006 to 2010, Dr. Smith was the point man for the dissident employees. In 2009, FDA officials took his public-safety concerns to the HHS Office of Inspector General. After two investigations, that office said that senior managers had followed FDA protocol in fairly evaluating the devices.
FDA officials subsequently brought in Kelly, Anderson & Associates Inc. to independently look further into Dr. Smith’s allegations. In a shock to the complaining scientists, the consulting firm found in December 2010 that it was Dr. Smith who “appeared to meet the criteria of creating a hostile work environment,” according to a copy of its report.
While those investigations were unfolding, in December 2009, the five employees and Dr. Smith filed a secret lawsuit against 15 medical device manufacturers, claiming violations of the False Claims Act. According to documentation uncovered by the Wall Street Journal, Dr. Smith is well known as a “serial whistleblower.” Twice previously, while working at hospitals overseen by Yale and Cornell, he had filed federal whistleblower complaints almost identical to the current one. Smith would have landed a multi-million-dollar payday had they been successful. Both cases were dismissed, although Smith did collect hundreds of thousands of dollars from both universities to settle claims that he had been unfairly retaliated against for standing up for public safety.
The current suit seeks fines of up to $11,000 for each time the government was billed for a procedure using one of the devices. Given that the government pays for millions of CAD mammograms per year (just one of the devices in question), the plaintiffs, if victorious, could collect as much as 25% of what could be a multi-billion-dollar settlement.
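Some quick, back-of-the-envelope arithmetic shows why the stakes are enormous (the procedure volume below is an illustrative assumption, not a figure from the complaint):

2,000,000 billed procedures per year × $11,000 per false claim ≈ $22 billion in theoretical annual exposure

Even a settlement at a small fraction of that exposure, say $2 billion, would yield the plaintiffs as much as $500 million under a 25% relator share.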
Most legal experts view the current suit as far-fetched, although, as with the prior Smith suits, such high-profile cases sometimes end in settlements of what amount to nuisance claims because of the high cost of litigation. The size of their potential legal jackpot may well have been a motivation for the self-proclaimed whistleblowers, who had filed the suit before they were canned, and then upped the ante by breaking government confidentiality restrictions and leaking their dissident conclusions to the Times in 2010.
The FDA is sharply constrained from discussing its side of the story while the former employees are having a field day, aided by a compliant Times, activist groups and even some Congressmen. None of this rich detail appeared in either of the two “investigatory” reports by the Times.
Almost incomprehensibly, from a journalistic ethics perspective, the Times bit on the story two years ago without offering any of the personal, political and financial context and conflicts, and then doubled down this summer. Rather than brave dissidents who defied a corrupt agency to protect the public—the Times’ overly simplistic narrative, reinforced again last month—the rebels may be no more than run-of-the-mill ambulance chasers.
The coming genetic screening regulation scandal?
Regardless, the fiasco is one more black eye for the beleaguered FDA. As Steven Grossman of HPS Group has written in his influential blog, FDAMatters.com: “There is no way that FDA can look good if it is seen as approving devices that should not be on the market, squelching internal scientific disagreements, pursuing vendettas against its employees, or interfering with the prerogatives of Congress and the Office of Special Counsel. In the face of all of this—the allegations and FDA unwillingness or inability to respond fully–it is hard not to worry about the agency. It is an institution that badly needs public and congressional support to do its job, especially when its responsibilities are growing and its budget isn’t.”
It’s not yet clear what effect this public blowup will have on the CDRH’s plan to regulate laboratory genetic tests. Until the summer of 2010, the FDA considered LDTs low risk. It had not enforced applicable in vitro diagnostic regulations because the tests had been developed, validated and offered within single laboratories, targeted rare diagnoses or conditions, and had been used by physicians treating their own patients within an institution.
But the Wild West of personal genomics, brought on by the introduction of inexpensive saliva-based tests, changed the calculus. The FDA pointed to instances of exaggerated claims, poor controls and fraud, although it’s not clear whether these problems represented marginal businesses that pop up when a new industry suddenly emerges or were endemic. The FDA assumed the worst and urged far tighter regulations, much to the worry of the wider science community, which believed an emboldened FDA would squelch innovation.
The abrupt change in direction by the FDA sparked an uproar, as scientists inside and outside the industry voiced concerns that the agency was too much of a blunderbuss to oversee such a “disruptive,” innovation-driven and fast-changing industry. Confirming their skepticism, Director Shuren testified at a summer 2010 Congressional hearing that direct-to-consumer (DTC) genetic testing companies did no independent research. That was flat-out wrong, and was widely seen as a government attempt to support what he called “traditional manufacturers” at the expense of cutting-edge newcomers, a stance that would cripple startups.
Over the course of the next year, questions were raised about whether the FDA even had the right to regulate LDTs. The agency heard from petitioners both for and against its Draft Guidance.
Roche, which has a well-established in vitro diagnostics business and faces the threat that entrepreneurial companies could erode its market share, went so far as to redline the proposed guidance, which, if adopted, would present even higher barriers to entry for startups: a brazen attempt at promoting “regulatory capture.”
Meanwhile, criticism poured in from more independent stakeholders. A joint letter from the coalition of the American Medical Association, American College of Medical Genetics, American Congress of Obstetricians and Gynecologists, American Society for Reproductive Medicine and the College of American Pathologists scored the FDA, noting that the proposed rules could create a stranglehold on genetic testing by established companies that “could create a barrier to patient access to physician-recommended LDTs.” The costs of these unnecessary regulations, they wrote, are “not insignificant, and would likely be passed to the patient, creating yet another barrier to access.”
The current model, the coalition concluded, works well and doesn’t warrant an extensive federal redo. “By requiring all companion diagnostics to be FDA-approved/cleared,” the joint petitioners wrote, “FDA is effectively ignoring a large number of tests that may perform better than those it clears/approves, and is stifling the innovation that drives development of those tests.”
A host of other organizations were as adamant in their rebuke of the FDA’s recommendations. “The restrictive implementation of national standards as proposed in the guidance document will have significant untoward consequences for transplant centers and patients, both programmatic and financial, and are likely to limit access to life-saving procedures,” wrote a coalition of transplant groups in a joint letter left unsigned because the authors feared retribution.
The Association of Public Health Laboratories, which includes the Centers for Disease Control and Prevention, starkly warned that if the FDA guidelines were strictly enforced, public health laboratories would lose their ability to identify and provide laboratory-based surveillance for many common diseases such as measles, pertussis, West Nile virus, TB and various viruses.
Stung by the intense negative reaction, the agency has pretty much hidden in the weeds since then and never actually flexed its regulatory muscle. Now, in the wake of the surveillance scandal, no one is sure in what direction the staggering agency will go.
Regulatory cliff?
There are some signs that the FDA, along with other federal agencies, may be holding off on near term initiatives, postponing several multi-billion dollar regulations until after the November election. Republicans believe and some Democrats hope that there will be a dramatic expansion in regulation next year if President Obama wins re-election.
What should the FDA do in the medical devices area, which could be the centerpiece for medical innovation for decades to come? Many people in the field believe the government would be best served by regulating with a velvet glove.
“Many consumers are going to want to know this information, and you don’t need a hospital to obtain it,” said Dr. Eric Lander, president and director of the Broad Institute, a genomic research center affiliated with Harvard University and the Massachusetts Institute of Technology.
Some long-time skeptics of personal genomic testing, such as James Evans, professor of genetics and medicine at the University of North Carolina, who had previously categorized DTC tests as “relatively useless” and “entertainment,” have changed their tune. “I think we’ve now entered an era where these direct-to-consumer offerings are beginning to have real medical relevance,” he said.
Concerns abound about whether the FDA, obsessed by its own dysfunctional decision making process, and pushed by activists (and sloppy reporting by influential media, such as the Times) will overplay its hand. It’s not at all clear whether the agency fully grasps the dimensions of the DNA revolution.
Single Nucleotide Polymorphism (SNP) chip results are now commodities. With analytic software proliferating over the next few years, the market will probably begin to regulate itself. Services like Promethease, a tool that builds a report from your genomic information, will become common enough to provide people with openly available interpretations. Other startups, like openSNP, let anyone who has taken a DTC test publish the results and share them with others. The government may simply not be equipped to keep up with new developments in this field.
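To illustrate just how commoditized this data has become, here is a minimal Python sketch of reading a DTC raw-genotype export. It assumes the tab-separated format popularized by the major testing services (comment lines starting with ‘#’, then columns for rsid, chromosome, position and genotype); the file name is a placeholder, and the marker looked up (rs4988235, associated with lactase persistence) is used purely as an example.

import csv

def load_genotypes(path):
    """Parse a raw-genotype export: '#' comment lines, then
    tab-separated rows of rsid, chromosome, position, genotype."""
    genotypes = {}
    with open(path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if not row or row[0].startswith("#"):
                continue  # skip the comment header
            rsid, chromosome, position, genotype = row[:4]
            genotypes[rsid] = genotype  # keep only marker -> genotype
    return genotypes

genotypes = load_genotypes("raw_genotypes.txt")  # hypothetical export file
print(len(genotypes), "markers read")
print("rs4988235:", genotypes.get("rs4988235", "not typed"))

A consumer with no laboratory access can run a script like this against a file downloaded from a testing service, which is precisely the dynamic that makes after-the-fact restrictions so hard to enforce.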
When viewed across the broad collection of things that our government oversees, it’s difficult to make a case that regulating direct-to-consumer genomic services or imposing new restrictions on LDTs is worth the time and expense and political heat. The concerns seem overheated; the results will not kill you. Far more potentially dangerous information can be Googled or searched on WebMD. And no regulation could stop anyone from looking overseas, to less restrictive shores, if they want to avail themselves of a LDT or DTC device or test.
It may yet take the November election to see in what direction the staggering FDA will head.
Jon Entine, founder of the Genetic Literacy Project, is Senior Fellow at the Center for Health & Risk Communication and the Statistical Assessment Service (STATS) at George Mason University. He has advised non-profit and private companies in the genetics industry, has held stock in such companies and is member of Health and Human Services Secretary Kathleen Sebelius’ Carrier Screening Task Force.
|
eecdf9775fa037e9efc6322e4a2ab69e | https://www.forbes.com/sites/jonentine/2012/09/13/the-politics-of-obesity-here-comes-the-ngo-media-class-action-bar-complex/ | The Politics of Obesity: Here Comes the NGO -- Media -- Class Action Bar Complex | The Politics of Obesity: Here Comes the NGO -- Media -- Class Action Bar Complex
Maybe we should blame Lucy. Evolution may have planted the genetic seeds of how we became high-calorie junkies. Now politics is trying to undo what nature has wrought.
Three million years ago, Lucy—the partial skeleton of a young woman who has come to iconically represent our distant ancestor Australopithecus afarensis—roamed the fertile plains of East Africa. She survived mostly on fruits and seeds. With a brain roughly the size of a chimpanzee’s, Lucy was dimwitted and didn’t need the extra calories demanded by advanced cogitation.
Flash forward to 2012. Our brains are more than three times as large. According to what’s known as the Expensive Tissue Hypothesis, large brains are high-energy consumers. Brain size and diet are closely correlated. Consistent with an adaptation to a high-quality diet, modern humans also evolved relatively small gastrointestinal tracts. The smarter we became, the more calories we needed. Humans with a genetic knack for storing fat would have had a Darwinian advantage.
In other words, as Elizabeth Kolbert quipped in her 2009 New Yorker essay on the growing obesity problem, “just as it is natural for gorillas to love leaves, it is natural for people to love funnel cakes.”
Modern humans consume all kinds of calorie-rich foods, and in great quantities. From a hard-wired perspective, the brain does not distinguish between Twinkies and a fat-laced steak. The human body, wrote Michael Power and Jay Schulkin in The Evolution of Obesity, is “mismatched” to what’s available in today’s grocery stores. “We evolved on the savannahs of Africa. We now live in Candyland.”
Yes, we have a fat problem. The United States Centers for Disease Control and Prevention estimates that 34% of adult Americans and almost 20% of pre-adolescents are overweight. But that’s hardly an American phenomenon. Studies show that the English, Greeks, Norwegians, Poles and other Western populations are almost as rotund, although their obesity levels are slightly lower. According to the World Health Organization, more than 1 billion people in the world are considered overweight, about 14 percent of the global population.
So, how should we address our compulsion to consume? We could educate ourselves about the need to harness our programmed desire to eat Kentucky Fried Chicken, drink Coke and splurge on funnel cake. The catchphrase here is “energy balance,” which states, simply, that ‘energy in’ should equal ‘energy out’. Education could be tied to self-regulation, in which food manufacturers are asked to voluntarily reduce children’s exposure to certain foods, with the commitments independently monitored. There are already signs that that strategy is working.
We could legislate. In New York City, for example, the Board of Health has adopted Mayor Michael Bloomberg’s proposal to ban supersized sodas and sugary drinks, which has raised the hackles of left- and right-leaning libertarians everywhere. And school districts across the country are rushing to ban ‘junk’ food and drinks on their premises.
We could tax. A growing number of European countries, including Denmark, Hungary, Finland and France have imposed taxes on what they consider unhealthy foods, from butter to cupcakes to soda. But not all foods high in fat or carbohydrates are unhealthy, which challenges the inclination to impose blanket taxes. Even indulgent foods eaten in moderation—think Ben & Jerry’s ice cream, Dr. Pepper and late night nachos—are perfectly reasonable choices in a varied diet.
Food as drugs
Or, as is beginning to unfold in tort-happy America, we could just sue. In the past few months alone, more than a dozen lawyers who previously hit the class action jackpot by suing tobacco companies have turned their sights on global food producers, restaurants and even grocery store chains in hopes of yet another mega payday. They’ve filed more than 25 cases against international food companies, including ConAgra, PepsiCo, Heinz, General Mills and even yogurt manufacturers.
To the ostriches in the food industry, beware: You are in the crosshairs of the NGO-Media-Class Action Bar Complex. What does that mean for your company and public health responsibilities?
The growing food technology sector, which encompasses processors as varied as packaged salad makers, fresh frozen seafood distributors, cereal producers, ice cream manufacturers and snack and soda companies, fancies itself as on the cutting edge of science. For example, the website for the Ohio-based Center for Innovative Food Technologies brags about its “expert network of food scientists [who] offer a full range of food safety services to food processors through microbiological consulting and testing, food safety auditing, and food safety and quality training.”
One problem: While food technologists see themselves as innovators on behalf of the human palate, non-governmental organizations (NGOs) and much of the media are reflexively anti-technology. To most journalists and self-designated public interest groups, the food industry isn’t an innovator but the enemy. They glibly caricature the processed food sector as Big Food, and that’s not meant as a compliment. Think “Big Tobacco.”
With the publication of The End of Overeating, David Kessler, a former commissioner of the US Food and Drug Administration during the Clinton Administration, has emerged as a guru of sorts for the anti-Big Food movement. He goes so far as to characterize food manufacturers as drug pushers. He invokes the popular industry term “eatertainment,” used derisively to mock what he says is the industry’s desire to create ‘designer foods’ made from processed, highly caloric combinations.
Kessler invented his own term—“conditioned hypereating”—to describe the narcotic-like effect he claims ‘fashion foods’ have on ‘innocent’ and ‘helpless’ victims: food consumers. He likens overeating to “compulsive gambling and substance abuse” and pushes a beguiling but superficial and ultimately silly argument as to how to confront Big Food: “substitute healthier rewarding food.” So, why should I give up my beloved ice cream soda?
Supersized litigation
The real import of Kessler’s bestseller was to re-energize the coalition of anti-processed-food advocacy media, NGOs and class action lawyers. The first serious public relations and legal challenge to the food industry dates to the late 1970s and extended into the 1990s. Plaintiffs blamed processed foods for their obesity, claiming that fast foods were inherently unhealthy and even dangerous. For the most part, the suits didn’t gain much traction. Judges almost always dismissed them, concluding that most reasonable people would not confuse burger joints with natural food restaurants, and therefore could not be blamed for their own poor health.
A second wave of activism, which began in the 1980s and targeted alleged false advertising, culminated with Pelman v McDonald's, a class action filed by New York parents on behalf of their teenage daughters. The plaintiffs claimed they were helpless in the face of McDonald’s slick advertising, which disguised the poor nutrition of burgers, fries and Cokes, turning them from healthy eaters into fast food “heavy users”. Essentially, plaintiffs argued that because the fat content of food was not conspicuous—even after companies displayed caloric content—they were misled.
It was a frivolous argument, flunking the causation test, and even the liberal media and comedians made hash of such suits. But the case dragged on, which played into the hands of the ambulance chasers. It made it through to the US Court of Appeals before it was thrown out. But McDonald’s ultimately settled when faced with the likelihood of an appeal and tens of millions of dollars in further litigation expenses.
A third wave of litigation, beginning in the early 2000s, focused on consumer fraud laws. Suits claimed food manufacturers and restaurants misstated the fat or calorie content of processed foods. For example, cereal makers were targeted for allegedly misrepresenting the amount of nutrients or for marketing low-sugar versions of children's cereals as a healthy breakfast alternative, although their ‘new and improved’ cereals contained nearly as many calories as the originals.
The discourse took on a more confrontational tone in 2003 after the Surgeon General’s report on obesity, which advocated community-based action. The renewed public debate was inflamed by the release of a slew of documentaries and books (Super Size Me, Fast Food Nation, Fat Land, Omnivore’s Dilemma, Food, Inc.) that demonized processed foods and glorified “natural” and “organic” alternatives.
Now we are entering wave four, with the Kessler thesis front and center. The new popular narrative accuses ‘Big Food’ of conspiring to create food addicts who crave processed products that are fun to eat but nutritionally deficient. That wide net provides plenty of deep pockets that plaintiffs can target, from processors to food marketers to grocery chains and insurance companies, and even to states and the federal government, which could be on the legal hook if a wayward court should decide that fast foods represent “crimes against the people.” Welcome to the NGO-Media-Class Action Bar Complex.
Activist lawyers and NGOs claim their goal is to educate and empower the public. They do have an argument. From a public health standpoint, even court victories are not needed to spark public debate and prod truculent companies into writing clearer labels and introducing healthier alternatives. But make no mistake: this coalition is not focused on health; it's classic jackpot litigation.
Foods marketed as “healthy” or “natural”—both unregulated claims—are particular targets. California, with its pro-consumer courts, is the venue of choice. Suits have been filed in recent months against numerous companies, including the makers of Swiss Miss cocoa and Hunt’s canned tomatoes, claiming these products contain ingredients designed to hook consumers rather than just feed them.
“It’s a crime, and that makes it a crime to sell it,” said Don Barrett, a Mississippi lawyer, who has made millions of dollars suing tobacco companies on behalf of states, which had spent hundreds of millions of taxpayer dollars caring for sick smokers. Among Barrett’s new targets is Greek yogurt maker Chobani because it lists “evaporated cane juice” instead of sugar as one of the product ingredients.
“Food companies will argue that these are harmless crimes,” he said. “The tobacco companies said the same thing. But to diabetics and some other people, sugar is just as deadly as poison.”
Last year, Ferrero, the maker of Nutella, was sued for allegedly implying that its spread was part of a healthy diet. “It’s difficult to take some of these claims seriously, for instance, that consumers were deceived into believing that a chocolate hazelnut spread was healthy for children,” said Kristen Polovoy, an industry lawyer at Montgomery McCracken. But that’s the argument proffered by defense lawyers when Big Tobacco first went on trial.
The class action even brought ridicule from the normally compliant media. "Here's a suggestion for the thousands of other litigious California mothers: Try a little responsible parenting. Try reading the labels and understanding what they mean," read a snarky blog at LA Weekly.
Yet faced with the unpredictability of the California courts, Ferrero settled the nuisance suit for more than $3 million. This expanding partnership of activist lawyers and NGOs, often encouraged by media broadly sympathetic to an anti-processed-food perspective, might yet recreate the legal Godzilla that wreaked havoc on the tobacco industry, even if the comparisons with ‘Big Food’ are invalid.
“People in the west are both more affluent and growing fatter, and that presents a perfect marriage of money and opportunity,” Robert Blood, the founder and managing director of SIGWATCH, a London- and Freiburg-based consultancy, said in an interview.
“Companies are drowning in issues,” Blood noted. “Activists take issues that are quietly bubbling in the background and suddenly bring them to the fore. We see signs that that is happening to the food industry.”
SIGWATCH tracks NGOs to help companies understand how advocacy groups drive policy issues, and publishes regular news digests. In its report through summer 2012, attacks on the food and agriculture industries ranked just below those on energy as hostile NGOs’ favorite targets. Monsanto, Cargill, Syngenta and other chemical and biotech companies are particularly vulnerable to web-based demonization campaigns, the report noted. NGOs have also continued their aggressive attacks against a host of food, beverage and grocery companies, including Nestle, McDonald’s, KFC, Coca-Cola, Pepsi, Unilever, Tesco and Sainsbury’s. Tesco and Sainsbury’s also received some praise from these NGOs for capitulating to some demands, particularly as regards labeling of foods with biotech ingredients.
“NGO activism is like a business,” Blood added. “Not because it earns profits for NGOs as class actions do for lawyers, but because savvy, high profile campaigns, especially if they are seen to humble big business, boost paying memberships and make NGOs a magnet for foundation grants.”
The science behind regulating processed foods
With a compliant media only too eager to assign almost total responsibility for obesity to corporations, communities all over the world are banning the sale of sweets, salty snacks and sugary beverages in public schools. Yet it may be surprising to learn that the scientific evidence linking obesity to ‘junk’ food consumption in schools is thin. A recent study by two Penn State professors followed nearly 20,000 students from kindergarten on, beginning in 1998. They recorded the students’ BMI (body mass index) at different grade levels and correlated the data with the availability of junk food at their school before and after bans were put in place.
The scientists compared eighth graders who moved into schools that sold junk food with those who transferred to schools that did not, and with children who never attended a school that sold snacks. They also compared children who always attended schools with snacks with those who moved out of such schools. No matter how the researchers crunched the data, they could find no correlation at all between obesity and attending a school where sweets and salty snacks were available. Their conclusion? “Food preferences are established early in life,” said Jennifer Van Hook, the lead author and a professor of sociology and demography at PSU. “This problem of childhood obesity cannot be placed solely in the hands of schools.”
Obesity certainly raises legitimate public policy and health issues. How best to respond is a far pricklier question. At what point should individuals be required to take responsibility for their own eating habits? If obesity is caused primarily by our genetics, should food manufacturers be forced to assume the associated health costs when someone gains weight?
Just as important, should we rely on legal and regulatory coercion to address this problem? Some legal experts argue that using the courts to set “obesity policy” borders on judicial paternalism. Berkeley law professors Stephen Sugarman and Nirit Sandman point to the consequences of using the courts and regulatory system to address childhood obesity.
“Some people think the solution lies in using tort law to sue McDonald's, Coca-Cola, and other corporations,” they wrote in the Duke University Law Journal. “We reject that notion. Others believe that government should order specific changes in the behavior of food companies and school officials—and yet, there is little reason for confidence that these ‘command and control’ strategies will make a difference. Instead, we propose ‘performance-based regulation’ of the food industry. … Schools are not told how to achieve better educational results, but better outcomes are demanded of them.”
While the litigation/regulation model has come to dominate the obesity debate, the industry has been moving on its own to address public concerns, with some success. In recent years, fast food companies have added many more nutritious alternatives and become leaders in the movement for ‘food transparency,' and the largest companies have voluntarily reduced food ads on TV aimed at young children.
According to a newly released study in the American Journal of Public Health, economists monitoring the beverage industry’s promise to self-regulate and get sodas and other sugary drinks out of schools found that companies shipped 90% fewer calories to schools in 2010, compared with 2004, and reduced shipments of full-calorie sodas by 97%.
That’s the result of a pledge, called the School Beverage Guidelines, signed in 2006 by major beverage companies, former President Bill Clinton and health advocacy groups. It outlines calorie content and serving sizes and is enforced through independent monitoring. Robert Wescott, president of Keybridge Research, a private economic research firm in Washington that was selected to independently monitor the effort, said the landscape has shifted dramatically and that industry self-regulation can work.
“Is it possible that somewhere in America there’s a Coke machine in a hallway where I can put a buck in and get a Coke? Absolutely,” Wescott said. But he added that there has been no “backsliding” by food companies since 2010. “In the main, yes,” he said, the industry is doing what it committed to do.
Under the Healthy Hunger-Free Kids Act, the US Department of Agriculture is preparing updated nutritional standards for foods and drinks that can be sold at schools. It is expected to target sports drinks, milk that is not fat-free and flavored milk. To what degree the standards will rely on self-regulation (which, in this case, appears to be working) is unclear, but NGOs and the tort machine are clearly pushing for a far more aggressive approach.
The precautionary media and industry response strategies
For the coalition of NGOs and Big Tobacco lawyers to work their litigation magic, they will need at least the tacit cooperation of journalists and bloggers. They have learned that the Wild Wild West of the web can greatly magnify their voices.
Journalists, who by and large lean left, can be paternalistic when it comes to reporting about food. They are not particularly sophisticated on many science issues, have a woeful understanding of risk and cost-benefit trade-offs and are overly influenced by NGO campaigns. They are easily lobbied by advocacy groups with “science sounding” names, such as the Center for Science in the Public Interest, Union of Concerned Scientists, Environmental Working Group, Natural Resources Defense Council and the like, which are dominated by staffers with minimal science backgrounds, a deep-seated antagonism to risk and cost-benefit analysis and a fossilized anti-industry ideology.
There are intriguing ‘class’ issues in play, as well. Reporters are typically middle class or affluent and control their eating. They view obesity as a lower class problem and see themselves as altruistic for promoting the simplistic storyline that foods that are not ‘natural’ (whatever that means) are automatically better for you. The anti-Big Food narrative also conforms to the media mindset that every story must have a villain, the bigger and more pernicious the better.
Most journalists also reflexively embrace the precautionary principle, although the science community, by and large, does not. ‘Better safe than sorry’ may sound like a prescription for moderation, but taken to the extreme it could require the food industry to prove that every new product is “healthy” before it can be introduced. Its invocation would provide crusading NGOs, bloggers and activist journalists with an unchecked justification for rejecting almost any processed food, especially those made using genetic modification or where trace chemicals can be found in food packaging.
How should the food industry respond when confronted by skeptical or even hostile journalists? Many food companies have convinced themselves they can “turn” a story their way by providing as interview subjects industry scientists who offer context and rebuttals, backed by solid research. If only it were that easy. Consumers, and the media who influence them, are often resistant to industry experts no matter how impressive their credentials or how airtight their research.
Journalists seeking balancing perspectives sometimes do not even interview corporate scientists, or if they do, they relegate their comments to the back end of their piece to signal to the reader that they do not take industry views as highly credible. Fairly or not, most journalists, when seeking out “independent” scientists default to NGO or university researchers—even though their credentials are often lackluster and their views are shaped in academic environments hostile to business.
Rather than being reactive, the food industry needs to become vigilant and proactive. “Keep an eye first on Europe, and then on California,” advised SIGWATCH’s Blood. “Those regions are very precaution minded, and deeply wedded to the not very scientific belief that ‘craft’ foods—organics and the slow food movement, for example—are always healthy, authentic and good, while ‘man-made’ foods—anything that comes in a can, package or packet—are automatically unhealthy, artificial and bad.”
Food manufacturers have themselves to blame, in part, for jumping onto the simplistic ‘natural is better’ bandwagon. By using expressions like “natural”, “simple” and “raw” in product advertising and labeling, they not only have attracted scrutiny, they imply that foods that don’t use such descriptions—which happens to be the majority of their product lines—are unnatural, processed and artificial. No wonder the public now equates processed foods with unhealthy ones. There is an especially exquisite irony here: in the post-war period, at least until the 1980s, food firms were happy to develop and promote novelty on their own terms, from new tastes to greater convenience, and consumers loved it and bought these innovative products.
Being passive in the face of the burgeoning Media-NGO-Class Action Bar Complex is simply no longer an option. Anti-science and anti-industry views often become the template from which decisions are made by consumers—but also by regulators, who often respond not to the facts on the ground but to perceptions in the air. The food industry needs to get out in front on questions about its credibility and labeling transparency. Defensiveness is the absolute worst strategy. When one becomes apologetic, the friendly media morphs into piranhas plying blood-tinged waters. Corporations and trade groups shouldn’t be shrill, but they need to forthrightly tell their side of the story.
Most critically, the food industry needs to respond more cohesively. It cannot allow campaigning activists to divide manufacturers or restaurants or food chains into opposing camps comprised of those who make “good foods” versus others who trade in “bad foods.” Those designations are fungible. Just ask the lean, finely textured beef (LFTB) industry, which had long been praised for turning out a healthy, sustainable product—and is now on the edge of extinction because of the “pink slime” branding fiasco.
Major producers may lose the echo-chamber debate in the short run, but convincing the public about food safety and the importance and healthy qualities of most processed foods is an endless marathon, not a sprint. Food choice is an important value to embrace and promote. Aggressively make your voice known, in public forums and on the web. Make sure you and your allies present credible, science-based information. Do this relentlessly and the industry will not only survive but also thrive.
Jon Entine, senior fellow at the Center for Risk & Health Communication and the Statistical Assessment Service (STATS) at George Mason University, is executive director of ESG MediaMetrics, a sustainability consultancy.
|
0711e927cfbf20669801f9832e7869af | https://www.forbes.com/sites/jonentine/2012/09/30/does-the-seralini-corn-study-fiasco-mark-a-turning-point-in-the-debate-over-gm-food/ | Does the Seralini Corn Study Fiasco Mark a Turning Point in the Debate Over GM Food? | Does the Seralini Corn Study Fiasco Mark a Turning Point in the Debate Over GM Food?
Are anti-biotech campaigners the leftwing version of climate change deniers? The science media are finally confronting the distortions perpetrated by anti-GM advocacy groups and illiberal “progressive” journalists and bloggers. Jon Entine, executive director of the Genetic Literacy Project, reports.
The fallout continues to escalate over the questionable maize study by Gilles-Eric Seralini released almost two weeks ago. It’s already being referred to as The Seralini Affair or Seralini Tumor-Gate.
The notorious French molecular biologist, known for his history of anti-biotechnology activism and scientifically disputed research, claimed that rats fed a high-dose lifetime diet of Monsanto’s genetically modified corn or exposed to its top-selling weed killer Roundup suffered tumors and multiple organ damage.
The study sparked an immediate furor among independent scientists, including those who support the labeling of GM foods but found Seralini’s research sloppy and poorly documented. Scientists have often responded forcefully after the release of poorly constructed studies. What’s unusual this time is that science journalists, who traditionally have given activist scientists and NGOs a free pass when they circulated questionable science about GM crops and food, are up in arms as well.
Geneticists and the general science community were first out of the blocks with their criticism, pointing out more than a dozen problems with the study. The London-based Science Media Centre, which assists reporters when major science news breaks, posted an entire page of criticisms, most notably the study’s poor design, its use of tumor-prone rodents, its small sample size and its selective presentation of data. MIT’s Knight Science Journalism Tracker documented a slew of problems.
Seralini’s research is anomalous. Previous peer-reviewed rat feeding studies using the same products (NK603 and Roundup) have not found any negative food safety impacts. The Japanese Department of Environmental Health and Toxicology released a 52-week feeding study of GM soybeans in 2007, finding “no apparent adverse effect in rats.” Earlier this year, a team of scientists at the University of Nottingham School of Biosciences released a review of 12 long-term studies (up to two years) and 12 multi-generational studies (up to 5 generations) of GM foods, concluding there is no evidence of health hazards.
The latest and most surprising pushback is the outrage coming from journalists at responsible news organizations who generally have been loath to criticize reporters from “progressive” NGOs and suspect activist media sites (e.g. Mother Jones, Grist and ninety percent of the contributors at Huffington Post)—perhaps because they are aligned ideologically on many other issues.
The most dramatic break came in a comprehensive deconstruction of the research fiasco by Keith Kloor at Slate magazine headlined, “GMO Opponents Are the Climate Skeptics of the Left.” As Kloor noted, although Seralini’s research was immediately and almost universally panned by serious scientists, the anti-biotech NGO-media complex went into over-drive upon its release, promoting it as game-changing research that raised fundamental questions about the safety of GM crops and food. Here’s the process, as Kloor described it:
“… [F]ears are stoked by prominent environmental groups, supposed food-safety watchdogs, and influential food columnists; that dodgy science is laundered by well-respected scholars and propaganda is treated credulously by legendary journalists; and that progressive media outlets, which often decry the scurrilous rhetoric that warps the climate debate, serve up a comparable agitprop when it comes to GMOs,” Kloor wrote. “In short, I’ve learned that the emotionally charged, politicized discourse on GMOs is mired in the kind of fever swamps that have polluted climate science beyond recognition.”
Kloor’s analysis followed on the heels of scathing articles by non-ideological science-focused journalists and watchdog groups in the United States and Europe that pointed out how NGOs and advocacy groups conspired before the release of the report to try to rig the public debate. Seralini and his colleagues required that reporters who wanted pre-release access to the study sign a non-disclosure agreement that barred them from seeking input from outside sources, including other scientists—which, remarkably, many prominent news organizations agreed to.
The unheard-of restrictions meant that initial reports on the study—including at such places as Reuters—gave Seralini’s highly questionable findings an uncritical free pass. As a consequence, as the researcher Scicurious pointed out in a blog at Discover’s The Crux, the initial reporting gave legitimacy to questionable conclusions, playing into the anti-GMO narrative, rather than putting the study in full context or calling attention to the concerns expressed by mainstream researchers about the validity of the results.
Within minutes of the release of the study—well before mainstream scientists had even had a chance to review it and offer a more balanced perspective—what appeared to be a coordinated response narrative by anti-GM groups surfaced in the US and Europe. The always-balanced Andrew Revkin, who writes The New York Times’ Dot Earth blog, noted how anti-GMO groups immediately started promoting Seralini’s study in an attempt to influence the upcoming vote in California over whether mandatory labeling should be required of GM foods.
Almost concurrently with the release of the study, Yes on Proposition 37 released a statement saying that the findings “underscore the importance of giving California families the right to know whether our food is genetically engineered and to decide for ourselves whether we want to gamble with our health by eating GMO foods.” Gary Ruskin, the advocacy group’s campaign manager, claimed, wrongly, that Seralini’s research was the “first ever long-term study” of GM foods—a blatant falsehood repeated by anti-GM NGOs.
Reliably anti-GM journalists piled on. The Guardian’s John Vidal, known for his reflexive embrace of even the most strident anti-GM claims, became an embarrassing mouthpiece for Seralini, who has steadfastly refused to share his raw data with independent scientists.
Kloor took welcome potshots at some of the biggest purveyors of misinformation, most notably Tom Philpott, the popular food blogger at Mother Jones. After the study’s release, the newly-minted genetics expert bizarrely wrote that Seralini's results “shine a harsh light on the ag-biotech industry’s mantra that GMOs have indisputably proven safe to eat."
“This brand of fear-mongering,” Kloor wrote, “is what I've come to expect from environmental groups, anti-GMO activists, and their most shamelessly exploitive soul travelers.”
Philpott has a notorious reputation among scientists and serious journalists as a precautionary junkie, reflexively denouncing almost anything opposed by the Natural Resources Defense Council, the Sierra Club, Union of Concerned Scientists, Greenpeace or the Environmental Working Group—NGOs not known for their balanced analysis of science issues. Philpott, who was recruited to Mother Jones from Grist magazine (which also botched the Seralini coverage), often cherry-picks anti-GM or anti-chemical food-related studies, reporting them out of context and with no attempt at balance.
What’s next?
What some are calling a “retraction watch” is now underway. This week could be telling. The European Food Safety Authority (EFSA) is expected to deliver its preliminary review of the study, although Seralini has refused the agency’s request to release his raw data (a standard practice that allows other scientists to attempt to replicate findings and assess how to proceed with future research). The EFSA—traditionally no friend to the biotech industry—has previously criticized the quality of Seralini’s research, as has Europe’s Public Research and Regulation Initiative (PRRI).
Food and Chemical Toxicology, the journal that published the peer-reviewed Seralini paper, has been deluged with written requests from academicians and scientists to reassess its handling of the review process. Editor Wally Hayes has reportedly indicated that he is considering taking action, perhaps including retracting the paper.
Whatever the short term fallout, scientists and serious journalists believe this fiasco might yet prove to be a watershed for how journalists cover the food and crop biotechnology revolution. The Seralini Affair is helping to draw a sharp line between anti-innovation campaigners among NGOs and the media and more mainstream scientists and journalists, who have strived for balance in reporting on an emerging and scientifically complicated technology. Science may yet prevail over ideology.
Jon Entine, founding director of the Genetic Literacy Project, is senior fellow at the Center for Health & Risk Communication at George Mason University, and a senior fellow at the Statistical Assessment Service (STATS).
|
b17c090162e897cd674a572920a6e967 | https://www.forbes.com/sites/jonentine/2012/11/29/is-2013-a-watershed-year-for-the-anti-obesity-movement/ | Is 2013 a Watershed Year for the Anti-Obesity Movement? | Is 2013 a Watershed Year for the Anti-Obesity Movement?
A number of trends are coming together to suggest that obesity restrictions—advertising curbs, taxes and bans—will be front burner issues in 2013.
The facts are well known. The average American consumes too many calories and exercises too little. We are one of the fattest nations on earth, and the rise in childhood obesity will be a major factor in diabetes, heart disease and other illnesses in decades to come.
The food industry is one of the largest and most powerful industries in the world. So, it’s no wonder that the debate over what children eat and drink, and what should or can be done about it, has escalated into a political battle. It’s the culmination of a long-term trend.
The growing food technology sector, which encompasses processors as varied as packaged salad makers, fresh frozen seafood distributors, cereal producers, ice cream manufacturers and snack and soda companies, fancies itself as on the cutting edge of science.
But while food technologists see themselves as innovators on behalf of the human palate, to many journalists and public interest groups, the food industry isn’t an innovator but the enemy. They caricature the processed food sector as Big Food, and that’s not meant as a compliment. Think “Big Tobacco.”
For years, tort lawyers, egged on by non-profit (NGO) advocacy groups, blamed processed foods for their clients’ obesity, claiming that fast foods were inherently unhealthy. For the most part, the suits didn’t gain much traction. Judges almost always dismissed them, concluding that most reasonable people would not confuse burger joints with natural food restaurants, and therefore could not blame their ill health and poor food selections on anyone but themselves.
The discourse took on a more confrontational tone in 2003 after the Surgeon General’s report on obesity, which advocated community-based action. The renewed public debate was inflamed by the release of a slew of documentaries and books (Super Size Me, Fast Food Nation, Fat Land, Omnivore’s Dilemma, Food, Inc.) which demonized processed foods and glorified “natural” and “organic” alternatives.
Food addiction?
The current popular narrative, pushed hard by foodie activists and advocacy groups, accuses ‘Big Food’ of conspiring to create food addicts who crave processed products that are fun to eat but nutritionally deficient. The public is portrayed as innocent and helpless victims, and eating is likened to “compulsive gambling and substance abuse.” The food industry, which offers a stunning array of selections from natural to processed and from abundant to boutique, is caricatured as an ‘enemy of the people’.
That wide canvas provides plenty of deep pockets for NGOs and tort lawyers to target, from processors to food marketers to grocery chains and insurance companies, and even to the states and the federal government, which could be on the legal hook if a wayward court should decide that fast foods represent “crimes against the people.”
What I call the NGO-Media-Class Action Bar Complex began to coalesce in 2009, leading to the creation of the Interagency Working Group on Foods Marketed to Children. Under intense pressure from activist groups, Congress authorized a task force to evaluate food advertising and suggest possible restrictions on marketing to children ages 2 to 17.
Their recommendations were both comprehensive and radical. Foods that were advertised would have to meet strict dietary guidelines, which would require the reformulation of thousands of products, including 88 of the top 100 most popular foods. For example, Honey Nut Cheerios, Cheez-It, Animal Crackers, peanut-butter-and-jelly sandwiches and even bottled water couldn’t be marketed or advertised to children as they are now because they do not contribute “significantly” to nutritional requirements.
The food industry pushed back. There were concerns about limiting commercial free speech, but also more pragmatic considerations, particularly the enormous cost of reformulations—and in some cases the necessary junking of major brands whose unique taste profile could not be preserved—set against the debatable benefits.
While consumers may intuitively believe that advertising to children and obesity are joined at the hip, the most comprehensive studies addressing that question are far from definitive. The Institute of Medicine has found strong evidence that TV watching is associated with child obesity. But researchers have found no proof that obesity is directly caused by ads for sweets or junk food.
“Despite media claims to the contrary,” writes David Ashton, a cardiovascular epidemiologist at the Imperial College School of Medicine in London, “there is no good evidence that advertising has a substantial influence on children's food consumption and, consequently, no reason to believe that a complete ban on advertising would have any useful impact on childhood obesity rates.”
The restrictions would also not affect the private label brands—products sold at Kroger’s, Safeway and the like at lower prices but with no advertising.
What will the Obama Administration do?
The IAWG has been digesting public comments since it released its recommendations last April. The panel’s final recommendations are required by legislation to be reviewed by OIRA (Office of Information and Regulatory Affairs) to ensure that any restrictions on commercial free speech would advance the public interest. But that legislative mandate expires at the end of March, which thrills many food activists who are afraid that cost-benefit analysis might lead to a watering down or scuttling of the recommendations. If this requirement is not extended, Congress could impose these controversial guidelines, setting up a Battle Royale.
Also in the wings is the fate of the Healthy Hunger-Free Kids Act. The US Department of Agriculture is preparing updated nutritional standards for foods and drinks that can be sold at schools. It is expected to target sports drinks, milk that is not fat-free and flavored milk. To what degree the standards will rely on self-regulation (which, in this case, appears to be working) is unclear, but NGOs and the tort machine are clearly pushing for a far more aggressive approach.
The White House, or rather the First Lady, has so far taken a cooperative rather than confrontational approach on this issue. Two years ago, Michelle Obama launched her signature “Let’s Move” campaign, telling a grocery trade group that food manufacturers needed to “step it up” to protect children. “We need you not just to tweak around the edges but to entirely rethink the products that you're offering, the information that you provide about these products and how you market those products to our children.”
The campaign has focused on encouraging healthier foods, a balanced diet and exercise, which are the solutions favored by industry and sometimes mocked by activists group as toothless. There are conflicting reports about whether the administration might move from jawboning to a more activist regulatory stance.
Can self-regulation work?
But there is no question that the more cooperative strategy has had impact. Whether out of fear of regulation or in recognition of marketing opportunities, the industry is responding to public concerns. In recent years, companies have added many more nutritious alternatives and become leaders in the movement for ‘food transparency,’ and the largest have voluntarily reduced food ads on TV aimed at young children.
The switch to lower calorie drinks accelerated significantly after the signing in 2006 of the School Beverage Guidelines, drafted in a joint effort by major beverage companies, former President Bill Clinton and health advocacy groups. Enforced through independent monitoring, it recommends calorie content and serving sizes.
According to a newly released study in the American Journal of Public Health, economists monitoring the beverage industry’s promise to self-regulate and get sodas and other sugary drinks out of schools found that companies shipped 90% fewer calories to schools in 2010, compared with 2004, and reduced shipments of full-calorie sodas by 97%. Coca-Cola illustrates the dramatic changes in its product mix. Thirty years ago, only 3% of its sodas were zero-calorie; now it’s 60%.
“It’s made a big difference,” said Elaine D. Kolish, the initiative director and a former head of enforcement at the Federal Trade Commission. More than 100 products have been changed or created to cut salt, fat, sugar or calories, she said. Tougher self-regulation is promised by 2014.
Robert Wescott, president of Keybridge Research, a private economic research firm in Washington that was selected to independently monitor the effort, said the landscape has shifted dramatically and that industry self-regulation can work.
“Is it possible that somewhere in America there’s a Coke machine in a hallway where I can put a buck in and get a Coke? Absolutely,” Wescott said. But he added that there has been no “backsliding” by food companies since 2010. “In the main, yes,” he said, the industry is doing what it committed to do.
The catchphrase behind these industry-initiated voluntary efforts is “energy balance,” which means, simply, that ‘energy in’ should equal ‘energy out’. Proponents believe regulating behavior as instinctual as eating is fruitless, so they encourage self-education tied to self-regulation instead. Food manufacturers are asked to voluntarily reduce children’s exposure to certain foods, with the commitments independently monitored.
Some measure of whether these voluntary efforts are real or just hype will become clear by midyear, when the Healthy Weight Commitment Foundation releases its first report on its pledge to cut 1.5 trillion calories annually by the end of 2015, and sustain that level. The HWCF is a voluntary group of 16 of the country’s best-known brands representing 25% of the market share in the food and beverage industry. Its efforts are being independently vetted by the Robert Wood Johnson Foundation.
Public rejects "fat taxes"
Activists are unlikely to be content with measured actions even though more interventionist policies have often proven unworkable or even counterproductive, damaging the broader public health agenda in the process. The battle of food taxes highlights that tension. In New York City, for example, the Board of Health adopted Mayor Michael Bloomberg’s proposal to ban supersized sodas and sugary drinks. A growing number of European countries, including Denmark, Hungary, Finland and France have imposed taxes on what they consider unhealthy foods, from butter to cupcakes to soda.
Although taxes on so-called fatty foods appeal to some politicians as a stealth way to raise government revenue in a struggling economy, there are some signs of pushback from a public weary of the tax-regulation-litigation cycle. In November, voters in two California cities defeated ballot initiatives that would have imposed taxes on sugar-sweetened beverages. While these types of taxes are nicknamed “soda taxes,” the initiatives would have applied to all non-alcoholic beverages with caloric sweeteners, including flavored milk, milk drinks and drinkable yogurt.
The tax and regulate movement has hit headwinds even in supposedly foodie-obsessed Europe. Last year, Denmark instituted a tax, which reached as high as 9 percent for some products and affected a wide variety of popular foods, including butter, cheese and cream—anything containing more than 2.3% saturated fat. It set off an outcry among small businesses and consumers, which led to the recent Danish government decision to terminate the tax, effective January 1. Danish lawmakers also decided against creating a similar “sugar tax” that would have hiked the cost of products with higher levels of sugar.
Why the populist revolt in Denmark and elsewhere? Activists blame industry lobbying. That’s certainly played a role, considering the deep pockets of the food industry. But there is no question that anti-tax campaigns resonated with a general public wary of government intervention in so personal a matter as food choices.
Echoing the public discussion in the United States, the Danish tax aimed to encourage a change in diets. There is evidence the law did affect buying habits, but not in the way activists or researchers had predicted. Many Danes switched to lower-cost house brands rather than healthier alternatives, and others crossed borders into Germany or other countries with lower food taxes to purchase their favorite products.
Obesity certainly raises legitimate public policy and health issues. How best to respond is a far pricklier question. Should individuals be required under the force of law to modify their eating habits because of the associated public costs, such as a spike in weight-related diseases? What’s next in the United States?
Over the past two years, more than two dozen states and seven cities have considered “soda taxes” to discourage consumption of sugary drinks, only to see the efforts defeated or dropped because of public ambivalence. All signs point to renewed activism in 2013. That may not necessarily be good news for a country trying to slim its collective waist.
Jon Entine, founding director of the Genetic Literacy Project, is senior fellow at the Center for Health & Risk Communication and at STATS, at George Mason University.
|
1a08611e45d90a856d7e9e0ed6e1e3c3 | https://www.forbes.com/sites/jonentine/2012/12/06/in-reversal-bedrock-studies-linking-bisphenol-a-bpa-to-heart-disease-challenged/ | In Reversal, Bedrock Studies Linking Bisphenol A (BPA) to Heart Disease Challenged | In Reversal, Bedrock Studies Linking Bisphenol A (BPA) to Heart Disease Challenged
Studies supposedly linking the plastic additive to diabetes, heart disease and coronary artery disease have been called a “bombshell” by anti-BPA NGOs and many journalists. Now those conclusions, and a central contention of campaigners, are in doubt.
The most explosive claim of anti-BPA campaigners—that the plastic additive BPA causes an array of heart-related diseases—is in question, according to a peer-reviewed paper on the science website PLOS ONE.
Environmental health scientist Judy LaKind from Penn State University and the University of Maryland and epidemiologist Michael Goodman from Emory University reviewed data from the National Health and Nutrition Survey (NHANES) that previous researchers concluded linked BPA to chronic diseases. Johns Hopkins mathematician Daniel Naiman did the analysis.
In contrast to those previous studies, which looked at only one, two or three datasets, these researchers found no associations between urinary BPA and heart disease or diabetes across four NHANES datasets. Their conclusions challenge one of the central contentions of researchers who believe that BPA is harmful.
The influence of the NHANES data in creating the popular belief that BPA is harmful cannot be overstated. The controversy originated just a few years ago, when bisphenol A was still a relatively obscure plastic additive that a small group of scientists had targeted as dangerously toxic.
Based on controversial studies of rodents injected with the chemical, they had come to believe that BPA was what they called an “endocrine disruptor” that did its dirty work at low doses. It distorted hormonal functions, they claimed, and could be blamed for a host of problems from cancer to reproductive and metabolic issues to heart disease. It was a controversial contention, as toxicity has traditionally been linked to exposure—the dose makes the poison, in Paracelsus’ famous phrase.
Heart disease theory rests on questionable data?
A key turning point in the debate came in 2008 with the release of a study in the Journal of the American Medical Association based on the 2003/4 NHANES data covering nearly 1,500 adults. A team of researchers led by David Melzer, an epidemiologist at the Peninsula College of Medicine and Dentistry at the University of Exeter in the United Kingdom, concluded that respondents with higher amounts of BPA in their urine were more likely to report having heart disease and diabetes.
“This is a big deal,” said University of Missouri biologist Frederick vom Saal, the chief proponent of the “endocrine disruptor” hypothesis, who co-authored an opinion piece that accompanied the study in JAMA. He and John Peterson Myers, a biologist and longtime collaborator, demanded immediate regulatory restrictions on BPA and phthalates, another class of chemicals they contend is dangerous.
The associations were modest, which led the Food and Drug Administration to immediately reaffirm its belief that BPA was safe. But that’s not how it was played in the media and by advocacy NGOs, which flooded the Internet with hundreds of stories “linking” BPA to heart disease. Thousands of articles have since cited the NHANES study as “proof” of BPA’s harmful effects or otherwise casually asserted that BPA is “linked to” or “associated with” chronic heart problems.
After the release of yet another Melzer study based on more recent NHANES data, in 2010, the Natural Resources Defense Council hyperbolically characterized the findings as a “bombshell” as part of its campaign to connect common exposure to everyday chemicals to serious diseases, such as cancer—claims that are not supported by the evidence.
“Health care reform should be linked directly to toxic chemical reform,” wrote Gina Solomon, a scientist and former blogger for the NRDC. “Chemicals such as BPA are a potentially preventable cause of serious illness, and prevention saves lives and dollars.”
Cherry picking data?
The LaKind-Goodman study identified what appear to be two anomalies in the analysis by Melzer and in two other related papers released in 2010 and 2011. A diabetes study counted as diabetic people who did not have diabetes but had borderline symptoms—a non-standard definition of diabetes. Without those people included, the BPA-diabetes link disappeared.
The heart disease study found a weak association between BPA and heart disease—but it excluded six people who had the highest BPA concentrations. It turns out that none of those left out had heart disease. The inclusion of those respondents would have led to a finding of no association between BPA and serious heart problems. This contentious evidence led to the “bombshell” finding the NRDC crowed about.
The only explanation provided in the Melzer paper for leaving out the healthy respondents is that those excluded were “outside the range of BPA in the original 2003/04 sample,” which topped out at 80.1 ng/mL. According to epidemiologists I spoke with, that was an odd and arbitrary choice. In a blistering online response to the LaKind-Goodman study, the researchers now maintain that the excluded samples, which ranged from 83.6 to 150 ng/mL, with one outlier at 383, might have been “contaminated.”
In their response, Melzer et al. sharply challenged the overall thrust of the new study, calling it “unfocused” and “poorly documented,” and noted that the LaKind-Goodman research was supported by the Polycarbonate/BPA Global Group of the American Chemistry Council. According to the paper, and under the rules of the peer review process, “the ACC was not involved in the design, collection, management, analysis, or interpretation of the data; or in the preparation or approval of the manuscript.”
In more substantive criticism, Melzer said that the new study had left out more than 400 survey respondents, implying those excluded could have skewed the results. LaKind and Goodman wrote that they excluded survey respondents who omitted their age, body mass, smoking behavior or other variables to keep the data consistent. Melzer and the primary author of the diabetes study, University of Michigan doctoral pre-candidate Monica Silver, also pointed out that the new study included children in the diabetes assessment, which could also account for the different conclusions. Regardless, LaKind and Goodman responded, they consistently found no associations between urinary BPA and heart disease or diabetes across four NHANES datasets.
Melzer pointedly noted that in a more recent study, published earlier this year, his team found that those who developed coronary artery disease tended to have higher urine BPA concentrations up to ten years earlier than those who did not develop heart disease.
The dispute over the data threatens to obscure LaKind and Goodman’s most salient conclusion. NHANES is a robust and critically important public health database, they maintain. However, it only measures concurrent exposure to chemicals as reflected in urine, and not long-term impacts.
Limitations of the NHANES survey for analyzing BPA
“Our results don’t shed light on whether BPA is or isn't a risk factor for diabetes or heart disease,” said LaKind. “Rather, the point we are making is that using data from cross-sectional studies like NHANES surveys to draw such conclusions about relations between short-lived environmental chemicals and chronic diseases is inappropriate.”
Melzer brushed off that point completely in his response. But Monica Silver, who headed the diabetes study using the NHANES data, emailed me: “I completely agree [with LaKind and Goodman on this point] and make similar conclusions in our paper. NHANES’ utility is not in making broad statements of causation of a given disease by a given exposure, but rather in providing preliminary, hypothesis building evidence that can inform future work.”
Many science-challenged journalists and activist NGOs that put advocacy ahead of science, like the NRDC and the Environmental Working Group, consistently misrepresent and hype studies that show the presence of chemicals in urine, as if that signals likely toxic effects. The use of biomonitoring data is problematic, say scientists, particularly as it pertains to BPA. According to the FDA, reflecting the emerging scientific consensus, “[O]ral administration [of BPA] results in rapid metabolism of BPA to an inactive [and therefore harmless] form.” In other words, BPA is detoxified and excreted.
That was confirmed in what is considered the state-of-the-art independent study on the potential harm of BPA, financed by the Environmental Protection Agency and headed by Justin Teeguarden, a senior scientist at Battelle’s Pacific Northwest National Laboratory, one of the nation’s premier research centers, to assess how humans process BPA. The conclusion: Despite the presence of the chemical in urine, human blood concentrations of BPA are infinitesimally low—undetectable in most cases and thousands of times lower than any level that is likely to cause harm to humans.
Although low doses of certain chemicals can induce non-monotonic effects, scientists who have reviewed these studies, time and again, have come away unconvinced these effects consistently or even generally suggest harm. Since 2007, there have been more than a dozen comprehensive reviews of BPA studies by independent government scientists around the world, including in Canada, Europe, Japan, Australia and the United States, and each has concluded that current uses of the chemical are safe.
The European Food Safety Authority in summer 2010, a joint UN Food and Agriculture Organization/WHO expert panel on BPA in November 2010, and a special Advisory Committee of the German Society of Toxicology in spring 2011 have all independently concluded that the collective body of evidence demonstrates that BPA does not pose serious neurological dangers or cause cancer in humans, and has not even been shown to be an “endocrine disruptor,” although it does have modest but not necessarily harmful endocrine effects.
Most recently, in October, Health Canada and that country’s Bureau of Chemical Safety upheld their prior scientific finding that BPA poses no serious threat. “Based on the overall weight of evidence,” reads the report, “the findings of the previous assessment remain unchanged and Health Canada’s Food Directorate continues to conclude that current dietary exposure to BPA through food packaging uses is not expected to pose a health risk to the general population, including newborns and young children.”
Jon Entine is senior fellow at the Center for Health & Risk Communication and STATS at George Mason University.
|
f2fac647852619ef4e6b566008a905fb | https://www.forbes.com/sites/jonentine/2013/01/24/odd-couple-will-dow-chemical-and-ed-markeys-opposition-to-natural-gas-exports-cripple-americas-energy-advantage/?ss=business%3Aenergy | Odd Couple: Will Dow Chemical and Ed Markey's Opposition to Natural Gas Exports Cripple America's Energy Advantage? | Odd Couple: Will Dow Chemical and Ed Markey's Opposition to Natural Gas Exports Cripple America's Energy Advantage?
Is Senate hopeful Rep. Ed Markey (D-Ma) abandoning key support for greenhouse gas reductions as political payback to Dow? Is he anti-shale gas regardless of its economic and environmental benefits?
Dow Chemical's recent resignation from the powerful National Association of Manufacturers (NAM) in a dispute over liquid natural gas (LNG) exports underscores its expanding split over this issue with the broader business community. But in a less visible way, it also suggests the nation’s largest chemical company is retreating from its long-stated commitment to address climate change.
The LNG issue boiled over last month when the US Department of Energy released the second of its delayed and highly anticipated reports on the potential impact of relaxing restrictions on natural gas exports. This analysis, prepared by NERA Economic Consulting, concluded “the US would experience net economic benefits from increased LNG exports”—and they could be staggering. Among other key points: any potential price impacts would be in a “relatively narrow range” and exports would result in “an increase in US households’ real income and welfare,” even among vulnerable lower income families.
NAM, along with another powerful industry group, the American Chemistry Council, endorsed the DOE findings, even though most of its members, particularly chemical companies, could see modest natural gas price rises, a marginal cost in manufacturing processes.
The researchers estimate that increasing exports could eventually generate more than $125 billion and as many as 5 million jobs—a jolt for the US economy still trying to shake off worldwide financial doldrums.
The Energy Department’s conclusions echo numerous comprehensive independent studies from across the ideological spectrum, including progressive analyses by the Hamilton Project led by Michael Levi, senior fellow and energy expert at the Council on Foreign Relations, and research directed by Charles Ebinger at the Brookings Institution. This is a rare instance in which liberals and conservatives as well as industry and thoughtful environmentalists mostly agree.
The primary critics constitute an unlikely coalition: Dow, which spearheads a handful of companies likely to take a short term hit from higher industrial natural gas prices; and a cadre of anti-hydraulic fracturing liberals, who traditionally have opposed almost all measures that could lead to increased shale gas exploration.
That group is headed by Congressional energy heavyweights Rep. Ed Markey (D-Ma), poised to succeed John Kerry as senator if the current Massachusetts senator is approved as Secretary of State; Rep. Henry Waxman (D-Ca), the ranking member of the House Energy and Commerce Committee; and Sen. Ron Wyden (D-Or), the presumptive new head of the Senate Committee on Energy and Natural Resources.
Dow and anti-LNG export Congressmen are dressing their opposition in populist garb. Both say it will raise prices, hurting consumers and industry alike. Last year, both Wyden and Markey told Energy Secretary Steven Chu that increasing exports would amount to a “transfer of wealth from consumers to oil and gas companies.”
It’s a classic and often effective political argument, pitting Big Gas against the poor, or in Dow’s view, Big Gas against poor Big Chemical. Such populism has guaranteed that their position would make the rounds of activist websites. All it lacks is evidence.
Reports from a variety of outlets, including the DOE and the US Energy Information Administration, which did the first part of the two required DOE analyses, show that most LNG would be sourced from new natural gas development, which means exports are unlikely to materially impact domestic demand or prices. EIA found that increased exports would initially raise consumer electricity bills by 1 to 3 percent annually, before subsiding. Both Levi and the DOE estimate the rise in industrial prices would peak at no more than 10 percent before dropping.
LNG exports would cost the average household about $4 a month at most, adding “as much as $50 to the annual electric bills for the poorest American households by the end of the decade,” Levi writes. “But the federal Low Income Home Energy Assistance Program could help shield the most vulnerable as long as its financing is protected.”
So what’s going to happen next? There's no current ban on exports of natural gas. Supplies are exported to a limited number of countries with which the US has a free trade agreement. However, regulations passed when the country faced a severe shortage and high prices still require the government to determine whether an export permit is in the “public interest.” The DOE report was considered the next to last step in that process. The final decision is now in the hands of the Administration.
Behind the curtain at Dow
What constitutes the “public interest”—the notion supposedly guiding the government’s independent final decision—can be highly politicized. Public advocacy groups, Congressmen and self-interested corporations can all play a role in a decision that’s supposed to be guided by science and economics.
Since the 1990s, Dow Chemical, America’s largest chemical company, has tried to brand itself as forward thinking on energy and environmental issues. It reduced the carbon footprint of its manufacturing processes—a win-win situation that actually saved the company money while gathering plaudits from activists.
And, over the last decade, lavishly supported by federal tax policy and handouts guided to it by the powerful Massachusetts Congressional delegation led by Markey, Dow expanded its “green” product line. It began turning out advanced insulation products, diesel engine filters and lithium-ion batteries.
The sizable government largesse helped forge a pragmatic bond between top Dow officials and Markey, an outspoken campaigner for climate change initiatives and against shale gas development. They haven’t always been on the same page. For example, Markey has opposed a chemical security measure, which Dow has backed. But that issue is small potatoes compared to climate change, one of Markey’s signature issues.
In 2009, Dow broke ranks with many manufacturers to aggressively support the American Clean Energy and Security Act, the Democrats’ cap-and-trade bill jointly sponsored by Markey and Waxman. Many on the left and the right savaged the bill as both weak and a pork-fest. After barely passing in the House, it went down to bipartisan defeat in the Senate in 2010. Although it never became law, it provided a huge political boost to Markey and Waxman—and Dow’s support was crucial. And now Markey and Dow are aligned once again.
Within days of the release of the government study, Dow held a news conference on January 10 denouncing it and announcing the launch of its anti-natural gas export website America’s Energy Advantage. Dow, which public records indicate has spent more than $45 million on lobbyists over the last six years, recruited as partners Alcoa, the American Public Gas Association, Celanese, Eastman Chemical and Nucor—corporations likely to face marginally higher natural gas energy bills if industrial prices rise modestly with export deregulation, as anticipated. Huntsman Chemical joined the rump group on Tuesday.
Before the DOE report came out, Dow, guided by its public relations team, had characterized itself as “in the middle” on this issue. All last year, the company had repeatedly declared itself a “proponent of fair and free trade,” stating that it “opposes policies that arbitrarily limit reasonable exports of natural gas to free-trade agreement countries or that provide for unlimited global exports.” As George Blitz, Dow’s vice president of energy and climate change, claimed as late as October, it just wanted a balanced public policy.
Behind the scenes, however, Dow was taking a far different position. According to sources inside the company, executives had learned from lobbyists with connections to the Energy Department that the analysis was “not going well”—by which they meant the DOE was going to report LNG exports would be broadly positive for the American economy, help promote US geopolitical interests and produce more environmental benefits than challenges. Dow was reportedly apoplectic.
Expecting the worst from its perspective, Dow held crisis meetings during the late summer, eventually concocting a response plan that included a web presence and an extensive social media campaign. On September 12, Dow purchased the website America’s Energy Advantage, which is now registered privately. It created Facebook and Twitter accounts in September, began posting on Facebook on October 12 and sent its first tweet on October 18.
As its anti-shale gas export response plans were percolating, Dow was moving on another front to secure its own commitment to exporting natural gas. Yes, you read that correctly. Dow owns a sizable chunk of Freeport LNG, which owns an LNG import terminal on Quintana Island, Texas that is now almost dormant after only four years of operation. Dow and its partners are seeking approval from the Federal Energy Regulatory Commission and the DOE—yes, the same DOE it is now demonizing—to spend $2 billion on converting the facility, with the aim of exporting 1.4 billion cubic feet of natural gas daily by 2015.
"The discovery of shale has really recreated the value proposition to build these facilities in what is the world's largest market," Andrew Liveris, Dow's chief executive, gushed about its Texas expansion plans.
All the while, Dow continued to voice support for LNG exports—it was, after all, a player itself—even portraying itself as supportive of the DOE’s evaluation process. “I think they are doing a great job of looking at the issue,” said Blitz in October. “They are being conscientious. They are thinking it through. I am confident they are going to come out with a reasonably good answer…that talks about energy security, consumer pricing, and domestic growth and jobs because I think those are public interest issues.”
And in fact, that’s exactly what the DOE did in its analysis: reasonably assess the complicated impact of LNG exports across a range of economic and environmental variables. Although the Energy Department’s findings were well received by most economists across the political spectrum, Dow along with the coalition of anti-shale gas Congressmen rejected them outright.
I asked Nancy Lamb, Dow’s chief spokesperson, to arrange an interview with Blitz or any Dow executive to explain the background on AEA or a host of other puzzling contradictions: Dow’s ties to Markey, its ownership of an LNG export facility whose benefits it extols, what appears to be the abandonment of its commitment to limiting global warming and its apparent inside/outside maneuvering on a range of issues—but she said Dow declined.
“Our position on LNG exports has not changed,” Lamb wrote in a boilerplate response that echoed the talking points at the January news conference. “The report failed to give due consideration to the importance of manufacturing to the US economy. Further we do not believe that any one report gives the whole picture and we are asking the DOE to ensure that it remains cognizant of the country’s long-term growth potential when making any decisions.”
Dow, through AEA, now writes on its website that it opposes “unfettered” free trade and urges “caution.” It’s hoping to indefinitely delay a final decision by the DOE and the Administration.
Geopolitics of natural gas
According to the independently produced International Energy Agency World Energy Outlook 2012, the global energy map is being redrawn on the back of shale gas. The United States was recently facing the prospect of long-term scarcity. Now the US could pass Russia and Saudi Arabia as the world’s top energy producer during the 2020s while dramatically decreasing its carbon footprint—unless political forces conspire to turn back the clock.
Natural gas spot prices in the US currently hover around $3.70 per thousand cubic feet. Historically low prices have been a boon for consumers, who are saving billions of dollars as they switch from more expensive and far dirtier coal and oil. The boom has also led to an energy security windfall.
While natural gas prices are rock bottom in the US, they remain exorbitant in Europe and Asia—four to five times as high—where a lack of geological expertise and industry infrastructure to support sophisticated new extraction techniques, along with political opposition to fracking, has paralyzed politicians. Europe remains heavily dependent on dirty coal, Russia’s Gazprom and Middle East supplies while Asia relies on coal, nuclear energy and massive foreign oil imports.
The past few years have seen several countries, including the Netherlands, Argentina, Brazil, Kuwait, Thailand, the UAE and Indonesia, once the world’s largest LNG exporter, join the ranks of natural gas importers. More than a dozen other countries are considering or constructing import facilities. The market for US exports is potentially huge. And the potential to reconfigure geopolitical alliances in America’s favor is immeasurable, say foreign policy and energy experts.
Just a few years ago, LNG import terminals were being constructed across North America in response to the then dwindling domestic natural gas supply. Most of them are underused. Today, the public debate is no longer about how to squeeze out more domestic supplies to limit expensive oil and gas imports and cap domestic coal usage but how much natural gas we should export—and that requires approval from the DOE.
Many domestic import terminals are petitioning the government to convert their facilities for exports. So far, the administration has approved exports to nations with which the US has a trade agreement, but only one of about fifteen facility applications to export LNG to non-free trade countries—Cheniere Energy’s Sabine Pass facility in Cameron Parish, Louisiana. However, even in the case of Sabine Pass, final approval has been held up by opposition from anti-fracking environmental groups, led by the Sierra Club, and in anticipation of the now-released DOE impact study.
Competition looms if the US is slow to respond to the windfall. In British Columbia, Canada, various LNG export projects seem well positioned—geographically and otherwise—to access the lucrative Asian market.
What would be the consequences if the Dow-anti-shale gas coalition should prevail? A decision to cave to political pressures and constrain natural gas exports could rock world trade markets and even damage our green energy aspirations. “For example,” wrote Levi, “the United States has filed with the World Trade Organization a challenge to Chinese restrictions on exports of so-called rare earth minerals, which are crucial for new technologies like wind turbines, missiles and smart phones. If Washington hypocritically limits gas exports, it might as well write the Chinese brief.”
US long-term energy security may hang in the balance. Most economists believe that a healthy export market would also serve to stabilize domestic prices and supply. With natural gas so cheap and plentiful, and with margins so thin or even nonexistent, most companies are losing money even as supplies increase. Consequently, few companies are exploring for new reserves. That’s led to concerns that volatile, downward-sloping prices could abort the natural gas boom that experts say currently delivers a $100 billion yearly jolt to the American economy. Exports, the reports contend, rather than slowing the benefits of the boom, could actually increase them and put the natural gas market, in the US and worldwide, on a more sustainable footing.
And that’s the political rub. Even as support for shale gas extraction grows among more enlightened environmentalists, Dow’s AEA along with anti-shale gas congressmen, mostly Democrats, appear determined to abort the natural gas revolution any way they can.
Dow’s AEA appears so desperate to make its case that it’s taken to exaggerating its support among high profile energy experts, who are almost universally supportive of expanding LNG exports. It lists Daniel Yergin, vice chairman of IHS, which now owns the firm he co-founded, Cambridge Energy Research Associates. In fact, Yergin recently wrote an op-ed in The Wall Street Journal in support of LNG exports. “How can America, having asked Japan to reduce Iranian oil imports, turn around and prohibit the export of surplus natural gas to this key ally?” he wrote.
Many critics of exporting shale gas, including Congressmen, are environmental ideologues who have come to believe that natural gas is an unacceptable energy option regardless of its vast potential benefits. Its advanced extraction procedure, hydraulic fracturing, is just too dangerous, they say. The nation should not provide any incentive, including exporting gas, which could result in expanding gas exploration. They contend that if the US expands LNG exports, it would be exporting greenhouse gases with the fuel, making it harder to combat global climate changes.
No serious study has supported those claims. All the key studies concluded that potential environmental impacts would be limited and manageable. Even a tiny price rise would cut domestic demand, lowering the overall US carbon footprint. And international buyers are eager for natural gas so as to cut back on the most carbon-polluting source, coal, and, as in the case of Japan, nuclear energy. “[E]xports would likely reduce global greenhouse gas emissions,” Levi wrote.
Analysts also challenged the belief that exports would somehow damage the fledgling alternative vehicle market. “[T]he small price increases that would result from allowing exports would have at most a marginal impact on the use of natural gas as fuel for cars and trucks,” Levi wrote. “Blocking exports wouldn’t push natural gas into automobiles—it would mostly keep it in the ground, because there would be less incentive to extract it.”
Markey—Dow Chemical connection
While it’s not clear whether litmus test politics explains the continued criticism of LNG exports by the troika of Markey, Waxman and Wyden, it explains some of it. Why else would Markey remain so belligerent about expanding natural gas exploration when his own state actually imports almost half of its natural gas supplies—not from the Marcellus Shale formation in Pennsylvania, which is experiencing an export boom, but from Yemen?
Currently, the US imports about 8% of its gas, the bulk of that via pipeline from Canada. But a sizable chunk of LNG imports comes from Yemen and goes mostly to Massachusetts, where political wrangling fueled by advocacy groups has blocked construction of the necessary pipelines to take advantage of overflowing domestic supplies. That’s left Massachusetts vulnerable to price spikes. Earlier this year, shortages occurred after two Yemeni LNG tankers bound for Boston never sailed because of Al-Qaeda inspired terrorist attacks.
Markey has actually called for the construction of new pipelines to bring gas from Pennsylvania, but his call is falling on deaf ears in the natural gas industry, which has been the focus of his relentless attacks for years. Such are the real-life consequences of anti-shale gas politics.
His inelegant attempt to have it both ways on the issue was on stark display last spring during the Yemen crisis, when he tried but failed to twist the facts to fit his fierce opposition to LNG exports. “These natural gas supply problems highlight the importance of developing the domestic infrastructure that would allow all Americans to benefit from the low-price, abundant and secure supplies of natural gas now being produced in the United States,” he wrote to DOE chief Chu in an extended rant against LNG exports. “I believe that using our domestically produced natural gas here in America to reduce our dependence on foreign supplies should take precedence over any plans to export our natural gas.”
Unfortunately for Markey, his point A—the critical importance of upgrading our natural gas infrastructure—would be undermined by his point B—blocking new exports on the grounds that doing so would be an overall net plus for consumers and world stability.
Levi, Ebinger and most recently the DOE addressed Markey’s dubious thesis that expanding LNG exports would increase our dependence on foreign supplies. They concluded much the opposite—a growing export market would increase energy security by expanding overall supply while also providing invaluable geopolitical chips to support US foreign policy goals, all with only minor impact on domestic prices.
“[T]here is potential for positive foreign policy impacts from US entry in the global gas market, through both increased supply diversity for strategic gas-importing allies, and as a contributory factor in weakening the oil linked contract pricing structure that works to the advantage of rent-seeking energy suppliers,” Ebinger and colleagues concluded in their Brookings analysis.
Markey’s immediate response to the DOE study—reiterating stale, year-old talking points— illustrates what can happen when politicians put ideological priorities ahead of independent science and economic analysis.
“This report confirms that exporting our natural gas will lead to some big winners and many big losers in our economy,” Markey wrote in a statement posted online shortly after the release of the DOE report. “American consumers and manufacturers will be the losers, as exporting natural gas will increase domestic prices by up to 30 percent, and reduce domestic investment and wages by $45 billion per year by 2030.”
That contradicts the findings of the DOE consultants, as well as Ebinger and Levi, who all concluded there would be many winners and very few losers. Because of the net benefits to the economy, consumers would be among the biggest beneficiaries.
I asked Markey’s press spokesman, Eben Burnham-Snyder, to name even one position on LNG exports that Markey modified after the series of studies by the EIA, Ebinger, Levi and now the DOE, but he refused. “Go read Mr. Markey's letter to DOE,” was the best he could muster.
So why is Markey suddenly feeling such sympathy for Dow and other chemical manufacturers, about the only segment of American industry likely to take a serious short term hit if exports are greenlighted and artificially low gas prices rise to more stable levels? Is it pure coincidence that Dow has two large plants in or near his district, in Marlborough and North Andover, both acquired in Dow’s 2009 purchase of Rohm & Haas? They are key hubs in the company’s $14 billion chemical and materials business, and employ approximately 1,500 workers.
Dow’s flip-flops and inside-outside strategy make sense from a crude, self-interested business perspective. With economists and environmentalists on the left and the right challenging its lonely position, it’s reacting more like a caged animal than a responsible corporate citizen. Its reactions, as clumsy as they have been, are at least understandable.
But Markey? We set a higher standard for our public officials, who we hope will put the broader public interest ahead of politics.
Jon Entine is senior fellow at the Center for Health & Risk Communication and at the Statistical Assessment Service (STATS) at George Mason University.
|
0254b84cfbabbc5b5db5d63f9722b60f | https://www.forbes.com/sites/jonentine/2013/02/19/farmers-deception-at-center-of-monsanto-gmo-soybean-scotus-patent-challenge-genetic-innovation-threatened/ | Supremes Unsympathetic to Farmer's Deception at Center of Monsanto GMO Soybean SCOTUS Patent Challenge | Supremes Unsympathetic to Farmer's Deception at Center of Monsanto GMO Soybean SCOTUS Patent Challenge
NOTE: Updated at 6:50pm EST after a day of hearings on Bowman v Monsanto.
As the Genetic Literacy Project’s Jon Entine reports, an Indiana farmer’s attempts to portray himself as David in the face of heartless Monsanto-the-Goliath wither as evidence of deceit emerges.
The Supreme Court heard the first day of oral arguments Monday in a case that could upend the biotech industry and DNA patent law, and have broad impact on biotech research. The legal echoes also could extend far beyond genetics.
Said baldly, the case revolves around what appears to be a deliberate attempt by one farmer to circumvent the law. Bowman v Monsanto is the long-anticipated square-off between a 75-year-old Indiana farmer and the world’s largest agricultural biotechnology firm. The decision will turn on the minutiae of patent law, but the implications will extend to all cutting-edge technologies.
But observers at the Supreme Court report that the justices, particularly those part of the more liberal bloc, appeared unsympathetic to Vernon Hugh Bowman, the farmer at the center of the case.
Justice Stephen Breyer told Bowman's lawyer, Mark Walters from the firm of Frommer Lawrence and Haug, that Bowman, who was in the courtroom for oral arguments, could use the seed he had purchased for other purposes but could not harvest the crop from the next generation of seed.
"You know there are certain things that the law prohibits," he said. "What it prohibits here is making a copy of the patented invention. And that is what he did."
Likewise, Justice Elena Kagan clashed with Walters over his assertion that Monsanto could protect its patent rights by having contracts with farmers. "All that has to happen is that one seed escapes the web of these contracts," she said. That single seed, "because it can self replicate in the way that it can, essentially makes all the contracts worthless," Kagan added.
Without the existing patent protections, argued Monsanto's lawyer Seth Waxman, "Monsanto could not have commercialized its invention." Justice Department lawyer Melissa Sherry reinforced the St. Louis-based corporation's position, arguing that a victory for Bowman would strike a blow to all patent holders.
The Associated Press is reporting that none of the justices seemed ready to side with Bowman. "Why in the world would anybody spend any money to try to improve the seed if as soon as they sold the first one, anybody could grow more and have as many of those seeds as they want?" said Chief Justice John Roberts.
Monsanto has invested hundreds of millions of dollars developing the world’s top selling genetically modified seed and was granted patent protection for its investment. The seeds are engineered to require less weed killer than conventional farmers typically apply; conventional chemical sprays can damage crops and reduce yields. The seed is specifically modified to be used with glyphosate, marketed by Monsanto as Roundup, an herbicide considered less harmful to crops, less environmentally harmful than the range of pesticides used in conventional farming and toxicologically benign.
According to farmers and scientists, the GM seed-glyphosate package is a unique product, far better than conventional soy seeds. The Supreme Court brief filed by Monsanto notes that more than 90 percent of the U.S. soybean crop now begins with Monsanto’s Roundup Ready seeds.
Here’s the rub: Every soybean plant produces enough seeds to grow approximately 80 more plants. Because of Monsanto’s huge investment in developing its Roundup seed, the biotech firm has insisted that farmers who want to use the technology sign licensing agreements that limit their use to a single season. Applying prevailing patent law, Monsanto also forbids farmers from planting second-generation seeds harvested from first-generation crops.
That arrangement long rankled Bowman, who often wears a Monsanto hat and sings the praises of the company’s genetically modified soybean seeds designed to resist glyphosate, which he acknowledges is far milder than the chemical sprays he otherwise had used, and does not harm his crops.
Bowman’s deliberate misuse of feed grain
His stated admiration for the GM seed didn’t stop Bowman from deliberately trying to game the patent system. He purchased Roundup Ready seeds for one season, and they worked as advertised. “It made things so much simpler and better. No question about that,” he recently told National Public Radio.
But Bowman also wanted to plant a second crop of soybeans later in the year in fields where he just harvested wheat. Those late-season soybeans are risky, he told NPR, because the yields are always smaller. To cut his risk, he hatched a scheme. “What I wanted was a cheap source of seed,” he acknowledged.
Cleverly (as anti-Monsanto and anti-GMO activists would have it) or fraudulently (as Monsanto, the biotechnology industry, the US government, many universities and most patent experts believe) Bowman recognized that since Monsanto’s seeds were ubiquitous in his region, the commodity soybean seeds sold by local grain elevators would necessarily contain mostly Roundup Ready seeds.
Starting in 1999, he bought ordinary soybean seeds from a small grain elevator where local farmers drop off their harvest. “They made sure they didn't sell it as seed, which would have violated their patent agreement with Monsanto. Their ticket said, ‘Outbound grain’,” Bowman acknowledged, which means that by the conventions of the farm industry the seeds were supposed to be used as feed grain.
The farm industry has always depended upon an honor system to differentiate feed grain from crop grain. Bowman, the self-proclaimed David, chose to deliberately upend that. He guessed he could circumvent his legal agreement and save a bundle at Monsanto’s expense by no longer buying the Roundup Ready seeds and instead purchasing far less expensive local commodity feed grain. And when he treated his second crop with herbicide, he was proved right—most of the budding plants were resistant.
Activists claim Bowman was acting ethically, saying that Monsanto’s restrictions hurt farmers, cause higher food prices and, according to one friend-of-the-court brief, contribute to “the suffocation of independent scientific inquiry into transgenic crops.” Whether those assertions are true or not—most independent analysts believe such characterizations are hyperbolic or just plain wrong—the case will turn on patent law. In other words, was Bowman acting legally?
Bowman and supportive briefs by such anti-GMO crusaders as the Center for Food Safety (CFS) contend he merely ambled through a legal loophole in his license agreement that allows farmers to sell the second-generation seeds to grain elevators, which, in turn, are permitted to sell a mixture of undifferentiated seeds as “commodity grain.” Monsanto holds that commodity grain is explicitly restricted for use as feed, not cultivation, and that its use, even as a second season commodity purchase, violates the one-time use license agreement. Bowman and CFS respond that he should not be held liable because Monsanto screwed up in its patent protections, and he doesn’t believe he did anything wrong.
Understandably concerned, Monsanto sued him for infringing its Roundup Ready patents by using, without authorization, second-generation seeds embedded with its technology. Monsanto contends that if Bowman prevails in this case, farmers could save seeds from one season’s crop to plant the following year, eviscerating its hard-earned patent rights.
The farmer’s lawyer, Mark Walters, who is volunteering his time, argued that under the doctrine of patent exhaustion (interpreted most recently by SCOTUS in Quanta Computer v. LG Electronics), in using second-generation seeds, the farmer was merely reselling a patented product he’d already paid for. Walters contends he is merely restating a very old principle in patent law: If you buy something that’s covered by a patent—let's say it's a cell phone, he says as an example—you own it, outright, and the patent protections disappear.
Courts back Monsanto
The U.S. District Court in Indiana disagreed, granting summary judgment to Monsanto and awarding the company $85,000 in damages. The Federal Circuit Court of Appeals, which had already twice upheld Monsanto’s right to bar farmers from planting second-generation seeds, also sided with Monsanto, finding the company’s patent rights were not exhausted in the first-generation harvest. Even if patent exhaustion did apply, the appeals court additionally held, Bowman infringed anew by growing a crop that included Monsanto’s patented technology, effectively decimating Monsanto’s rights as a patent holder.
The Supreme Court had declined to review those two previous Federal Circuit decisions, but the justices granted Bowman’s cert petition. His lawyers then filed a merits brief asserting that the Federal Circuit ignored Supreme Court precedent set in 2008 by the Quanta Computer v. LG Electronics decision, which the farmer claimed dictates that patent rights are exhausted after the authorized sale of a patented product.
Monsanto counters that the issue is a patent holder’s right to impose restrictions on the use of its technology, which extends to unauthorized copies of patented products. Monsanto’s brief warns that if the justices adopt Bowman’s position (as Monsanto articulates it) that “patent law treats as per se unenforceable all restrictions imposed by license on the use of a patented article following an authorized sale,” the biotech industry will be devastated.
The U.S. government has sided with Monsanto. In an amicus brief, it argued that Bowman was misinterpreting the doctrine of patent exhaustion. “Under longstanding principles of patent exhaustion, an initial authorized sale of an article embodying the patented invention exhausts the patentee’s exclusive rights to control the use and sale of that article,” the DOJ maintained. “It does not, however, exhaust the patentee’s right to exclude others from making a new article embodying the same patented invention. Accordingly, even if respondent’s patent rights in the commodity seed had been exhausted, petitioner acquired no right to use that seed to make newly infringing seed.”
Legal precedents may drive SCOTUS but the real issue hanging in the balance is the relationship of patents to innovation. Every invention and new creative work eventually becomes “public property” when patents expire. In fact, Monsanto’s last patent on the soybeans Bowman used will expire next year. What would be the effect if anyone can break patent protections because they find a technical loophole in the law? asks Robert Atkinson, president of the nonpartisan Information Technology and Innovation Foundation. “When one free rider skims off the top, everyone else ends up paying more, and innovators get less to invest in the next round of innovation,” he writes.
Center for Food Safety brief undermines Bowman’s position
Bowman’s case has been taken up by the usual array of “public interest” groups, many of which are openly hostile to business and would be thrilled to see the biotech industry and especially Monsanto—its icon of evil—gutted or at least put on the defensive. Patents have “given seed companies enormous power, and it’s come at the detriment of farmers,” claims Bill Freese of the anti-GMO CFS. Citing 140 lawsuits filed by Monsanto to protect its patent, CFS portrays the company as a bully.
CFS’s claims in support of Bowman appear bizarre and in fact may help undermine his case. They contend that the “natural” purpose of all seeds is for replanting and that the patents interfere with the natural cycle of life. But those claims, however dubious, would not help Bowman much even if true. If, as it appears, he bought Monsanto’s seeds clearly labeled “Outbound grain,” then he was taking something labeled as feed grain and using it as seed for human crops. As Karl Haro von Mogel writes on Biology Fortified, this may seem like a silly distinction because grains are seeds; however, as he notes, legal systems are built upon such distinctions. “Bowman didn’t buy ‘seed’ for planting, he bought ‘grain’ for processing and changed its foreseeable use to planting as ‘seeds’ thus undermining his argument that replanting was a natural step,” he writes.
The amicus brief filed by soybean, corn and wheat farmer associations illustrates the industry view of the role that patents have played in agricultural innovation. From 1836 to 1924, the Department of Agriculture and its predecessors provided free seeds to farmers. While well intentioned, rather than producing a world freed from the tyranny of patents as envisioned—the argument offered again today by Bowman and his supporters—the program’s impact on farm production proved disastrous. Average national yield for corn production decreased from 24.3 to 20.5 bushels per acre between 1866 and 1930, according to the growers’ brief:
Although soybeans and other crops have been cultivated for centuries, advances in plant genetics were historically stifled by a lack of incentives to invest in new technologies and breeding techniques. Genetic innovation in soybeans grew exponentially, like Jack’s magical beanstalk, after this Court’s 1980 decision in Diamond v. Chakrabarty … which confirmed the applicability of utility patent protection to qualifying organisms. No other country possesses the United States’ prolific record in developing new crop varieties. Without the protection of intellectual property afforded by the U.S. legal system, seed and biotechnology companies would not have undertaken the expensive and time-consuming research necessary to improve plant technology.
The impact of the SCOTUS decision will extend far beyond the biotech industry. As Monsanto noted in its brief, “Investors are unlikely to make such investments [in biotech companies] if they cannot prevent purchasers of living organisms containing their invention from using them to produce unlimited copies.” Live vaccine development, stem cell research, nanotechnology or DNA used for medical treatments might all be impacted adversely. That’s why many universities and laboratory instrument makers, among many groups concerned that a decision for Bowman could devastate incentives for research, have joined with the federal government in siding with Monsanto’s position.
Intellectual property increasingly drives international economic competition. With this decision, expected in late spring or early summer, SCOTUS will have a lot to say about how the U.S. thinks about innovation going forward.
Jon Entine is executive director of the Genetic Literacy Project and a senior fellow at the Center for Health & Risk Communication and the Statistical Assessment Service (STATS) at George Mason University.
|
9edc84b108b230d861c6cc1e7fb721b2 | https://www.forbes.com/sites/jonentine/2013/04/02/exposing-the-anti-gmo-legal-machine-the-real-story-behind-the-so-called-monsanto-protection-act/?utm_source=allactivity&utm_medium=rss&utm_campaign=20130402 | Exposing the Anti-GMO Legal Machine: The Real Story Behind the So-Called Monsanto Protection Act | Exposing the Anti-GMO Legal Machine: The Real Story Behind the So-Called Monsanto Protection Act
Over the past week, we've seen a tsunami of stories about the so-called “Monsanto Protection Act,” more accurately known as Section 735 of HR 933. The Genetic Literacy Project’s Jon Entine reports the story behind the story.
It’s a small provision attached to a massive agricultural spending bill signed into law by President Obama last week. According to agricultural biotechnology detractors, Section 735 is the “most dangerous food act ever” and a “terrifying piece of policy”. Why? Because, they claim, among other things, it purportedly allows biotech companies to sell seeds that can cause serious consumer health problems. Here is how Gawker frames it:
Section 735 effectively shields large biotech companies, like Monsanto, from the federal courts in case something is found to be harmful in their genetically-modified seeds. Because of Section 735, federal courts would be powerless to stop Monsanto from selling their product.
Just as “shocking,” activists claim, the provision was secretly written by Monsanto, stealthily inserted into the bill in the dead of night by its Congressional backroom lackeys and then placed on the desk of President Barack Obama, who is so in hock to biotech special interests that he sold out the public and signed the bill, rider intact, therefore undermining American democracy. No kidding. That’s the way even mainstream bloggers and news outlets discuss this legislation. “Monsanto teams up with Congress to shred the Constitution,” shrieked one Huffington Post headline. Hundreds of thousands of angry anti-GMO protestors have signed online petitions expressing their outrage.
Let’s separate the facts from the fury.
The "stealth" claim is just plain wrong. This provision was drafted last year and has been in printed versions of the bill that have been circulating widely in Washington for more than nine months; no one, let alone hyper-vigilant anti-GMO campaigners, was caught by surprise. For example, late last fall, Stonyfield Farm, a division of Dannon that makes organic dairy products and is actively engaged in a range of anti-technology agricultural campaigns, ran a blog post demonizing the provision that its opponents now claim was written in the dead of night and slipped into the measure.
Even if the courts find that a (genetically engineered) crop shouldn’t be planted until more research is done about its safety, no one could stop that crop from being planted, even temporarily. This provision clearly tells us that Congress thinks public health and safety should take a back seat to the expansion of GE crops.
The Stonyfield blog post raises the second gross mischaracterization now making the rounds of anti-GMO websites and many mainstream news outlets: allegations that the provision would benefit the biotech industry at the expense of consumer safety. For example, Russia Today news claims the rider “would strip federal courts of the authority to immediately halt the planting and sale of genetically modified (GMO) seed crop regardless of any consumer health concerns.”
This characterization is hokum. What does the biotech rider actually state and what is it designed to address?
To date, no court has ever held that a biotechnology crop presents a risk to health, safety or the environment. But make no mistake: it’s not because the courts or the government approval process is lax. Just the opposite. Getting approval for any transgenic crop or food is like running a torturous gauntlet, both arduous and bureaucratic. Companies are required to provide years of internal and independent data, which are carefully reviewed by various government agencies.
Beyond that, the USDA cannot approve a new seed variety until it conducts an Environmental Assessment. This is the point where the process gets even messier—and where anti-biotech activists and lawyers do their best to gum up the works in hopes of generating a critical mass of negative public opinion.
By law, the EA must consider any and all factors relating to the "human environment," which is very ambiguously defined, leaving all kinds of legal openings for hostile groups to target. If an anti-science group such as the Center for Food Safety, the Institute for Responsible Technology or the Union of Concerned Scientists challenges the EA for not considering one issue or another, the assessment can be deemed insufficient and a new one must be ordered.
In fact, this has happened twice in recent years, with alfalfa and sugar beets. Alfalfa hay, a nutritious, easily digestible livestock feed and $8 billion a year business, is the country’s fourth-most-valuable crop. Monsanto makes GM alfalfa seeds, as part of the company’s Roundup Ready line. They are genetically modified to tolerate glyphosate, an herbicide that is commercially known as Roundup. When farmers use Roundup, which is considered a mild herbicide, instead of other harsher chemicals to kill weeds, they actually cut down on overall toxic chemical use.
After an exhaustive review, the USDA gave Roundup Ready Alfalfa the green light in 2005. But the Center for Food Safety contended that the government hadn’t adequately evaluated the potential environmental consequences, although the agency believed it had. In 2007, in Monsanto Co. v. Geertson Seed Farms, a federal court agreed with CFS, prohibiting Monsanto from selling Roundup Ready Alfalfa pending yet another assessment.
This was incredibly disruptive to thousands of farmers who had planted alfalfa, a perennial crop that does not have to be reseeded each year. The legal status of a field of GM alfalfa planted legally after the USDA had deregulated GE alfalfa was suddenly changed under the court ruling. Farmers were being told that they had to follow a new set of rules in handling their crop. For more than four years, they didn't know if the technology was going to be available for their use. The confusion and patchwork of conflicting regulations, court decisions and labeling requirements dealt a sizable economic blow to one of the country's most important export crops. The ongoing chaos was exactly the kind of commercial uncertainty that anti-GMO forces were hoping to manufacture.
The alfalfa case standoff eventually made it to the Supreme Court. The evidence in support of the safety and public benefits of GM alfalfa was so strong that in 2009, the Obama Administration had Solicitor General Elena Kagan file a brief on the biotechnology company’s behalf, even though the government was not a defendant in the appeal. To no scientist’s surprise, in June 2010 SCOTUS overturned the lower court's injunction that had prohibited Monsanto from selling pesticide-resistant alfalfa seeds.
"An injunction is a drastic and extraordinary remedy, which should not be granted as a matter of course," Justice Samuel Alito wrote for the 7-1 majority, concluding that the US District Court in San Francisco had "abused its discretion."
The temporary injunction, by then ruled an abuse of discretion, proved a financial disaster for the farm industry and many individual farmers who had suspended planting alfalfa pending a final resolution. An almost identical disaster scenario has played out over sugar beets, 95% of which are grown from GE seeds. In 2010, the Center for Food Safety and some organic farmers who stood to gain by attacking conventional and GM crops convinced a court on procedural grounds—there was no finding of environmental or health dangers—to void the five-year-old approval of transgenic sugar beet seeds. Despite no evidence of any potential harm, that November, a federal judge ordered the GE sugar beet seedlings—all but 5% of the nation's crop—pulled from the ground, as required by law. If that decision had stood, it could have destroyed as much as half of America's granulated sugar production on purely technical grounds. The saga only ended in July of last year when the USDA ruled once and for all to allow unrestricted planting of Monsanto's GMO sugar beets.
It was a victory for science, but the professional “antis” considered it a victory as well. After all, they had caused billions of dollars in unrecoverable damage to the American farm economy and rattled the cages at the corporations they considered “evil”. That’s certainly something to crow about.
The rider was specifically designed to prevent such egregious abuse of the court system and regulatory process. The legislation does not, as critics allege, allow farmers or Monsanto to sell seeds proven to be harmful. Rather, it provides legal consistency so farmers and businesses do not get yanked one way or the other based on the temporary findings of competing court systems as activist challenges make their way up the legal food chain. Going forward, the provision will protect farmers who buy GM seeds and plant them under the belief that it is legal to do so because the seeds have been subjected to extensive USDA scrutiny and approval.
The so-called Monsanto Protection Act has nothing to do with consumer safety or limiting biotech's liability for making "dangerous" products, as even mainstream news outlets, like Salon, have claimed. That's just bad reporting. In the two cases heralded by campaigners—alfalfa and sugar beets—only technical concerns about the evaluation process were raised; consumer safety and the environment were never at issue.
No product is exempt from the law when health or environmental problems are identified. If the USDA or a court determines that a biotech seed or crop or food does not pass environmental or health muster, it will be pulled from the market and banned. But until that happens, because of this provision—let's call it the Food and Farmer Safety and Health Protection Act—USDA safety determinations cannot be arbitrarily overturned by rogue courts responding to anti-science activists—which is what appears to have happened in the alfalfa case, until the Supreme Court intervened with its overwhelming 7-1 decision.
The howling over the so-called Monsanto provision is all show and little substance. Its goal was to protect farmers against rogue anti-technology campaigners. Congress and the Obama Administration knew exactly what they were doing—and they did the right thing. Anti-GMO campaigners are clearly positioning themselves for another offensive against a new wave of GM crops and foods, with new pesticide-tolerant grain varieties and a host of nutritionally enhanced grains, such as Golden Rice and drought-resistant wheat, in their crosshairs. Yes, we should all be concerned.
More on genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
8861db0c59ffafbd4566157a557a8e13 | https://www.forbes.com/sites/jonentine/2013/04/09/forestry-labeling-war-turns-ugly-as-greenpeace-bungles-logging-industry-attack/ | Forestry Labeling War Turns Ugly As Greenpeace Bungles Logging Industry Attack | Forestry Labeling War Turns Ugly As Greenpeace Bungles Logging Industry Attack
As Jon Entine of the Genetic Literacy Project reports, Greenpeace's embarrassing public apology last month for its botched attack against Canada's largest forestry company and the Canadian Boreal Forest Agreement (CBFA) it helped birth—we'll get to the details of that story soon enough—underscores the growing tensions over the forest certification programs designed to protect North America's woodlands.
Three years ago, executives from a variety of groups that can’t stand being in the room with one another—forest companies, corporations that use forestry products and anti-big business international non-government organizations (NGOs)—forged what was hailed as a breakthrough deal. The 2010 boreal agreement brought together nine environmental groups, many of them openly hostile to loggers, and 21 members of the Forest Products Association of Canada with a goal of increasing protections of 75 million hectares of forest in Canada. Essentially it was a truce between the logging industry and environmental groups, which have been at odds for decades.
Canada’s boreal forest, which remains largely untouched, rings the northern hemisphere, covering more than 60% of the country’s landmass. It’s dominated by coniferous forests, intermittent wetlands, small villages and wildlife. It’s an area of genuine contradictions: the boreal is a key source of forestry and mining products but also has a thriving, if limited, tourist industry, and the vast woodlands serve as one of the world’s primary carbon sinks. No wonder it has been the focus of the never-ending tensions between the Canadian government, aligned with commercial interests, usually at loggerheads with hard-core environmentalists, who oppose commercialization in principle regardless of the potential tradeoffs.
Under the agreement, the companies agreed to stop logging in certain areas, including valuable regions for caribou habitat, while the environmental groups agreed to back off from their anti-logging campaigns. They agreed to work together on the details of how to set aside valuable habitat for conservation while still allowing forestry companies limited harvesting in other areas.
It’s been an uneasy deal. This tension strikes a familiar chord in the classic battle between developers and protectionists, between those in government and industry who see nature as a resource that can be sustainably developed versus those who believe that vast land areas have inviolable “rights” and should not be subject to commercial use regardless of (or even in spite of) the potential economic bonanzas they might yield.
FSC v SFI
The Canadian boreal forestry mêlée is actually a skirmish in an ongoing battle between the two major forestry eco-label schemes: the Forest Stewardship Council (FSC), a favorite of campaigning greens, and the Sustainable Forestry Initiative (SFI), which was launched by a range of parties independent from but with the financial support of the American Forest & Paper Association. The SFI has since broken off and currently operates as a fully independent non-profit organization.
The two schemes have different roots and practices but converging philosophies—although one would never know that from listening to the high decibel rhetoric when forestry labeling initiatives are debated. Both plans arose in response to the 1992 Earth Summit in Rio de Janeiro that called for a focus on “sustainable” and “smart growth” development. While both are legally “voluntary”, meaning that they were not created by governments but by private firms, NGOs or coalitions of producers and consumers, in reality they have evolved into mandatory seals of approval in global markets. Key commercial actors, such as large retailers, traders or processing companies, now require their implementation.
Some voluntary standards are also referenced in government regulations. In fact, the US government is currently in the crosshairs of a contentious exchange between SFI and FSC supporters as to what the government should require in construction projects to meet federal sustainable guidelines. Many projects receiving taxpayer subsidies favor FSC-certified wood.
The FSC was formed by a coalition of advocacy groups including the Rainforest Action Network, Friends of the Earth and the World Wildlife Fund. It now represents more than 800 groups, mostly outside the United States, where it certifies more than 90% of its land. Organizations other than FSC certify 75% of North American forests.
More aggressive FSC members like Greenpeace, ForestEthics and the Dogwood Alliance see themselves as 'white hats'—unabashedly and aggressively campaign-focused, anti-corporate, opposed to fossil fuels at all costs and dismissive of the role of biotechnology and pesticide management in sustainable forestry. To them, SFI represents 'black hat' "Big Timber" and is nothing more than a "greenwashing scam." They launch attack campaigns when they don't get their way.
“Sometimes companies need a little encouragement,” brags ForestEthics on its website. “When companies refuse to change their harmful practices, ForestEthics holds them publicly accountable. We get creative with online and offline actions, including protests, websites, email campaigns and national advertisements. No corporation can afford to have its brand be synonymous with environmental destruction.”
Because it was cobbled together over years and is dominated by an anti-development bias, FSC’s rules vary across countries and regions. In fact, FSC labels do not disclose under which standards a wood product may have been certified. That means that product claims can’t be verified in many cases.
There are other anomalies, especially when it comes to set-aside standards. For example, supposedly green Sweden has to protect only 5% of its forests while the United Kingdom has a 15% requirement; certain areas in the U.S. are required to restrict 10-25% of a given property. In countries without national standards, FSC permits certification authorities to use "interim" clear-cut limits and so-called "green up" requirements for new growth tree height that don't necessarily reflect standards backed by the International Organization for Standardization (ISO) and other global initiatives.
These anomalies irk some early FSC supporters, such as Simon Counsell, who has set up a website, FSC Watch, to monitor the problematic practices of the green group. The monitoring group recently attacked the FSC for its policies in Sweden, charging that there is a growing consensus that the "'Swedish model' of forestry is failing to protect biodiversity, and old growth forests continue to be clear-cut, including those with FSC certification."
The FSC is also controversial in the developing world. When it was first formed, there was widespread concern that pristine forests were being “raped” by developers in cahoots with corrupt governments. Its response was to set up a standard that denied certification to any operations undertaken on land converted after November 1994. Although the motive for the action was understandable, it’s proven a crude and unworkable tool. It has limited application in many countries pursuing reasonable policies, in effect favoring the developed world, which long ago started converting its usable timberlands. Understandably, many developing countries, like Indonesia, feel constrained by restrictions imposed on them by what they consider anti-development campaigners.
What about the SFI? Its founding in the mid-1990s led to immediate charges of cronyism. In 2005, it linked with European forestry groups, such as the Programme for the Endorsement of Forest Certification (PEFC), the world’s largest forest certification umbrella organization. While the FSC has over 30 different standards around the world —which makes it more fractured and confusing—SFI has one single standard.
LEED and the schism in the United States
Green groups remain adamant that the differences between the labeling initiatives are vast and unbridgeable, dismissing SFI as a “creature of vested interest”. One would think by listening to them that only businesses and loggers support SFI. In fact, groups like the Conservation Fund, National Association of Conservation Districts, National Council for Air and Stream Improvement and the Wildlife Society vouch for the certification program’s commitment to sustainability.
ForestEthics and the Dogwood Alliance have emerged as the FSC's pit bulls, going so far as threatening and bullying companies they consider "weak links," susceptible to consumer campaigns. They've targeted Kroger, KFC/Yum Brands, and even high-end brands such as Louis Vuitton for using SFI-certified packaging, and they have convinced at least 21 prominent brands, including Kimberly-Clark and Office Depot, to phase out the SFI label and pushed Target into adopting FSC-friendly policies.
Are there significant differences between the competing schemes? Independent observers see a convergence of standards as pressure for transparency on both groups has grown. Canada’s EcoLogo and TerraChoice, part of Underwriters Laboratories Global Network, each rate SFI and FSC identically. A United Nations joint commission recently concluded: “Over the years, many of the issues that previously divided the systems have become much less distinct. The largest certification systems now generally have the same structural programmatic requirements.”
University-based researchers who have scrutinized the two labeling programs have found few meaningful differences. For example, North Carolina State professor Frederick Cubbage, North Carolina State University Forest Manager Joseph Cox and a team of researchers concluded that while SFI and FSC "have a slightly different focus, both prompt substantial, important changes in forest management to improve environmental, economic, and social outcomes."
The convergence in standards has not stalled the politicization of the labeling competition. The two systems are currently going head to head in the US. The FSC has been entrenched because of the support from the US Green Building Council. USGBC adopted FSC standards in the mid-1990s, when it was the only game in town, for its LEED (Leadership in Energy and Environmental Design) rating system. It’s remained loyal because of fierce lobbying by green activists. Hundreds of cities and agencies in the US now mandate LEED standards, which means that FSC receives preferential treatment in building projects across the country.
This has created some unintended consequences. Because the FSC label accounts for just one quarter of North America's certified forests, three quarters of the wood from the continent's certified forests is not eligible for LEED sourcing credits. As a result, LEED creates incentives for green building projects to import wood from overseas, resulting in the browning of the supply chain from the excess carbon emissions generated by shipping. Nonetheless, activist greenies have dug in their heels, determined to do everything in their power to delegitimize competing systems.
The USGBC has never explained why only FSC forests can receive LEED credits. Michael Goergen, Jr., CEO of the Society of American Foresters, has criticized the USGBC for not including other standards, stating, “FSC or better is neither logical nor scientific, especially when it continues to reinforce misconceptions about third-party forest certification and responsible forest practices.”
Some believe LEED's FSC-only framework has led to a loss of jobs. Union leader Bill Street of the International Association of Machinists stated that the "ideological driven 'exclusivity' of FSC means that systems such as LEED contribute to rural poverty and unemployment while simultaneously adding economic pressure to convert forest land to non-forest land uses."
Growing concern about the rigidity of the LEED program has led to the emergence of a competing green building initiative in the US. Green Globes, run by the Green Building Initiative, recognizes the SFI and is now in the running along with FSC to be the preferred federal certification program. The Defense Department, one of the earliest LEED adopters and a huge source of new construction, is currently not allowed to spend public funds to achieve LEED’s “gold” or “platinum” certification because of questions about whether the added costs are justified by the benefits.
War breaks out in Canada
These schisms have played out in Canada, where Greenpeace launched its rogue campaign to bring down the fragile sustainability coalition, which it had only tepidly embraced. The CBFA clearly stipulates that Canadian forest managers can certify their practices to certificate programs run by the FSC or by the Sustainable Forestry Initiative and its ally, the Canadian Standards Association. That has rankled the extremist NGOs, like ForestEthics and Greenpeace, which advocated a more adversarial stance, convinced that the SFI and the Forest Products Association of Canada were secretly undermining the agreement. They registered their disapproval of the CBFA from the beginning and have been threatening to undermine it. Finally, late last fall, they did just that.
In December, Greenpeace pulled the trigger, claiming it had proof from GPS-tagged video and pictures that one of the coalition industry members, Resolute Forest Products, was building logging roads in areas forbidden by the agreement. It released pictures it said were taken in August 2012 in Quebec’s Montagnes Blanches region, and it promptly resigned from the CBFA.
“This is a deal breaker for us," said Greenpeace spokeswoman Stephanie Goodwin. “There is no agreement left to uphold. With the boreal forest under threat, the only responsible decision for Greenpeace is to pursue other pathways to obtain results in the forest.”
Greenpeace’s action reflected the general sentiment of the radical wing of FSC supporters. They’ve long viewed the forestry industry as a whipping boy to demonstrate the clout of environmental greenmail—threatening corporations with public campaigns to get them to capitulate to their demands, which often include economic payoffs in the form of contributions to their campaigns. In essence, that’s how CBFA came into existence. Canadian foresters reached the truce only after a vicious “Do Not Buy” campaign launched against its members that claimed that the boreal was under imminent threat—although no independent Canadian government or international agency agreed with those hard-edged NGO allegations.
Unlike Kimberly-Clark and Quebec-based hardware and lumber retailer Rona, which buckled under harsh criticism and paid greenmail, Resolute fought back, providing documentation that the allegations were untrue. It supplied "concrete milestones" that it had reached for caribou protection and the implementation of best practices.
When its prey did not drop, Greenpeace reloaded and fired again. Spokesperson Shane Moffat trumpeted "Greenpeace's science-based advocacy for responsible forestry" as the group issued a report, Boreal Alarm, that threatened to wreak havoc on Resolute's brand if it didn't junk its logging practices, already approved under the terms of the CBFA coalition, in Quebec, Ontario and Manitoba.
Greenpeace and its key allies were surprised at Resolute's resoluteness. But the company believed it was standing on firm factual ground and refused to be bullied. Finally, in a huge embarrassment, on March 19 the activist group admitted it had bungled its "investigation" and that the supposedly unimpeachable videos and photos were just plain wrong. Even as it crowed about its 40 years of commitment to "best available science and research," Greenpeace admitted it relied on "inaccurate maps" before launching its highly public and damaging attacks.
“We felt it was imperative to own up to our error,” said spokesperson Goodwin. Yet, Greenpeace continued to oppose the CBFA, saying it would have quit the organization even if it hadn’t fumbled its campaign.
What do we make of this? As Peter Foster points out in an analysis in the Financial Post, Greenpeace's "take no prisoners" strategy is hardly unique—it mirrors the aggressive tactics used by the FSC in establishing itself as a powerful voice in the forestry eco-label movement. Organizations that are openly hostile to industry and often ignorant of basic business practices demand payoffs from companies, which usually fork over their "dues" in fear of being the target of highly public smear campaigns. It's greenmail—blackmail at the hands of so-called green campaigners.
That's why it's so important that there are choices when it comes to eco-labels, particularly in the forestry management area. Many FSC proponents are decidedly anti-development and opposed to controversial technologies, including sustainable biotechnology. The SFI does not resort to or encourage greenmail; it's less confrontational, which clearly does not sit well with its harshest critics, such as Greenpeace and other aggressive environmental groups.
Policies regarding the procurement of timber, use of building codes and what businesses can sell to their customers should be informed by facts and science, not scare tactics. Greenpeace’s deception is only the latest propaganda effort that has muddied rather than clarified the issues surrounding forestry practices. With a majority of forests lacking certification, we need common-sense incentives and more certification options to achieve sustainable forestry management goals. Consumers and the general public deserve much better than the disinformation campaigns that have shadowed this debate.
More on genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
6cdd965aa232182d70ca50de1d58ea8c | https://www.forbes.com/sites/jonentine/2013/04/18/national-resources-defense-council-nrdc-champions-shoddy-journalism-on-endocrine-active-chemicals/ | Natural Resources Defense Council (NRDC) Champions Shoddy Journalism on Endocrine Active Chemicals | Natural Resources Defense Council (NRDC) Champions Shoddy Journalism on Endocrine Active Chemicals
As Jon Entine of the Genetic Literacy Project reports, the NRDC is not exactly known for scientific nuance. So, there was little surprise when blogger Mae Wu took to the cyberwaves recently to plug an NBC Dateline story promoting the alleged dangers of "endocrine disrupting" chemicals.
According to Wu, we should all be shocked—yes shocked—that an NBC producer and her family found trace chemicals in their urine—microscopic amounts of BPA, triclosan and phthalates—all of which are approved and have been found to be not harmful as commonly used, according to the Environmental Protection Agency.
But that didn't stop NBC and Wu from hyping what amounted to chemophobia. The scare tactic in this case was insinuating that the presence of common chemicals in our urine is dangerous. Journalists who do not understand risk analysis make this mistake all the time—ignorantly, more than likely, in NBC's case, as the reporter had no background in toxicology or science in general; but cynically in the NRDC's case, whose unstated mission, it seems, is to scare people about chemicals.
What NBC and Wu never disclosed in their respective reports is that the presence of chemicals in our urine is neither unusual nor, in almost all cases, anything to be concerned about. Minuscule traces of substances found in our urine can sometimes be meaningful, but they are usually just data noise—an artifact of high-tech, ultra-sensitive biomonitoring techniques. The dose and exposure time, not the presence of a chemical, determine its toxicity.
NBC found tiny amounts of BPA, a chemical investigated and approved numerous times by the Environmental Protection Agency—most recently one year ago in a direct rebuke of an NRDC suit. The Centers for Disease Control and Prevention (CDC) had previously found traces of BPA in the urine of more than 90% of adults and children. That sounds frightening, but not to a scientist. How scientists and journalists frame this often-stated fact is a good barometer of their understanding of toxicological risk—whether they genuinely wrestle with complex science or are mouthpieces, intentionally or not, for a predetermined, chemophobic perspective.
Yes, we encounter BPA, phthalates and dozens of other common chemicals every day; and yes, they show up in our urine. It’s estimated that more than 160 chemicals can be detected in human urine, many of which are potentially dangerous if consumed at high enough doses over a long enough period of time. However, our liver regularly detoxifies chemicals from the environment and food, which is why we don’t keel over from drinking coffee, which has dozens of “killer” chemicals.
The CDC has repeatedly stated that while biomonitoring "can … help scientists plan and conduct research on exposure and health effects," the presence of a chemical—whether BPA, triclosan, a phthalate or some other substance targeted by advocacy groups—does not mean that it is harmful or that it "cause(s) an adverse health effect."
In the case of BPA, the FDA, reflecting the emerging scientific consensus that there is far more smoke than fire on the issue of so-called endocrine disruption, concluded, “[O]ral BPA administration results in rapid metabolism of BPA to an inactive [and therefore harmless] form.” The same mechanism is in place to detoxify many other so-called endocrine disrupting chemicals. The same is true for phthalates and triclosan, the other chemicals demonized by both NBC and the NRDC.
Phthalates in the crosshairs
Wu makes hash of the genuine scientific knowledge about all three chemicals. To dissect her shoddy reporting, I'll just focus on one—the class called phthalates. Phthalates are plasticizers used to increase the flexibility and durability of a product. There are dozens of different types, but nine major ones are used in thousands of consumer and industrial applications including cosmetics, cables, flooring, medical devices and children's vinyl backpacks and toys. NRDC's website lumps them all together indiscriminately:
Phthalates are known to interfere with the production of male reproductive hormones in animals and likely to have similar effects in humans. Their effects in animal studies are well recognized and include lower testosterone levels, decreased sperm counts and lower sperm quality. Exposure to phthalates during development can also cause malformations of the male reproductive tract and testicular cancer. Young children and developing fetuses are most at risk.
A review of the evidence suggests that NRDC is far off the mark when it casually writes that phthalates are “likely to have similar effects in humans.” No study—not one—has shown that. Few chemicals on the market today have undergone as much scientific scrutiny as phthalate esters. Activists and industry groups pitted against each other in the debate have no shortage of studies they can invoke as ammunition. But one thing is clear: almost all of the evidence cited by anti-chemical campaigners is based on research linking phthalates to reproductive problems in rodents exposed to dose levels far higher than any human might face.
The NRDC's misstatements about phthalates are compounded by the fact that, like many activist organizations, it willfully confuses different types of the chemical. Scientists draw distinctions between so-called low molecular weight phthalates—DEHP, BBP, DBP and DIBP—and high weight ones such as DINP, DIDP and DPHP. The low weight phthalates are slightly more volatile and can off-gas in minute amounts—though not at toxic levels. As in the case of BPA, science bodies around the world have found that low phthalates taken into the body are safely metabolized. Nonetheless, some regulatory bodies have voted in precautionary bans based entirely on rodent studies.
The long-term regulatory fate of the high phthalates is less sure. The chemical is ubiquitous, used in PVC/vinyl products as well as in hoses, shoe soles, sealings and many industrial processes. From a chemical perspective, high-weight phthalates are tightly bound, more stable and more resilient than low phthalates. They've been found safe time and again. Under the eye of activist groups and as required by order of Congress, the US Consumer Product Safety Commission (CPSC) research organization known as the Chronic Hazard Advisory Panel, or CHAP, is expected to issue an updated scientific review soon. Pending the results of the CHAP review, there now exists a temporary ban on any child-care article that contains more than 0.1 percent of DINP, DIDP or DNOP.
It's purely precautionary and unwarranted based on a dearth of evidence. The CDC offers a comprehensive list of links to a slew of scientific research on the chemical—none of which points to any serious human consequences. There is no cumulative buildup, and the chemical is metabolized quickly by the body and excreted, noted Antonia M. Calafat of the CDC. "There is no consensus at present whether the phthalates are causing adverse health effects in humans," she added.
Two state-of-the-art reports involving human monitoring make hash of the NRDC's fear mongering. A comprehensive study conducted in 2004 by the Children's National Medical Center and the George Washington University School of Medicine showed no adverse effects on organ or sexual functioning in adolescent children exposed to phthalates as neonates. The same team evaluated infants in a 2010 study and reconfirmed the negative findings. Another more recent study has shown that even high levels of phthalates had no effect on the genital development of marmosets, let alone humans—activist claims to the contrary notwithstanding.
Precautionary concerns prompted the Australian government to undertake yet another review, released just last year, of the most common high phthalate plasticizer, DINP. The National Industrial Chemicals Notification and Assessment Scheme (NICNAS) crunched data provided by the US CPSC, which had prepared it for the CHAP analysis. NICNAS's findings: "Current risk estimates do not indicate a health concern from exposure of children to DINP in toys and child care articles even at the highest (reasonable worst-case) exposure scenario considered." That means even pregnant women and children are not in harm's way. The scientists added: "No recommendations to public health risk management for the use of DINP in toys and child care articles are required based on the findings of this assessment."
Trace phthalates not dangerous
Rather than reviewing the contextualized evidence about chemicals, the NRDC merely sneers. In her post, Wu also pumped a recently released University of Washington study finding micro-traces of phthalates in people who ate an organic diet, claiming it showed that "even when you do the 'right' thing, it is exceedingly difficult to eliminate phthalate exposure." That may be true, but as scientists stress time and again, exposure does not equate with danger.
The report cited by the NRDC appeared in Nature in February. Pediatrics professor Sheela Sathyanarayana and her team measured the exposure of 10 families—an absurdly low number for a study of this kind—to phthalates and BPA. They supplied half the families with fresh, local and organic foods that didn't come into contact with plastic during preparation or storage. They then measured phthalate and BPA metabolites in the participants' urine—something scientists, including those at the CDC, say is relatively useless in determining toxic exposure—before, during and after the diet changes.
The results were contradictory, as often happens with such small sample sizes. Bizarrely, the new organic diet substantially increased exposure to one type of phthalate, DEHP, suggesting the study was probably contaminated, raising questions about all the data and its conclusions. The researchers then tested the phthalate concentrations in the foods fed to the participants. They found that dairy products (butter, cream, milk, and cheese) and spices (ground cinnamon, cayenne pepper and ground coriander) had minute amounts of DEHP.
Besides the small sample size—one might need tens or even hundreds of subjects for the study to have much validity—the research was dogged by other problems. The results are an outlier when compared to other exposure studies. The use of spot urine samples with such small group sizes is not a rigorous approach. Additionally, as the authors note, they didn't present values corrected for creatinine, which otherwise allows researchers to adjust for urine concentration—how dilute it is, depending on how much water one drinks. That throws the entire study into doubt. They also didn't analyze the diet in a rigorous way. The phthalates could be a biomarker of dietary fat consumption, which means the participants could have been eating a higher than typical amount of dietary fat or significantly more food.
So what should we take away from this research? According to Sara Jannsen, another NRDC staffer, who blogged when the data were first released, everyone should be on red alert. "As demonstrated in this study, the current levels of exposure are not safe," she wrote.
I contacted the lead researcher to see whether her study agreed with the NRDC's assessment. "The premise of our study was to see if we could reduce phthalate and BPA concentrations with either an educational handout or food replacement – not to examine health outcomes," Sathyanarayana wrote me.
I then asked the professor about her use of the word “contamination” to describe the presence of phthalates in the food. Among scientists, the word is a technical term that points to intermingling, and does not speak to whether something is actually “polluting” or “dangerous,” which is the exaggerated way activists interpret that term.
Entine: “There appears to be an assumption running through [your article] that the presence of a chemical = contamination (used in the pejorative rather than the descriptive sense) = harm which then justifies regulatory intervention of some unstated nature. Is that in fact the argument you are making?”
Sathyanarayana: “No, I was not making that argument.”
So, here we have a problematic study hyped erroneously by science-challenged activists only to be picked up by science-absent television journalists only to be recycled back to the public by anti-chemical activists. At the center of this fiasco: the NRDC.
Any wonder we have a science education crisis in our country? Let's hope that the relentless anti-science drumbeat doesn't result in a chemical regulatory crisis as well.
More on genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
8fa76d42a56cd15998f81d4ad43fff7d | https://www.forbes.com/sites/jonentine/2013/05/08/making-babies-selling-embryos-despite-ethic-concerns-address-genuine-needs/ | Making Babies: Selling Embryos, Despite Ethical Concerns, Addresses Genuine Needs | Making Babies: Selling Embryos, Despite Ethical Concerns, Addresses Genuine Needs
As the Genetic Literacy Project reports, the desire of infertile couples to have children is not a 'crime against nature,' as fringe 'responsible genetics' organizations argue.
The latest round of ethical contretemps is an intriguing April article in The New England Journal of Medicine, “Made-to-Order Embryos for Sale—A Brave New World?” which discusses—comprehensively and dispassionately—many of the concerns raised about embryo donations, whether gifted or for sale.
It was a response to the controversy touched off last fall by a report in the Los Angeles Times featuring a Davis, California for-profit embryo selling business that opened in 2010. That story stirred an ethical tizzy. “I am horrified by the thought of this,” the article quoted Andrew Vorzimer, a Los Angeles fertility lawyer, who voiced his belief that there was a huge and disturbing distinction between clinics that arranged “friendly” donations of embryos and ones that paired anonymous ones. “It is nothing short of the commodification of children.”
As if on cue, the NEJM report, written by I. Glenn Cohen, a lawyer and Co-Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard University, and Eli Y. Adashi, physician-scientist and Immediate Past Dean of Medicine and Biological Sciences at Brown University, brought out a familiar rebuke from the Center for Genetics and Society, which called the "made-to-order" IVF model "truly terrifying."
First, let's acknowledge the "eeeew" factor. The idea of selling any body part, let alone the combination of sperm and egg that leads to the formation of an embryo, can be disconcerting at first (or even second) thought. I would argue it's far less provocative than aborting embryos (disclosure: I'm a strong abortion rights supporter), a key point CGS—known for its selective and highly idiosyncratic ethical choices—only obliquely addresses, and then in a way dismissive of abortion opponents.
The question that CGS fails to ask—though more mainstream ethics groups and abortion rights advocates address it all the time—is to what degree we “own” our own bodies. There are already several ways in which people can sell their bodily parts or products, ranging from livers to breast milk to bone marrow, and from blood to hair. In fact, the shortage of sperm and egg donors (in 2010, the last year for which data in the United States were available, fewer than 1000 embryo donations were recorded) has prompted robust discussions around the world about the potential benefits and challenges of a for-profit model.
Twenty-five years ago, I reported a story for NBC News about the nascent market for selling kidneys that was then budding in India. The practice was condemned by many people in the US, particularly ideological liberals, as a commodification of human life—even as thousands of people died each year on waiting lists because of a shortage of donated kidneys. Now the New York Times runs opinion pieces endorsing it. “People should not have to beg their friends and family for a kidney, nor die while waiting for one,” wrote Andrew Berger, a research analyst for GiveWell, a nonprofit that works closely with donors, last year.
There’s now a groundswell of support for the kidneys-for-sale model, particularly among those with kidney disease and their families, disgusted by the authoritarian views of so-called ethical gatekeepers that have intimidated lawmakers. Many bioethics organizations such as the Markkula Center for Applied Ethics at Santa Clara University—which 15 years ago objected to the buying and selling of kidneys—have now come full circle, favorably featuring those who believe a for-profit model has virtues. Center assistant director Miriam Schulman recently cited an article in the Kidney International Journal of the International Society of Nephrology, quoting AD Friedman and AL Friedman:
At least debating the controlled initiation and study of potential regimens that may increase donor kidney supply in the future in a scientifically and ethically responsible manner, is better than doing nothing more productive than complaining about the current system’s failure.
It's clear from the Los Angeles Times article that California Conceptions, one of a small number of for-profit embryo donation centers in the US, is serving a genuine need. For infertile couples, making babies is not cheap. The clinic appears to be thriving by providing a service to desperate infertile couples who cannot afford the astronomical price tag, which starts at $20,000 and can sometimes exceed $100,000, for the hit-and-miss adventure of multiple rounds of in vitro fertilization (IVF) using donated embryos. Dr. Ernest Zeringue offers his Davis, California patients a reassuring guarantee: $9,800 or your money back.
Invoking the specter of eugenics, as CGS does, is unpersuasive. Forced eugenics, as practiced in the 1920s and 30s and supported most aggressively by “reformers” on the left and right (including the founder of Planned Parenthood) who championed sterilization laws, is clearly inappropriate. That’s not what’s on the table, however. We practice eugenics—which merely means ‘good genes’—all the time. Birth control, nonprofit embryo donations, pre-conception DNA screening tests, amniocentesis and even Match.com for baby-desirous singles who “select” potential mates based on targeted qualities, such as income and education, are all forms of eugenics—accepted and even celebrated by society. Abortion, widely supported by libertarians and political liberals, is a form of eugenics. The selective approbation attached to for-profit embryo donation comes across as just plain odd.
Cohen and Adashi, the NEJM article authors, offer thoughtful guidance through the ethical thicket of embryo donation. As they note, there is really only one critical difference between the current, expensive model that excludes the majority of people who need this service and the for-profit model: a legal framework. The sale of gametes—human eggs and sperm—is already legal and widespread around the world. The crucial issue, it would seem, is “the lack of clear legal guidance as to the parentage of the embryos in question.”
As the NEJM authors note, “[I]t may be difficult to claim that respect for personhood requires that the sale of embryos be prohibited at a time when parentally sanctioned embryonic destruction (with or without the generation of a human embryonic stem-cell line) is being practiced. Even if one believes that embryos deserve special respect not granted to gametes, it is far from clear why the sale of embryos to facilitate family building is any more contrary to that respect than the destruction thereof.”
Carping about, or in some cases ignoring, the failures of the current IVF system seems the preferred choice for those opposed to even debating the benefits and challenges of a for-profit embryo market. Unless we as a society are determined to reserve the right of reproduction for wealthy infertile couples alone, we should welcome options.
More on genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
54eb77f88b8bcf61db93d2631837e568 | https://www.forbes.com/sites/jonentine/2013/05/21/organic-lobby-attacks-biotech-advances-obscures-own-sustainability-and-nutrition-doubletalk/ | Organic Lobby Attacks Biotech Advances, Obscures Own Sustainability And Nutrition Doubletalk | Organic Lobby Attacks Biotech Advances, Obscures Own Sustainability And Nutrition Doubletalk
As the Genetic Literacy Project reports, the organic industry has a direct commercial interest in sowing confusion and doubt about genetically engineered crops and food ingredients derived from them.
The anti-biotech disinformation efforts have been in full gear over the past week. The leading critic is the Organic Consumers Association led by Ronnie Cummins, with help from foodies like Michael Pollan and Mark Bittman. Recently, however, the OCA has been joined in its demonization campaign by what have been considered more mainstream organic lobbying groups.
The OCA has long targeted conventional agriculture, but its greatest ire has been reserved for biotech crops and foods. Its home page features a litany of anti-science posts—mostly tirades written by Cummins or from well-known anti-biotech advocacy groups, usually with no reputable sources linked. The centerpiece of its current campaign is a guide titled "GMO Myths and Truths." If only to ridicule them, OCA lists claims made by prominent scientists and endorsed by every major science organization of note in the world, including in Europe, where politicians, but not scientists, have promoted bans and restrictions. According to OCA, these miscreants falsely believe genetically modified crops:
Are safe to eat and can be more nutritious than naturally bred crops
Are strictly regulated for safety
Increase crop yields
Reduce pesticide use
Benefit farmers and make their lives easier
Bring economic benefits
Benefit the environment
Can help solve problems caused by climate change
Reduce energy use
Will help feed the world
According to every leading national and international science body, those statements are accurate, although the specifics of each claim are complicated and worthy of nuanced discussion. But one won't find that kind of serious dialogue at the OCA site. Rather, it promotes blind anti-biotech advocacy and non-education, making the sweeping and false statement that "a large and growing body of scientific and other authoritative evidence shows that these claims are not true."
The OCA’s “authoritative evidence” is contained in “GMO Myths and Truths,” written by anti-biotech Earth Open Source, a British NGO that bills itself as committed to “collaborative approaches for sustainable foods.” Like OCA, it is a campaigning organization that has no scientists of any note on staff and no track record of serious science analysis or journalism.
From a science perspective OCA and its associated groups are a mess of a resource for anyone interested in a discussion of the data on the environmental and health impacts of biotech crops and food—let alone an honest appraisal of the benefits and limitations of organic agriculture. Unfortunately for public discourse, the OCA’s misinformation campaigns have become templates for other organic and foodie groups, obscure and mainstream.
Organic anti-biotech campaigners
Just last week, for example, the popular website Eating Local and Organic published a bizarre post purportedly addressing the question "What is GMO?" Its sole example: gene-altered tomatoes. It claims that the tomato was modified by a process that causes tumors in rodents, killing them, implying that humans face the same fate. "The foreign DNA ends up inside the good bacteria in our gut that's responsible for digestion," the unnamed writer for the site writes. "Have you wondered why so many people are needing pro-biotics these days? What about all of those Activia commercials to 'make you regular?'"
This is fear mongering at its baldest: health problems, including the threat of gastronomic suicide, must ultimately be caused by conventional agricultural products containing GMOs!
In this case, the anti-biotech argument focuses on a product that is not even available—the Flavr Savr tomato. The source for these outrageous claims is a "documentary" called "The Future of Food" produced by an anti-biotech front group founded and funded by Cummins' OCA. Like almost all anti-biotech claims by radical NGOs, it has a speck of truth embedded in exaggerations and flat-out misstatements. In 1994, the Food and Drug Administration approved the first commercially grown genetically altered food for human consumption, a tomato called the Flavr Savr, which was altered to slow the ripening process, preventing it from softening while still allowing it to retain its natural color, flavor and nutritional value. The tomato was evaluated extensively by independent and industry scientists, as well as by the Food and Drug Administration, and found healthful and environmentally benign.
To get those characteristics into the tomato and into transgenic crops, geneticists use what's called a "promoter," a nucleotide sequence that acts like a motor driving production of a gene's message. The article on the organic site—and similar claims on hundreds of other foodie and anti-biotech web pages—makes the alarmist claim that something weird and dangerous must be going on because the promoter is carried by a virus, and in this instance because that virus commonly infects cauliflowers (it is known as the cauliflower mosaic virus, or CaMV).
In fact, viral vectors are key to bioengineering; they have been rendered noninfectious, are just transporters and are utterly harmless. The FDA extensively evaluated the Flavr Savr tomato, concluding that it “is as safe as tomatoes bred by conventional means” and that the process used to make it is “safe for use as a processing aid in the development of new varieties of tomato, rapeseed oil and cotton intended for food use.”
Despite the new tomato’s obvious benefits, activists campaigned against it, calling it a “mutant veggie.” It never caught on and was eventually discontinued. A safe and nutritious food was removed from the market by a disinformation campaign. Even today, palpably false accusations about the tomato’s safety are re-circulated by Cummins and by other anti-biotech campaigners, such as Jeffrey Smith, who falsely claim that the Flavr Savr tomato or its ingredients killed rats in lab tests. (In fact, in lab tests, some rats fed an exclusive diet of tomatoes did show esophageal lesions, which speaks to the acidity of tomatoes; there was certainly no evidence of toxicity as Cummins and Smith have implied).
A sad twist in the fevered efforts of anti-biotech advocates is the degrading of the integrity and credibility of more mainstream organic groups. Although the leaders of the Organic Trade Association sometimes distance themselves from Cummins' anti-science guerilla tactics, they now swim in the same cesspool of pseudoscience. The OTA no longer publicly rebukes his outrageous statements, and in fact it ends up circulating many of his insinuations or outright falsehoods.
The OTA's primary vector is its regular "consumer survey" that trumpets the anti-biotech beliefs of its greenwashed supporters. Its latest tracking poll, released in March, indicated "32 percent of parents who learned about GMOs in the news are significantly more likely to increase their organic purchases." Those numbers have risen in recent years—not because of any new data suggesting biotech crops are harmful. Rather, it's the result of scare campaigns, which the OTA, like its less reputable cousin OCA, encourages.
“Results from the latest consumer survey conducted for the Organic Trade Association (OTA) reveal that as U.S. families are becoming increasingly aware of the presence of unlabeled genetically modified organisms (GMOs) in foods in the marketplace, they turn to organic as the food labeled by law to not have been made with genetically engineered ingredients,” the OTA boasts in its press release. That news release drove hundreds of news articles and thousands of posts that ricocheted through the Internet echo chamber.
Organic crops no more nutritious and less sustainable
The organic industry has been growing in part by promoting the false claim that organic products are more nutritious than conventional varieties. According to its latest survey, "Families continue to cite their desire for healthful options, especially for their children, in choosing organic foods." That's greenwashing; study after study, going back to the 1960s, has found that organic foods are neither safer nor more nutritious than conventionally grown crops. The most recent, considered definitive—and in line with all major past studies—examined 237 scientific reports from the past 50 years evaluating the nutrient content of organic and conventional foods. Researchers at Stanford University concluded that organically and conventionally produced foodstuffs are comparable in their nutrient content.
Organic supporters also ignore the sustainability contradictions at the heart of their passion. Although organic farming may be environmentally benign when producing small quantities of crops for regional markets, it is environmentally precarious on a large scale. In 2008, as part of its Census of Agriculture, the USDA conducted the Organic Production Survey, the largest-ever study of organic farming yields. As Ramez Naam has written in his recent book on sustainability, The Infinite Resource, it takes one and a half times to two times as much land in the U.S. to grow food organically than it does to grow food via conventional methods. That, in turn, puts more pressure on farmers around the world to grow more. In the developing world, that often means slashing and burning forest into farmland, a process that emits a tremendous amount of carbon dioxide into the atmosphere and harms both the water cycle and species that live in forests.
In other words, although organic farming might require the use of fewer pesticides (genetically modified crops also require far less pesticide, of course), its broader impact could be environmentally disastrous: it would require more acres cut out from virgin woodlands and an estimated 5 to 6 billion additional head of cattle to produce enough manure to fertilize that farmland—and there are only about 1.3 billion cattle in the world today.
“Clearing that much land would produce around 500 billion tons of CO2, or almost as much as the total cumulative CO2 emissions of the world thus far,” Naam summarizes. “And the cattle needed to fertilize that land would produce far more greenhouse gasses, in the form of methane, than all of agriculture does today.”
Organic activists led by the OCA, OTA and foodies may believe they are on the side of the angels, but their campaigns are often ignorant of science, selfishly focused on the desires of the affluent, and ultimately destructive. Their policies also drive up costs to many consumers who cannot afford the price premiums charged by the organic industry for what is largely, in scientific terms, a ‘feel good’ purchase. Anything that creates doubt or concern about foods that may have genetically modified ingredients in them, or are labeled “May Contain Genetically Engineered Ingredients,” is doing the organic industry’s marketing work for it and driving confused consumers to higher-priced products. Even worse, their efforts, if emulated in the developing world, could seriously damage world food security, resulting in an increase in malnutrition and even premature deaths.
Follow Jon on Twitter
More on agricultural and human biotechnology issues at the Genetic Literacy Project
Jon Entine, executive director of the GLP, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
57de0708a642382a08320ad17d05ec04 | https://www.forbes.com/sites/jonentine/2013/05/30/angelina-jolies-breast-cancer-stirs-debate-over-mandatory-screening/ | Angelina Jolie's Breast Cancer Stirs Debate Over Mandatory Screening | Angelina Jolie's Breast Cancer Stirs Debate Over Mandatory Screening
According to the Genetic Literacy Project, the publicity surrounding the decision by Angelina Jolie to undergo a double mastectomy has raised a disconcerting question—could genetic testing actually be harmful to your health?
A genetic screening test determined that Jolie carried a genetic mutation that elevated her chances of developing breast or ovarian cancer. She has one of three mutations, specifically BRCA1, linked to ancient Jewish communities. I can relate: My two sisters and I all carry one of these genetic mistakes (in our case, it’s BRCA2). I face a higher likelihood of contracting male breast cancer, as well as ocular and prostate cancers. Many of my family members, including my mother, developed breast or ovarian cancers. My mother died as a consequence of carrying this mutation. My young, female family members worry whether they should have their breasts and ovaries removed as a precaution.
It’s estimated that one in forty-three Jews (about 2.3%) carry one of these three genetic faults. Because humans move around and fool around, the BRCA mutations are also found in non-Jews like Jolie. It’s estimated that overall, one in nine women will develop breast cancer in their lifetime—although only a fraction of those cases can be definitively linked to a specific mutation like BRCA1 or BRCA2.
The Jolie revelation has sparked a welcome public discussion about the benefits of testing. But it’s also raised questions about the need for counseling that often accompanies genetic screening—and calls by some to make counseling mandatory, regardless of cost or effectiveness.
The costs of mandated genetic counseling
I found out I was a potential carrier for one of the three “Jewish” breast cancer mutations in 2001, when I received a terrifying call from my oldest sister: she had been diagnosed with breast cancer. Thankfully, she beat the cancer, but the issue of genetic screening—its costs and implications—took on personal significance.
After my sister’s diagnosis, I went in for my own screen. Even though the test costs just a few dollars, the elaborate process—blood test, counseling, follow-up consultations—cost thousands. That’s because scientists at Myriad Genetics had isolated and patented the two gene sequences used to diagnose these most common forms of breast and ovarian cancer. I couldn’t be screened for these mutations without agreeing to this diagnosis and counseling regimen.
Luckily, because of my family history in battling this disease, my insurance company paid for most of the cost of the Myriad tests. But my out-of-pocket expenses still ran into the hundreds of dollars. Most other people in similar circumstances are not as fortunate. Many people don’t have insurance, and many others who do are denied reimbursement unless they have a family history of breast cancer.
Myriad is at the center of a debate over whether companies should be allowed to patent human genes. The Supreme Court is expected to rule any day now on a challenge to Myriad’s patents; the justices heard oral arguments in April. It is a complicated and contentious case, but not one I’m addressing here. My focus is on the test itself: should it be readily available to a wider population? And if so, should those who take it be required to undergo expensive counseling that in my case drove the cost from a few dollars to a few thousand dollars?
The first question is easily answerable. In the past few years, a number of companies have developed inexpensive screening tests for prospective parents. These prenatal screens can determine the likelihood of their offspring developing so-called Mendelian disorders—diseases (like those linked to BRCA1 and BRCA2) caused by mutations in single genes. State and federal agencies and insurance companies are gradually adopting these tests; after all, an investment of a few hundred dollars on a test could prevent a disease that could cost hundreds of thousands of dollars down the road. There is some resistance to these tests, just like amniocentesis faced years ago; but the medical establishment and most people will inevitably embrace them.
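To see why payers find that arithmetic compelling, here is a minimal back-of-envelope sketch in Python. Every figure in it is an assumed illustration rather than data from any study or company, and it optimistically assumes that screening plus preventive care averts the downstream cost entirely:

# All figures below are hypothetical, for illustration only.
test_cost = 300.0            # per-person cost of an inexpensive screen
carrier_rate = 0.025         # e.g., roughly 1-in-40 in some populations
risk_if_carrier = 0.50       # chance a carrier develops the disease
downstream_cost = 250_000.0  # treatment cost if the disease develops

# Expected downstream cost avoided per person screened, assuming
# prevention is fully effective for identified carriers:
expected_saving = carrier_rate * risk_if_carrier * downstream_cost
print(expected_saving)              # 3125.0
print(expected_saving > test_cost)  # True: the screen pays for itself

Real-world cost-effectiveness analyses are far more elaborate, but the direction of this arithmetic is why insurers and state agencies are gradually adopting the tests.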
Assuming these tests become widespread, will the government step in and require that inexpensive screens be linked to mandated counseling—which can sometimes cost 10 times more than the test itself? This is already the case for those who seek out a test for BRCA1 or BRCA2. Myriad requires counseling, with the fees split between the testing company and the clinic or hospital that administers the test. Those who can’t afford the mandatory counseling are out of luck. Barring a hardship pass from Myriad, they will have to choose between their economic well-being and their family’s health.
Direct-to-consumer tests provide a valuable service for a low price
A few years after my positive BRCA test, I faced that very dilemma. I had other family members under my care who needed a test, but I temporarily had no insurance coverage, and I could not afford the $3000+ price tag for the test and counseling. It was an excruciating time. Luckily, in 2010, a US District Court temporarily invalidated Myriad’s patent rights. For two years, until that ruling was reversed (setting the stage for the current Supreme Court decision), other companies jumped in to offer the test at drastically reduced rates.
I had a family member screened by the personal genomics company 23andMe as a free add-on to its then-$99 broad-based genetic test, which covers more than 240 health conditions and also reports on ancestry. They could test for BRCA mutations as a “freebie” because the test—now done accurately by swabbing the inside of one’s mouth—costs almost nothing and they did not require extensive and expensive genetic counseling.
Are counseling sessions necessary? Not always. In my case, the answer was “no.” I was well versed on the test and its consequences and found the counseling worthless and time-consuming. However, for many people unfamiliar with the disease and how to interpret a genetic test, counseling could be very useful.
The sticking point here is choice. Should everyone be required to undergo counseling as part of screenings, as is now the case with the Myriad tests? If such a requirement became widespread, it could economically damage or force the shutdown of entrepreneurial genetic screening companies that are offering a wide variety of tests at very low cost to the public. The medical establishment is lobbying hard to make counseling mandatory, which could end the era of inexpensive direct access tests.
This developing drama played out in a recent article by Cheryl Platzman Weinstock in Oprah, reposted on Huffington Post last week. The original title, in the magazine, is inflammatory: “When Genetic Testing Can Be Dangerous to Your Health.” It’s in part a powerful personal story about the consequences of DNA screening misdiagnoses, but it’s positioned as an argument for mandatory counseling. It opens with the anecdote of one woman whose test for breast cancer came back positive. She had a mastectomy, only later to find out that the gene variant she had was of “uncertain significance,” and in fact the surgery had not been necessary.
The article is well meaning—after all no one should assume that any diagnosis is definitive, and second opinions are always suggested when major surgery is in the offing. But there were some gaping holes in the story. Readers were not told of the details of the original diagnosis or why the original test was deficient. “Acosta’s ordeal highlights why it’s so important that clinicians be adequately schooled in genetics before they offer testing to patients,” Weinstock writes. “Until the majority of doctors catch up with the science, meeting with a genetic counselor or a clinician with training in the field is your safest option.”
Misguided establishment?
Weinstock’s recommendations—let’s have more and better genetic counseling—are obviously wise. Let’s educate more clinicians and provide genetic counselors, when appropriate. But should we require counseling, which many doctors now argue for?
Large biotech companies, such as Myriad and Genentech, that offer proprietary tests through physicians at far higher prices than those offered by startups, are trying to lock in the current system that by and large requires counseling, regardless of cost. They’ve filed “citizen’s petitions,” which encourage the Food and Drug Administration to regulate smaller competitors out of the market. They are joined in this effort by the American Medical Association, which has urged the FDA to mandate that a physician register a billing event every time patients want to view their own genomic profile—a backdoor way to require counseling. Why the counseling requirement? The AMA and a few members of Congress have suggested that consumers might, in the words of one congressman, “jump off a building” after finding out that they might have a genetic predisposition to a disease. Supposedly, a counselor would preclude such spontaneous meltdowns.
Despite numerous studies debunking the notion that individuals have severe negative psychological reactions to their own genetic information—including a large peer-reviewed report in the New England Journal of Medicine—the AMA holds by its belief that citizens cannot be trusted with direct access to their own DNA data. In taking this stance, the AMA is out of sync with the younger, more technologically savvy physicians of the Health 2.0 movement who believe that one way to keep health costs down is to empower patients by giving them access to their own medical records, including their own genome.
Personal genomics is revelatory and scary—and potentially of great personal and medical importance. Let’s hope that we don’t overreact to the powerful tools now becoming available by setting restrictions on their availability. Genetic counseling should be a choice—not an expensive mandate that will deny many consumers access to critical knowledge about their health and history.
More on genetics and science literacy at the Genetic Literacy Project
Follow Jon on Twitter
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
d1d53e505951e971e89e53c752005054 | https://www.forbes.com/sites/jonentine/2013/08/22/will-washington-politics-kill-the-us-energy-revival-and-shale-gas-revolution/ | Will Washington Politics Kill The US Energy Revival And Shale Gas Revolution? | Will Washington Politics Kill The US Energy Revival And Shale Gas Revolution?
New Jersey is emerging as a surprise new battlefield in the debate over shale gas and fracking.
Although there is no gas yet being mined in the state, it’s been one of the major beneficiaries from the economic revival that has rippled across the country.
New Jersey is home to the world’s largest industrial gas company, Linde, which little more than a decade ago was facing a bleak future. It supplies carbon dioxide and nitrogen to companies that are developing shale through waterless hydraulic fracturing. Since the fracking technique was perfected, Linde has added hundreds of new jobs and now employs more than 1,000 people. This is just one story among many around the country, as once moribund industrial manufacturing, petrochemical and steel companies have experienced a business resurgence. That’s all happened under the radar—one of the many unexpected benefits as the combination of fracking and horizontal drilling has freed up formerly untapped gas deposits.
Linde’s success—and a surprise finding that New Jersey may have reserves of its own—has suddenly brought the fracking controversy front and center in the state. New Jersey had a ban on fracking, instituted years ago, but it expired in January. There was no push to reenact it because there was no gas to be mined in the state; or at least that’s what was thought. It turns out that a shale gas formation extends from Trenton to the northern reaches of the state—enough, experts now say, to supply New Jersey households with five years of energy.
The Newark Formation as it’s known is relatively small compared to the vast Marcellus reserves in neighboring Pennsylvania and New York. But its discovery raises the possibility of a new flash point in the ongoing ‘war over fracking’. Opponents are pulling out all stops, deriding the economic gains and hyping the alleged dangers even as new independent studies suggest that fracking, while not without environmental challenges, is no more problematic than traditional mining, and its record is improving dramatically.
The economic benefits from the increased supply of shale gas in the Northeast are tangible and growing. Home and industrial energy costs are at an all-time low. But while Pennsylvania has embraced its reserves, adding an estimated 250,000 shale-related jobs in recent years, New York is entering its sixth year of a fracking moratorium. Although the science community has urged that the moratorium be lifted, Governor Cuomo now finds himself trying to deal with a radioactive issue driven by dedicated ideologues. With the 2016 presidential election in his sights, the Governor now says he will make a ‘final’ determination by the 2014 election—the latest dubious promise after a string of missed deadlines.
The debate over shale gas has intensified in recent weeks in the wake of the release of activist filmmaker Josh Fox’s latest anti-shale gas ‘docu-prop,’ Gasland II. Like the original Gasland film, it revolves around iconic images of homeowners setting their hydrocarbon-tainted tapwater ablaze or otherwise getting sick from it—brazenly implying that the contamination is caused by methane and other chemicals leaked as the result of hydraulic fracturing.
What Fox does not tell you is that methane leaks naturally at the locations where he filmed. Pictures of flaming faucets and springs caused by leaking methane have been around for decades, well before fracking arrived on the scene—one of dozens of factual missteps in Fox’s films. In fact, as NPR has reported, the Pennsylvania Department of Environmental Protection explicitly investigated and rejected Fox’s allegation, reprised in Gasland II, that flaming water in Dimock, Pennsylvania was the result of fracked wells.
Among his other claims, Fox contends, erroneously, that the oil and gas industry is exempt from the federal clean air and clean water acts (the so-called Halliburton Loophole, a charge found to be fallacious). Many of Fox’s more outlandish allegations are addressed in FrackNation, a documentary directed by Phelim McAleer, who raised money for the film through the crowdfunding site Kickstarter.
“Flammable water is a great story,” McAleer has said. “Flammable water caused by an evil oil company—an even better story. But when you examine it, it’s just not true. I think there are journalists that are ideologically inclined to disbelieving everything an oil company says. So mix in the desire to tell a great story with the desire to believe environmentalists always tell the truth and journalism has not come out well in the fracking movement.”
DOE finds fracking innocent
Within days of the July airing of Gasland II on HBO, the Department of Energy released a landmark federal study on hydraulic fracturing that eviscerates a central premise of Fox’s movie and the anti-fracking movement. In the first independent assessment of whether shale gas drilling poses a toxic threat to groundwater, DOE researchers monitored wells in western Pennsylvania for a full year. They tagged fracking chemicals with unique markers and found that none migrated from gas bores or man-made fractures into water supplies.
“This is good news,” said Duke University scientist Robert Jackson, who was not involved with the study. Aquifers are usually found at depths of less than 500 feet. The researchers found no evidence of fracking fluids at 5,000 feet or less, indicating the fracking process comes with a huge safety cushion, at least in Pennsylvania.
Anti-shale gas campaigners have built their case around allegations that the mix of chemicals used in the fracking process—almost all water (90 percent) and sand (9.5 percent)—is a toxic time bomb ready to blow and pollute water supplies across the nation. Jackson, respected for his independence, has overseen numerous studies at fracking sites around the country, and has yet to find any evidence of contamination by fracking fluids.
The study also disposed of another oft-expressed fear pushed by Fox and radical environmental justice groups: seismic monitoring determined that fractures subject to earthquakes got nowhere near aquifers or the surface, as they had claimed was likely.
While such reassuring findings are unlikely to quell protests, more responsible environmentalists, such as Scott Anderson, a drilling expert with the Environmental Defense Fund, found the study reassuring. “Very few people think that fracking at significant depths routinely leads to water contamination,” Anderson told the Associated Press.
In the wake of the report’s release, and in a rebuff of protestors, Energy Secretary Ernest Moniz reaffirmed the Obama Administration’s long-stated position that hydraulic fracturing is safe as practiced.
“I still have not seen any evidence of fracking per se contaminating ground water,” Moniz told reporters at a breakfast briefing. He reaffirmed the White House position that natural gas offers a “bridge to a low carbon future,” as it releases about one-third to one-half as much carbon dioxide as other fossil fuels.
The latest study raises questions about why the Environmental Protection Agency has been involved in testing that is more effective when conducted by state authorities, who are far more familiar with the geological characteristics of the formations in their states. Over the past 15 months, the EPA has:
Closed an investigation into groundwater pollution in Dimock, Pa., saying the level of contamination was below federal safety triggers;
Abandoned its assertion that a driller in Parker County, Texas was responsible for methane gas bubbling up in residents’ faucets;
Terminated its Pavillion, Wyoming, fracking pollution investigation, turning future monitoring over to the state;
Sharply revised downward a 2010 estimate showing that leaking gas from wells and pipelines was contributing to climate change, crediting better pollution controls by industry.
Administration plans to regulate fracking in flux
The string of EPA missteps—the agency has consistently overhyped the danger of shale gas extraction, only to retreat as more solid evidence emerged—raises questions about whether the federal government should assume more oversight of fracking nationwide—the position pushed by Fox and activists, and possibly under consideration by the administration.
The EPA has said it will release a study next year on fracking and groundwater, raising speculation that the government will use it as a pretext for imposing national guidelines. It’s unclear how EPA’s decisions to abandon fracking and groundwater investigations in Wyoming, Texas and Pennsylvania will weigh on the agency’s broader probe.
The industry has long contended, and recent EPA blunders seem to support, that states are in a far better position than the federal government to oversee regionally idiosyncratic fracking operations.
“We should be encouraging production… not stifling it,” said Representative Doc Hastings, a Washington Republican and chairman of the House Natural Resources Committee in hearings last month on one of several bills aimed at ensuring that oversight of oil and gas fracking would be left to the states.
This particular bill—Protecting States’ Rights to Promote American Energy Security Act (H.R. 2728)—and others are likely to come up for a full House vote when Congress returns from its recess in September.
Regardless of the vote, the Senate is unlikely to follow the House lead on this, which may leave resolution in the hands of administration officials. Signals are mixed. Moniz told Platt’s Energy Week in June that “in the end there has to be a very, very strong state role there” to oversee the shale and oil gas boom.
The wild card may be the Interior’s Bureau of Land Management. When she assumed her new post last spring, Interior Secretary Sally Jewell indicated that she had no desire to press for additional federal oversight of fracking. “One thing that’s clear to me from my own experiences is that one size doesn’t fit all,” Jewell said.
But shortly thereafter, the Interior rolled out an ambitious new plan to expand fracking oversight on federal lands. The Administration would require companies to more fully disclose chemicals used in drilling, have a water-management plan for fluids that flow back to the surface and take steps to assure wellbore integrity and prevent toxic fluids from leaking into groundwater.
Each of these issues is addressed by state regulations, which are targeted to the unique geology of individual formations, leaving in doubt whether this is a step forward or backward. Meanwhile, environmental groups blasted Interior’s proposal for being weak and supposedly leaving too much control in state hands.
Senator Ron Wyden (D-Or), the chairman of the Energy and Natural Resources Committee, and an occasional critic of the oil and gas industry, has begun floating a proposal that would put the federal government’s nose under the regulation tent. In a recent speech at a Bipartisan Policy Center event, he suggested maintaining states’ jurisdiction over “below-ground” gas and oil production while giving the federal government more of a role overseeing “above-ground” activity and creating uniform rules on spill reporting and chemical-disclosure requirements.
According to industry officials—and many independent experts as well—state laws have evolved over the years to respond to the unique legal structure and doctrines, environmental conditions, geology, topography, climate and community sensitivities specific to each state. In many cases, in response to local conditions, state regs are actually tighter than proposed federal rules.
In the end, as Bryan Walsh at Time has noted, the issue boils down to “trust.” Can the public rely on the states and the federal government to properly oversee a technology about which many remain suspicious?
But trust goes both ways. Considering how politicized this issue has become, can the public be sure that Washington will resist the pressures and environmental interest group lobbying that often drives policy into an ideological ditch? If not, the shale gas revolution and the energy and economic boom that it has sparked could easily be derailed.
More on science literacy at the Genetic Literacy Project
Follow Jon on Twitter
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
30eb2ca9d5d67b13c5643a8206805233 | https://www.forbes.com/sites/jonentine/2013/09/18/university-of-texas-environmental-defense-fund-shale-gas-study-unmasks-politics-of-anti-fracking-activist-cornell-scientists/ | University Of Texas-Environmental Defense Fund Shale Gas Study Unmasks Politics Of Anti-Fracking Activist Cornell Scientists | University Of Texas-Environmental Defense Fund Shale Gas Study Unmasks Politics Of Anti-Fracking Activist Cornell Scientists
One of the central tenets of anti-shale gas activists—claims that carbon pollution from methane leaked during the hydraulic fracturing extraction process makes natural gas more polluting than coal—took another, likely fatal, hit this week.
A University of Texas-Austin study released Monday found that at new wells being prepared for production, a process known as completion, emissions-control equipment captured 99% of the escaping methane—leaving emissions on average 97% lower than estimates released in 2011 by the Environmental Protection Agency. It is the most comprehensive shale gas emissions study ever undertaken on methane leakage, covering 190 well pads around the United States. Methane is a potent greenhouse gas, so leaks could theoretically wipe out the documented climate benefits, with respect to reduced carbon emissions, of natural gas, a comparatively clean fossil fuel.
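To see why the leak rate is the pivotal variable in this debate, here is a minimal sketch in Python; the constants are assumed round numbers for illustration, not measurements from the UT study:

# Assumed illustrative constants, not figures from the study.
GWP_CH4_20YR = 86.0    # approx. 20-year global-warming potential of methane
CO2_PER_KG_CH4 = 2.75  # kg of CO2 from burning 1 kg of methane (44/16)

def co2e_per_kg_gas(leak_rate):
    # The leaked fraction counts at methane's global-warming potential;
    # the remainder is burned and counts as ordinary CO2.
    return leak_rate * GWP_CH4_20YR + (1.0 - leak_rate) * CO2_PER_KG_CH4

print(co2e_per_kg_gas(0.005))  # ~3.2 kg CO2e at a 0.5% leak rate
print(co2e_per_kg_gas(0.075))  # ~9.0 kg CO2e at the 7-8% rates critics allege

Because leaked methane counts at dozens of times the warming impact of CO2, small shifts in the measured leak rate swing the climate accounting dramatically, which is why direct field measurements matter so much.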
Energy experts and environmentalists celebrated the finding that almost all the escaping methane could be captured by state of the art equipment. “Can we control it? Thanks to new EPA regulations coming online, the answer to that is good news,” Eric Pooley, a senior vice president at the Environmental Defense Fund, told the New York Times.
“We were surprised at that finding, yes,” I was told by Steven Hamburg, chief scientist for the EDF, who coordinated the study, one of 16 studies EDF is overseeing and expects to be released over the next 15 months.
The findings were immediately criticized—trashed is a more accurate word—by Robert Howarth and Anthony Ingraffea, two Cornell University scientists whose study released two years ago claimed catastrophic levels of methane were being leaked by fracking operations. Howarth also has claimed that fracking could push the world over a tipping point, sending temperatures irreversibly higher. The once-obscure professor immediately became the go-to expert for anti-fracking journalists and lawmakers, even though a slew of experts discredited his research.
The polluting impact of shale gas revolves around one key issue: how much methane gas is released during extraction and across the supply chain. Methane has far more short-term global-warming impact than carbon dioxide. Howarth emerged from academic nowhere when he claimed shale-gas wells leak like sieves, venting methane half the time and spewing 7 percent to 8 percent of reserves into the atmosphere.
“That’s absurd,” said Michael Levi, director of the Program on Energy Security and Climate Change at the Council of Foreign Relations, when Howarth came out with his projections in early 2011. “Most methane gas is either ‘delivered to sales’ with no leakage or it’s burnt off through flaring, which diminishes its greenhouse impact.”
As renowned Cornell geologist Lawrence Cathles convincingly argued, Howarth appeared to have deliberately used 2007 data in his study, a century ago by shale gas technology standards, which bumped his estimates by 10-20 times—at least. The US Energy Department, the University of Maryland, the Massachusetts Institute of Technology, a Sierra Club-backed Carnegie Mellon University study and the Worldwatch Institute each reviewed the methane leakage issue and rejected Howarth’s findings as vastly inflated.
The UT-EDF peer-reviewed report, which went through additional vetting far beyond a typical study, was published Monday in The Proceedings of the National Academy of Sciences, one of the world’s most prestigious science journals. On Monday, Howarth’s co-author Anthony Ingraffea praised the study, calling it a “useful start” to answering the question of how methane might be emitted. The UT scientists also found that emissions from pneumatic devices at well sites represented a sizable percentage of leaks and were at least 70% higher than EPA’s estimates. “In total,” Hamburg said, “the UT study found a leak rate equal to the EPA’s most recent”—which is far lower than Howarth’s guesstimate.
But by Tuesday, Ingraffea had reversed course, joining two groups of anti-shale gas scientists in issuing a scathing press release: “Fracking Methane Leakage Study Financed by Gas Industry with Partner, EDF, Deeply Flawed”. Anti-fracking activist websites like Desmogblog took their cue from the release and launched an offensive, trying to frame EDF as shilling for the industry.
Hamburg defended the study as robust and state of the art. “It was totally independent,” he told me. Nine petroleum companies provided access to their sites, he said, but had no involvement in the data collection or analysis.
“The study team requested of specific companies a list of all completions being done within a specific time frame—not too far in the future—for a specific geography,” he said, outlining the study process. “They selected the completion they would measure; once on site they requested a list of all wells within a specific distance; they then selected from the list what wells they measured. All the companies have positively stated they gave the team all sites that met the study team criteria. The sample was an unbiased sample of the wells of the nine participating companies, which collectively drilled roughly half of all natural gas wells in 2011.” Those figures rebut the claim, made in the Howarth-Ingraffea camp’s orchestrated PR responses, that UT cherry-picked a “very small sample size.”
And unlike the Howarth study, which just reviewed EPA data and relied on estimates and hypotheticals, “the researchers actually went to the sites. What makes the study unique is that we were able to get detailed measurements. Flyovers are a critical tool that we are also deploying, but they are not a substitute for getting data at the source of well completions and makeovers if our goal is to understand where leaks are occurring and mitigate emissions.”
The insinuation that EDF is corrupt and in bed with industry, while Howarth and Ingraffea are independent researchers, is belied by the facts. Some of the anti-fracking research at Cornell, including Howarth’s modest burst of now discredited scholarship, is possible because of the generous support of the Park family of Ithaca, through its well-endowed trust, the Park Foundation. [NOTE: Professor Howarth contends that "most of my research in this area has been funded by internal funds at Cornell, and the research has not been dependent on the Park money."] The foundation funded the totemic movies of the anti-shale gas movement, Gasland and Gasland II, the cinematically engaging but scientifically questionable Josh Fox documentaries aired on HBO. All told, it’s poured millions of dollars into anti-fracking ventures in recent years.
It’s more than likely Park money is funding organizations behind the coordinated response campaign to the Texas study and the attempt to smear the Environmental Defense Fund. Howarth has established money ties to Park. Two years ago in an interview for an investigative story on Park and Howarth for Ethical Corporation, the Cornell professor blurted out to me that he was recruited by a Park Foundation family member who thought a university study criticizing fracking and challenging the ‘green credentials’ of shale gas would advance the cause. [NOTE: Professor Howarth denies having made such comments. "I was not recruited by a Park Foundation family member, and have only once in my life ever spoken to a Park family member," he contends.]
At that time, Howarth and his wife, Roxanne Marino, a biochemist at Cornell and partner at his lab, were well-known long-time environmental activists and outspoken opponents of developing shale gas reserves. He was given $35,000 of Park’s anti-fracking money—before beginning his research. [NOTE: Professor Howarth claims he "started the research 12 months before taking any funding from the Park Foundation."] The transaction has at least the appearance that Howarth had a preconceived conclusion and that he may have cooked the speculative data that he chose to use to conform with his anti-fracking ideology.
“I’m a scientist and I let the data do the talking,” EDF’s Hamburg told me. “This was a first class study. We have 90 collaborators around the country contributing to the 16 studies that have been undertaken. I am confident that the results of this first UT study will be born out.”
[EDITOR'S NOTE: This post was updated on November 5, 2013 to add statements from Mr. Howarth.]
More on genetics and science literacy at the Genetic Literacy Project
Follow Jon on Twitter
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
6318f06eb8be1613d30786242289fd1a | https://www.forbes.com/sites/jonentine/2014/04/30/infographic-on-4-ways-to-breed-crops-by-scrambling-genes-youll-be-surprised-which-ones-are-regulated/ | Infographic On 4 Ways To Breed Crops By Scrambling Genes -- You'll Be Surprised Which Ones Are Regulated | Infographic On 4 Ways To Breed Crops By Scrambling Genes -- You'll Be Surprised Which Ones Are Regulated
Why do scientists say that genetic engineering of crops is just the "latest chapter" in 10,000 years of high-tech agriculture? Or that genetic engineering is just a more precise way to breed plants compared to conventional breeding? Plants swap genes even without the help of human beings when they reproduce sexually, and our ancestors guided the process to develop crops suitable for agriculture. This Genetic Literacy Project infographic created by the GLP's agriculture editor XiaoZhi Lim presents four main ways in which crops have been genetically modified by humans: traditional breeding, mutagenesis, RNA interference and transgenics.
(Click to view high resolution image)
Traditional breeding of crops has existed since the beginning of human civilization. Today, it encompasses a whole range of techniques, including high-tech ones like marker-assisted breeding. In traditional breeding, lots of genes are swapped at once, a process that can be “messy,” as described by Cornell plant breeder Margaret Smith. While breeders have been able to cross plants with their wild relatives (called a wide cross) to produce hybrids, the possibilities of using genes from distantly-related or other species are limited.
In the 1920s and 1930s, scientists explored the effect of radiation on a wide variety of plants. They found that applications of radiation produced mutations in plant genomes, creating plants that were different from the original. The Rio Star grapefruit was developed when Texas scientist Richard Hensz irradiated the Rio Red grapefruit, which had itself been created in a laboratory using x-rays after years of experimentation. The new grapefruit had darker flesh and greater resistance to cold, which helped it survive a severe freeze in 1983 that killed other grapefruit trees. Mutagenically created Rios (Ruby Reds) can be sold as organic even though the process scrambles hundreds or thousands of genes and is not tested for allergenicity or the creation of novel proteins, as are GM crops. Since the 1940s, thousands of other crops have been produced with mutagenesis.
The National Academy of Sciences agrees that there is no justification for distinguishing between crops created through mutagenesis vs. genetic engineering, writing, “regulating genetically modified crops while giving a pass to products of mutation breeding isn’t scientifically justified.”
As molecular techniques in biology became available around the 1970s, scientists began to look at ways to alter genes in plants more precisely. They first focused on genetically engineering new traits into crops, from expressing natural pesticides within the crop, to adding vitamins, to protecting it against deadly viruses—techniques popularly known as creating GMOs. RNA interference techniques allow scientists to precisely switch off genes coding for undesired traits, while recombinant DNA techniques allow them to precisely insert genes coding for desired traits. Beyond allowing more precision in genetic modification, these molecular techniques also open up the possibility of using genes from other species.
More on genetics and science literacy at the Genetic Literacy Project
Follow Jon on Twitter
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
5a5a504d404cfde51882a5ad4ff3b890 | https://www.forbes.com/sites/jonentine/2014/06/24/profile-of-gilles-eric-seralini-author-of-republished-retracted-gmo-corn-rat-study/ | Profile of Gilles-Éric Séralini, Author Of Republished Retracted GMO Corn Rat Study | Profile of Gilles-Éric Séralini, Author Of Republished Retracted GMO Corn Rat Study
Gilles-Éric Séralini has republished his retracted study of herbicide-resistant GMO maize and glyphosate in an obscure European open-access journal. The Genetic Literacy Project's Jon Entine offers a detailed factual profile of the embattled French molecular biologist (along with a compilation of reactions from scientists from around the world).
Gilles-Éric Séralini (born August 23, 1960 in Annaba, Algeria, then known as Bône) is a French scientist who has been a professor of molecular biology at the University of Caen since 1991. He is best known for publishing research concluding that genetically modified food is unsafe for human consumption. He is president and chairman of the board of CRIIGEN (Committee of Independent Research and Information on Genetic Engineering). He has published multiple studies alleging health risks associated with plant biotechnology which have been called flawed and biased by various regulatory and academic groups.
Séralini Career
Professor of Molecular Biology at the University of Caen, Laboratory of Biochemistry and Molecular Biology, I.B.F.A., Esplanade de la Paix, 14032 Caen Cedex, France (email: gilles-eric.seralini arobase unicaen.fr).

Séralini studied in Nice and became a Doctor in biochemistry and molecular biology at the University of Montpellier in 1987. He then left for North America to carry out fundamental research for four years, at the University of Western Ontario and Laval University Medical Center, working on corticosteroid-binding globulin. Qualified to supervise research, he passed, at the age of 30, the French national competitive exam for University Professors.
Séralini chose to focus on the interface of cancer research and endocrinology at the University of Caen, where he was appointed professor in June 1991, a position he has held ever since. He has written about 100 scientific articles and conference papers for international specialist symposiums and has delivered a number of lectures with nationwide impact. He serves on several commissions of the University of Caen, where he leads a research team associated with CNRS (French National Centre for Scientific Research) and INRA.
Research at CRIIGEN (Committee of Independent Research and Information on Genetic Engineering)
Under the auspices of CRIIGEN, Séralini has published multiple studies claiming health risks from GMOs and the glyphosate-based herbicide Roundup, based on in vitro experiments with human cells and the enzyme aromatase, as well as rat testicular cells. His in vitro research has concluded that Roundup (the formulation with adjuvants, not just glyphosate) is toxic to cells in a dish, and that it is an endocrine disruptor. In 2013, the Séralini lab published a study in the Journal of Applied Toxicology that examined the effects of Cry1Ab and Cry1Ac insecticidal Bt toxins, alone and in conjunction with Roundup, on HEK cells.
In his most controversial research, in 2012, Séralini et al published a study in the journal Food and Chemical Toxicology (Volume 50, Issue 11, November 2012, pages 4221-4231) titled "Long term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize". Here is the original abstract of the Food and Chemical Toxicology paper.
This study informed the banning of genetically modified foods by the Kenyan government in November 2012[2]. On November 28, 2013, however, the journal[3] retracted the article due to strong criticism from the scientific community about the way the study was conducted. The editor, A. Wallace Hayes, wrote that he retracted the paper because it was "inconclusive,” claiming that this was consistent with Committee on Publication Ethics (COPE) guidelines, although others disagreed.
On June 24, 2014, the retracted study, in expanded form, this time including the data, was republished with the title "Republished study: long-term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize" in an obscure open-access journal, Environmental Sciences Europe—where Seralini has published before. The journal, part of SpringerOpen, is too young to have an official Impact Factor (IF). Using the same calculation, however, the journal would have an IF of .55. That would place it about 190th out of the 210 journals in the “environmental sciences” category at Thomson Scientific. (For comparison, Food and Chemical Toxicology has an IF of just above 3, and a ranking of 27th.)
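For readers unfamiliar with the metric, a minimal sketch of the standard two-year Impact Factor calculation follows; the 55-citations-to-100-items figures are hypothetical, chosen only so the output mirrors the .55 estimate above:

def impact_factor(citations_this_year, citable_items_prev_two_years):
    # Two-year IF for year Y: citations received in year Y by items
    # published in years Y-1 and Y-2, divided by the number of
    # citable items published in those two years.
    return citations_this_year / citable_items_prev_two_years

print(impact_factor(55, 100))  # 0.55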
This study is almost identical to the prior study, with some minor but important differences. Séralini claimed in a press release that the republished study was peer reviewed, but that is not accurate, according to statements the publishing journal's editor made to Nature magazine. “We were Springer Publishing’s first open access journal on the environment, and are a platform for discussion on science and regulation at a European and regional level.” ESEU conducted no scientific peer review, said editor Henner Hollert, “because this had already been conducted by Food and Chemical Toxicology, and had concluded there had been no fraud nor misrepresentation.” The role of the three reviewers hired by ESEU was to check that there had been no change in the scientific content of the paper, Hollert added.
As before, the study claimed that rats fed a diet containing NK603—a seed variety made tolerant to the spraying of glyphosate (Monsanto's Roundup herbicide)—died earlier than those on a standard diet. The Séralini team reported that 50 percent of males and 70 percent of females died prematurely, compared with only 30 percent and 20 percent in the control group. The number of rats used in the study was too small to draw statistically meaningful conclusions. The study team also selected a breed of rat for the experiments in which 80 percent routinely develop cancers, further obscuring the results. Some of the rats fed GM corn outlived the control group, further confusing the picture. The newly released study, like the first version, did not include any pictures of the control rats. Critical scientists say that is most likely because the type of rat used is tumor-prone and would almost certainly show numerous tumors after two years of life; including pictures of control rats with tumors would further undermine Séralini's claims that the cancer was caused by the corn or glyphosate.
In 2014, Séralini et al. published a study in BioMed Research International claiming that pesticides were more toxic than regulatory bodies had previously thought. The study prompted Ralf Reski, one of the editors of the journal in which it was published, to resign. Reski said, "I do not want to be connected to a journal that provides [Séralini] a forum for such kind of agitation."
Séralini Affair
What became known as the Séralini Affair began in September 2012, and involved the publication of an experiment, conducted by a group led by Séralini, involving the feeding of Monsanto's Roundup-resistant NK603 maize (called corn in North America) and the herbicide Roundup to rats over the rats' two-year lifespan.
Séralini had required that journalists, in order to receive a copy of the paper prior to the press conference, sign a confidentiality agreement prohibiting them from contacting other researchers for comment before the press conference. During the press conference, Séralini also announced he was releasing a book and a documentary film on the research. The press conference received extensive coverage in the media.
In the paper and in the press conference, Séralini claimed that the results showed that Roundup-resistant maize and Roundup are toxic. The abstract indicates: "The health effects of a Roundup-tolerant genetically modified maize (from 11% in the diet), cultivated with or without Roundup, and Roundup alone (from 0.1 ppb in water), were studied 2 years in rats. In females, all treated groups died 2–3 times more than controls, and more rapidly. This difference was visible in 3 male groups fed GMOs. All results were hormone and sex dependent, and the pathological profiles were comparable." The study used 200 Sprague-Dawley rats, 100 male and 100 female, and divided them into twenty groups of 10 rats each; ten experimental conditions were tested on male rats and separately on female rats for two years.
Other long-term studies, which were publicly funded, have uncovered no health issues. The Japanese Department of Environmental Health and Toxicology released a 52-week feeding study of GM soybeans in 2007, finding "no apparent adverse effect in rats." In 2012, a team of scientists at the University of Nottingham School of Biosciences released a review of 12 long-term studies (up to two years) and 12 multi-generational studies (up to 5 generations) of GM foods in the same journal that published the Seralini paper, concluding there is no evidence of health hazards.
The release of the book and movie in conjunction with the scientific paper, and the requirement that journalists sign a confidentiality agreement, were also widely criticized.
Scientific evaluation
As summarized on Wikipedia, the study was widely criticized. The London-based Science Media Centre, which assists reporters when major science news breaks, posted an entire page of criticisms. Scientists claimed that Séralini's conclusions were impossible to justify given the experimental design—the small sample size, the length of the study, and the known high incidence of tumors in the strain of rats used.
The paper was also challenged by numerous food standards agencies. Many claimed that the conclusions were impossible to justify given the statistical power of the study. Sprague-Dawley rats have a lifespan of about two years and a high tendency to develop cancer over that lifespan (one study found that over eighty percent of males and over seventy percent of females got cancer under normal conditions). The Séralini experiment lasted the normal lifespan of these rats, and the longer the experiment ran, the more statistical "noise" there was – more rats get cancer naturally, regardless of what is done to them. For the experiment to have adequate statistical power, all the groups – control groups and test groups – would have needed at least 65 rats per group in order to sort out any experimentally caused cancers from cancers that would occur normally; the Séralini study had only ten per group.
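A minimal simulation in Python makes the point concrete. The 80 percent background tumor rate comes from the study cited above; the 95 percent treated-group rate is an assumed illustrative effect, not a claim from any paper:

import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

def power(n_per_group, p_control=0.80, p_treated=0.95, trials=2000):
    # Fraction of simulated experiments in which Fisher's exact test
    # detects the control-vs-treated difference at p < 0.05.
    hits = 0
    for _ in range(trials):
        c = rng.binomial(n_per_group, p_control)  # control-group tumors
        t = rng.binomial(n_per_group, p_treated)  # treated-group tumors
        _, p = fisher_exact([[t, n_per_group - t], [c, n_per_group - c]])
        hits += p < 0.05
    return hits / trials

print(power(10))  # only a few percent: a real effect is almost always missed
print(power(65))  # roughly 0.7-0.8, near the conventional power threshold

With ten animals per group, even a genuine jump from an 80 percent to a 95 percent tumor rate would rarely reach statistical significance, while chance differences of similar size arise easily; that is the "noise" critics describe.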
OECD (Organisation for Economic Cooperation and Development) guidelines recommend 20 rats for chemical-toxicity studies, and 50 rats for carcinogenicity studies. In addition, if the survival of the rats is less than 50% at 104 weeks (which is likely given the Sprague-Dawley rats used in the study), the recommended number of rats is 65.
Dr. Francis Nang'ayo of the African Agricultural and Technology Foundation[4] criticized the study for having used rats that were susceptible to cancer. "In science, the sample size for a study of such a magnitude should be at least 50 yet Seralini used only ten rats which to me greatly compromise the findings," added Mr. Nang'ayo.
King's College London Professor Tom Sanders wrote that since Sprague-Dawley rats are susceptible to mammary tumors when food intake is not restricted, data should have been provided about how much food the rats were fed (as well as the presence of fungus in the feed, another confounder). Sanders also wrote of this study, "The statistical methods are unconventional ... and it would appear the authors have gone on a statistical fishing trip."
The Washington Post quoted food activist and GMO critic Marion Nestle, the Paulette Goddard professor in the Department of Nutrition, Food Studies and Public Health at New York University: "'[I] can’t figure it out yet....It’s weirdly complicated and unclear on key issues: what the controls were fed, relative rates of tumors, why no dose relationship, what the mechanism might be. I can’t think of a biological reason why GMO corn should do this.....So even though I strongly support labeling, I’m skeptical of this study.'" University of Calgary Professor Maurice Moloney, among others, wondered why there were so many pictures in the study, and in sympathetic news reports about it, of treated rats with horrific tumors, but no pictures of the rats in the control group.
Many national food safety and regulatory agencies reviewed the paper and condemned it. The German Federal Institute for Risk Assessment VP Reiner Wittkowski said in a statement, "The study shows both shortcomings in study design and in the presentation of the collected data. This means that the conclusions drawn by the authors are not supported by the available data."
A joint report by three Canadian regulatory agencies also "identified significant shortcomings in the study design, implementation and reporting." Similar conclusions were reached by the French HCB and the National Agency for Food Safety, the Vlaams Instituut voor Biotechnologie, the Technical University of Denmark, Food Standards Australia New Zealand, the Brazilian National Technical Commission on Biosafety, and the European Food Safety Authority (EFSA). The conclusions of the EFSA evaluation were:
The study as reported by Séralini et al. was found to be inadequately designed, analysed and reported...The study as described by Séralini et al. does not allow giving weight to their results and conclusions as published. Conclusions cannot be drawn on the difference in tumour incidence between treatment groups on the basis of the design, the analysis and the results as reported. Taking into consideration Member States’ assessments and the authors’ answer to critics, EFSA finds that the study as reported by Séralini et al. is of insufficient scientific quality for safety assessments.
The European Federation of Biotechnology lobby, which counts Monsanto and other GM firms among its members, called for the paper to be retracted, describing its publication as a "dangerous failure of the peer-review system."
Six French national academies (of Agriculture, Medicine, Pharmacy, Science, Technology and Veterinarians) issued a joint statement – "an extremely rare event in French science" – condemning the study and the journal that published it. The joint statement dismissed the study as 'a scientific non-event'. The Food and Chemical Toxicology journal, an Elsevier imprint, has a full peer review process, and at least three scientists were needed to endorse the Seralini article prior to publication. The journal in question published a statement in their November 2012 issue that "the Editors have encouraged those people with concerns to write formally to the Editor-in-Chief, so that their views can be publicly aired."
In March 2013, the same journal that published the Seralini study published a letter from Erio Barale-Thomas, Principal Scientist of Johnson & Johnson Pharmaceutical Research and Development and the President of the Conseil d’Administration of The Société Française de Pathologie Toxicologique (SFPT, French Society of Toxicologic Pathology). SFPT is "a non governmental/non profit organization formed by veterinarians, physicians, pharmacists and biologists specialized in veterinary and toxicologic pathology." The letter criticized the Seralini study on several fronts, and concluded: "However, given this study presents serious deficiencies in the protocol, the procedures and the interpretation of the results, the SFPT cannot support any of the scientific claims drawn by the authors, and any relevance for human risk assessment. This letter presents the consensus scientific opinion of the Conseil d’Administration of the SFPT."
As a result of the publication of the Séralini paper, the Belgian Federal Minister of Public Health asked the Belgian Biosafety Advisory Council (BBAC) to evaluate the paper. The BBAC was asked to "inform the Minister whether this paper (i) contains new scientific information with regard to risks for human health of GM maize NK603 and (ii) whether this information triggers a revision of the current authorisation for commercialisation for food and feed use of this GM maize in the European Union (EU). Responding to the two point mandate, the BBAC committee, whose members are drawn from the Belgian biotech Professoriat, pointed out that "the long duration of this study is a positive aspect since most of the toxicity studies on GMOs are performed on shorter periods," and concluded:
"Given the shortcomings identified by the experts regarding the experimental design, the statistical analysis, the interpretation of the results, the redaction of the article and the presentation of the results, the Biosafety Advisory Council concludes that this study does not contain new scientifically relevant elements that may lead to reconsider immediately the current authorisation for food and feed use of GM maize NK603. Considering the issues raised by the study (i.e. long term assessment), the Biosafety Advisory Council proposes EFSA urgently to study in depth the relevance of the actual guidelines and procedures. It can find inspiration in the GRACE project to find useful information and new concerted ideas."
Support for Séralini paper
According to Wikipedia, Séralini defended the study design, the interpretation of the results, and manner and content of the publication. Support for the study came from ENSSER (European Network of Scientists for Social and Environmental Responsibility), of which CRIIGEN, the institute that Seralini founded and that funded the study, is a member. A study funded by and conducted in consultation with ENSSER also found that EFSA applied double standards. An open letter in support of Seralini's article, signed by about 300 scientists, doctors, scholars and activists, was published in Independent Science News, a project of the Bioscience Resource Project, both of which oppose GM crops.
The German research group Testbiotech, which opposes GMOs and which believes that regulators have been captured by the biotech industry, posted a report critical of the EFSA's reaction to the study as not applying the same standards to studies submitted by industry as it did to Seralini's study.
A statement opposing the controversy, and especially the attacks on Séralini, was published in the newspaper Le Monde and was signed by 140 French scientists; the letter said: "We are deeply shocked by the image of our community that this controversy gives citizens. Many of the threats to our planet have been revealed by isolated scientists and confirmed by many studies coming from the scientific community. In this case, it would be more efficient to implement research on the health and environmental risks of GMOs and pesticides, improve toxicological protocols used for placing on the market and finance a variety of researchers in this domain...."
Reaction in the media
The press conference led to wide coverage in the media, which "energized opponents of GM food, especially in Europe". Le Nouvel Observateur covered the press conference in a story called, "Yes, GMOs are poisons!".
As Jon Entine put it in Forbes, "Seralini's research is anomalous. Previous peer-reviewed rat feeding studies using the same products (NK603 and Roundup) have not found any negative food safety impacts."
Andrew Revkin dubbed it another instance of "single-study syndrome", and contended that the study was in support of an "agenda".
Henry I. Miller, writing for Forbes, said of the study that "the investigators have refused to release all the data from the experiment, which constitutes scientific misconduct." Séralini responded by saying that he won't make any data available to the EFSA and the BfR until the EFSA makes public all the data underpinning its 2003 approval of NK603 maize for human consumption and animal feed.
The Guardian's Environmental Blog stated that the study linking GM maize to cancer "must be taken seriously by regulators" and that although it "attracted a torrent of abuse", "it cannot be swept under the carpet". It noted that CRIIGEN funded the research, although it did not report that the funds came from organic interests and Greenpeace, which are vocal opponents of genetic modification, and reported Séralini's response: namely, that studies in support of GM food are usually funded by "corporates or by pro-biotech institutions".
GMO-Seralini
The Seralini research claims are officially promoted via a website run and managed by U.K. organic exporter, Sustainable Pulse publisher and anti-GMO activist Henry Rowlands.[5]
Advocacy
Funding
Séralini's research campaign (reported to cost more than 5 million euros) was funded in part with more than 3.2 million euros from French retail giants Auchan and Carrefour.[6] A million euros were also donated by the Fondation pour le progrès de l’homme (FPH - Foundation for the Progress of Humankind), a foundation with a reputation for generosity towards an assortment of anti-GMO groups. Séralini's work is also funded by the activist group Greenpeace.
Publicity for the release of his GMO rat feeding study claims was coordinated by the Sustainable Food Trust (SFT), led by former UK organic industry Soil Association executive director Patrick Holden. A PR agency called Greenhouse PR managed the events, with media releases, sample tweets, etc., and a press release telling media that "for pictures of the rats contact Greenhouse PR" (this page has since been removed from the Sustainable Food Trust website). Greenhouse PR also helped Sustainable Food Trust leverage Patrick Holden's close relationship with the Prince of Wales to try to secure positive media coverage for Séralini's controversial GMO corn study. Its website says: "Greenhouse helped organise a series of events hosted by Patrick Holden at Highgrove Farm, home of the Prince of Wales. Events were attended by leading industry opinion formers and key media and included off-the-record debates on issues related to the future of food and farming, followed by a guided tour of the farm.
To raise awareness of GM on behalf of the SFT, Greenhouse also launched peer-reviewed scientific research into the impact of GM feed on the health of rats, accompanied by an educational website calling for more regulation and research."[7] Former SFT staffer Henry Rowlands, now an organic marketing exporter and publisher, hosts and maintains the GMO-Seralini official websites. Seralini is linked to a company called Sevene Pharma, where he is a consultant. The company sells homeopathic remedies. He is also reportedly linked to the 'Invitation to Life' cult.
Books
Séralini, Gilles-Éric (2004). Ces OGM Qui Changent Le Monde. Flammarion Publishing. ISBN 2080800620.
Séralini, Gilles-Éric (2006). Après Nous le Déluge?. Flammarion Publishing. ISBN 2082105490.
Criticisms
Seralini's claims and tactics have been heavily criticized by the regulatory, academic and science watchdog communities. Examples include:
CRIIGEN evaluation, by David Tribe, GMO Pundit, December 2012.
Genetically modified corn and cancer – what does the evidence really say?, by Ashley Ng, The Conversation, September 2012.
700 Researchers Call On Gilles-Eric Seralini To Release GMO Test Data, Science 2.0, October 2012.
Being Gilles-Eric Seralini: Inside the mind of the anti-GM movement, by Jon Entine, Genetic Literacy Project, June 2013.
Scientists smell a rat in fraudulent study, by Bruce Chassy & Henry Miller, Forbes Magazine, September 2012.
Anti-GM corn study reconsidered: Seralini finally responds to torrent of criticism, AEI, November 2012.
Seralini paper influences Kenya ban of GMO imports, by Emily Willingham, Forbes Magazine, December 2012.
Seralini GMO rats & Cancer, Tech 'N You, 2013.
Why I think the Seralini GM feeding trial is bogus, by Andrew Kniss, Control Freaks, September 2012.
Anti-GMO Scientist Gilles-Eric Seralini, Activist Jeffrey Smith Withdraw from Food Biotech Debate, by Jon Entine, Forbes Magazine, May 2013
Seralini anti-Monsanto study was so poorly conducted it harms the anti-GMO movement, The Daily Paul, September 2012.
The Seralini Rule, Skeptico, June 2013. Excerpt: "I have a new rule for debating anti-GMO people: If you favorably cite the 2012 Séralini rats fed on Roundup ready maize study, you just lost the argument..."
March Against Monsanto, The Progressive Contrarian, May 2013. Excerpt: "I mentioned Seralini and Smith were frauds who refuse to publicly debate scientists who want to challenge them..."
Science says GMOs are safe, Skeptical Raptor, June 2013. Excerpt: "But be forewarned, if it is junk science, I will call it junk science, like Gilles-Eric Séralini et al.’s paper about GMO corn causing cancer. Except it was poorly designed, utilized bad statistics, and really provided no evidence whatsoever for anything except that Séralini is an incompetent scientist..."
Was Seralini GMO study designed to generate negative outcome, Storify, October 2012.
References
1. Gilles Eric Séralini – President of the Scientific Board – Molecular Biology Professor, by HH, CRIIGEN website, November 12, 2008.
2. http://gantdaily.com/2013/11/27/is-africa-ready-for-genetically-modified-foods/
3. http://www.journals.elsevier.com/food-and-chemical-toxicology/news/journal-statement/
4. http://mobile.nation.co.ke/business/Study-on-GMOs-withdrawn/-/1950106/2093496/-/format/xhtml/-/lt3pkm/-/index.html
5. http://gmoseralini.org/en/
6. http://gmopundit.blogspot.com/2012/09/auchan-and-carrefour-financed-criigen.html
7. http://greenhousepr.co.uk/clients/about/sustainable-food-trust/
More on genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
9f303fcf046b638fae161d7038318f83 | https://www.forbes.com/sites/jonentine/2014/08/14/got-soy-milk-not-consumer-reports-which-throws-science-under-the-bus-in-warning-about-gmo-soy/ | Got Soy Milk? Not Consumer Reports, Which Throws Science Under The Bus In Warning About GMO Soy | Got Soy Milk? Not Consumer Reports, Which Throws Science Under The Bus In Warning About GMO Soy
Consumer Reports has been a sacred bible for me, providing evidence-based analysis of everyday products that saves a lot of money without sacrificing quality. I’m 62 and I don’t believe I’ve ever bought a car without consulting CR first. Which makes it all the more distressing that this once venerable institution-in-a-magazine has driven off the science cliff in obeisance to the current hysteria—yes, we are listening Neil deGrasse Tyson—over genetically modified foods.
The latest consumer report, “Milk Alternatives: Should You Sip or Skip,” addresses alternatives to dairy milk, such as soy, coconut and almond. As Kevin Folta, head of the University of Florida plant technology program, writes, the editors found fault with almost all of the milk alternatives, for instance pointing out that some contain “heavy metals.”
Heavy metals are a new obsession on the green left, particularly on alternative product sites that try to scare people into buying ‘natural’ products that are often untested, useless or worse.
Every major science agency around the world has determined that the kind of levels CR is noting are biologically meaningless, but that hasn’t stopped it from issuing scare-o-grams. It’s a missed opportunity to educate consumers about the difference between unwarranted fears and genuine dangers, but that’s the direction CR has been heading in recent years.
But the most egregious CR development is its unexplained dissing of GMO soy milk. Upwards of 94% of the US soy crop is GMO so it’s no surprise that your favorite edamame or your morning glass of soy milk is made from soy beans designed to be grown with fewer insecticides (Bt soy) or less toxic herbicides (herbicide resistant soy). In its “Cons” section, CR encourages consumers to “Look for brands with the USDA organic seal or the non-GMO verified label.”
But why? It never explains, and based on CR’s stated intention to rely on evidence to form its judgments, it's violating its own guidance. After all, there is not one published study that suggests that GM soy products are any less nutritious than alternatives; nor are they, or any approved GMO food, harmful in any way. In fact, it’s well established that organic products, including soy milk, are more likely to have a higher risk of pathogen contamination.
This is not the first time that activists have brandished their anti-science club to quash empirically based thinking on the issue of soy milk. In 2009, Dean Foods, owner of the Silk Soymilk brand, faced a barrage of criticism from food activists when it switched from organic to conventional soybeans, calling it their “natural” line—which was correct. The move was actually done for the best of reasons. Most certified organic soybeans are sourced from countries with low or non-existent labor standards; the switch to conventional soy meant that workers would not be exploited to satisfy Silk Soymilk customers.
The radical Organic Consumers Association went on a rampage, calling for a boycott—and it was effective. Rather than stand by its ideals and its fair labor commitment, Dean caved, throwing its worker protection pledge under the bus to embrace the anti-GMO scare campaign. It now positions itself as a driver of the fear bus. “GMOs? No thanks,” Silk.com writes on its web page. “We think the less you mess with Mother Nature, the better.” Hmm, soy milk is a processed food; you have to do considerable “messing” to get from the soy bean to the grocery store. Here’s a primer on how it’s processed.
Consumer Reports, of course, ignores all of this. As Folta notes, CR also offers a backdoor endorsement of organic and non-GMO verified brands—again, without any evidence to back it up.
The only difference between soy milk made with GM soy and alternatives is that the substitutes would almost certainly cost a heckuva lot more because of the price premium extracted by organic producers. In other words, consumers would be paying more for no benefits.
Is that the kind of advice that you want from Consumer Reports? How to spend more for no real benefits?
Who is steering this anti-science? The key driver is almost certainly Michael Hansen, a senior staff scientist long known as an outlier on the GM issue. Hansen tends to hype the alleged potential dangers of GMOs in contrast with every major independent global science oversight committee in the world. Geneticist Val Giddings has provided a comprehensive analysis of the flaws in Hansen’s work at state legislatures where he is often asked to speak because of his connection to CR.
Hansen’s director, Jean Halloran, is little better. She long ago embraced the views of the activist Organic Consumers Association, referring to GM crops as “Frankenfoods,” and claiming that the US food regulatory oversight of GMO safety is a “fraud."
If you’re angry at this kowtowing by CR to the anti-science crowd, you can express your opinion directly by clicking through here and telling the editors of your concerns.
More on biotechnology, genetics and science literacy at the Genetic Literacy Project
Jon Entine, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University.
|
cd30ebb5246e4ff659d917373d8cb235 | https://www.forbes.com/sites/jonentine/2014/08/25/why-liberal-americans-are-turning-against-gmo-labeling/ | Why Liberal Americans Are Turning Against GMO Labeling | Why Liberal Americans Are Turning Against GMO Labeling
Europeans and people in many other countries that consider themselves "liberal minded" scratch their heads over why there is such a big controversy in the United States over the labeling of foods that contain genetically modified ingredients. Sixty-four nations around the world have enacted mandatory labeling laws.
“GM foods are not proven safe. Why not just label them and let the consumer decide?” is a common thread on food blogs. “They must be kowtowing to the GMO lobby.”
That’s what has been characterized as the liberal position: the consumer’s right to know. Many activist groups lobbying for labeling cite a New York Times poll that 93 percent of Americans support it.
So why do the leading independent science organizations in the US and the country's top liberal news publications oppose mandatory labeling?
The federal government has resisted calls to label GM foods on the grounds that there is no substantial difference between them and conventional or organic food. That’s the correct scientific position. Genetic modification is a process. There is no detectable difference between, say, sugar made from GM or organic sugar beets.
The pressure for labeling is coming from legislatures in liberal states such as New York, California, Oregon and Massachusetts, where anti-GMO groups are lobbying relentlessly. Earlier this year, Vermont became the first state to require any genetically modified foods to carry a label, although it is being challenged in court. It likely won't be the last state. Oregon voters will decide on a similar measure in November and about 25 other states have proposed mandatory labeling legislation this year.
But a curious thing is happening. The most enlightened liberal thinkers and the progressive publications in key states are joining with the science establishment to oppose mandatory labeling.
The pro-labeling arguments, they say, boil down to two deceptive talking points: GMOs may be unsafe and are untested—the Frankenfood argument; and GMOs are part of a corporate plot to monopolize the food system—the Argumentum Monsanto, according to science writer Brian Dunning.
Neither is supported by the evidence. “[A] labeling requirement would only serve to confuse consumers,” editorialized the Boston Globe on 30 July, becoming the latest progressive publication to oppose a statewide measure. “Advocates say it would alert those who may object to genetically modified foods to choose other options. But the mere fact of a label would contribute to the stigmatization of food that is actually perfectly healthy. Besides, there’s already an easy solution for the GMO-wary buyer: Labels that tout foods that are not genetically modified.”
The most strident opposition to labeling is on science grounds. As the Washington Post wrote in June, “There is no mainstream scientific evidence showing that foods containing GMOs are any more or less harmful for people to consume than anything else in the supermarket, despite decades of development and use.”
“[T]here is no reliable evidence that genetically modified foods now on the market pose any risk to consumers,” noted The New York Times.
The US Congress has so far rebuffed attempts at mandatory national labeling in part because every major science organization in the world, citing hundreds of independent studies, from the World Health Organisation to the German Academy of Sciences, with many overseen by the European Union, has issued statements reassuring the public about the safety of GM foods and the independence of the global food supply.
While conventional breeding swaps giant chunks of DNA between one plant and another, genetic engineering is far more precise, is less likely to produce an unexpected result, and is pre-tested and monitored after release. Many of the very same organizations that have publicly warned of the dangers of global warming have noted that GM foods are as safe or safer than conventional or organic foods.
They are also more sustainable in many cases because they require lower “inputs”—some GM crops, like Bt sugar beets, are engineered to use natural bacteria to repel pests, all but eliminating the use of toxic insecticides—and result in higher yields. About-to-be-introduced vitamin-enhanced or toxicity-reduced GM foods such as cassava, rice and potatoes will offer consumers clear nutritional benefits.
Scientific American, long regarded as one of the most independent science sources in the world, in its editorial “Labels for GMO Foods Are a Bad Idea,” made the case that labeling will spread scientifically inaccurate information that could harm human health and slow the development of agricultural biotechnology—which while not a silver bullet could play a key role in increasing the global food supply as population pressures escalate in coming decades.
“Antagonism toward GMO foods also strengthens the stigma against a technology that has delivered enormous benefits to people in developing countries and promises far more,” SA wrote. “Ultimately, we are deciding whether we will continue to develop an immensely beneficial technology or shun it based on unfounded fears.”
None of these arguments is apt to sway committed opponents of biotechnology. Just do it, they say; it’s as simple as printing a label, and it has worked in Europe.
But has it?
Scientists, and increasingly independent liberal thinkers, are opposed to mandatory labels precisely because scientists don’t want to replicate what’s happened in Europe: a lack of choice of foods, consistently higher food prices, and an increase in the use of more toxic pesticides, all because GMO foods are shunned.
The stigma encouraged by opponents of agricultural biotechnology comes at a high price, say some independent researchers. A recent joint study by epidemiologists and economists examining the costs of not deploying this technology in a country like India estimated that it has cost billions of dollars and 1.4 million life years over the past decade in that country alone.
The most prominent labeling supporters in the US—all backed by the large and growing organic food lobby, who know that the driver of consumer sales is the unsupported belief that organic foods are safer and more nutritious—have made it quite clear that consumer choice is not at the top of their consumer-rights wish list.
“If we have it labeled, then we can organize people not to buy it,” notes Andrew Kimbrell, head of the Center for Food Safety. “GM foods must be banned entirely, but labeling is the most efficient way to achieve this,” says Joseph Mercola, a wildly popular web based natural products entrepreneur whose income depends on selling alternative health products.
What about that poll that shows that more than 9 in 10 consumers want labeling? It’s less than meets the eye. When American consumers are asked a less loaded question—whether there is any additional information they would like on their labels that’s not there now—only 4% said they support labeling.
The leaders of the labeling movement play the ‘right to know’ card as a subterfuge to scare people about the safety of the conventional food system and to divert attention from the sustainability benefits of GMOs. They want to kill crop biotechnology. “With labeling, [GMO’s] will be zero,” says Vandana Shiva, the Indian activist best known for promoting the false belief that GMOs have resulted in mass genocide in her home country.
Are there tradeoffs in adopting crop biotechnology or large-scale agriculture? Of course, and there is room for healthy dialogue. But make no mistake: Food safety and transparency are not on the pro-label groups’ agenda in the United States.
* * *
JON ENTINE, executive director of the Genetic Literacy Project, is a senior fellow at the Center for Health & Risk Communication and STATS (Statistical Assessment Service) at George Mason University. You can follow @JonEntine on Twitter, and find more on biotechnology, genetics and science literacy at the Genetic Literacy Project.
|
736b4f4fcdef81f79f98d7d7b870963b | https://www.forbes.com/sites/jonfortenbury/2014/09/01/has-the-ice-bucket-challenge-changed-healthcare-fundraising-forever/?ss=pharma-healthcare | Has The Ice Bucket Challenge Changed Healthcare Fundraising Forever? | Has The Ice Bucket Challenge Changed Healthcare Fundraising Forever?
There’s no shortage of Ice Bucket Challenge knockoffs. From giving rice to the needy to taking a pie to the face for suicide prevention, many are trying to start a challenge that will echo what the ALS Association received from the Ice Bucket Challenge in one month, which as of Aug. 29 was $100.9 million from over three million donors. That level of fundraising success in such a short period of time is unprecedented, for the ALS Association or anyone, and everyone with a passionate cause is trying to reach it.
Will any other challenges take off, changing healthcare fundraising for the long haul, or was the Ice Bucket Challenge an isolated success story?
According to Doug White, director of the Master of Science in Fundraising Management program at Columbia University, the success of the Ice Bucket Challenge was viral happenstance that other charities shouldn’t try to replicate.
“Charities may get the impression from this challenge that it’s easy to make money if you find a gimmick and get people to do it,” said White, who has over 30 years of nonprofit leadership experience. “But charities need to do more work at maintaining relationships or growing them, since 40-50 percent of new donors don’t come back.”
White thinks that a major reason why the Ice Bucket Challenge has been so successful—beyond its perfect timing, public model and sheer fun— is its novelty. Since people eventually “get tired of even exciting things,” there’s a “lifespan to this kind of thing,” White says. Another success story like this is not impossible, but too unique and remarkable to plan for or expect, according to White.
Bravelets COO Elisabeth Nakielny doesn’t think the challenge concept is a fleeting success, pointing to the No Makeup Selfie for Cancer Awareness, a campaign that raised £8m in six days for Cancer Research UK in March. Many of those same people who took bare-face photos of themselves in the spring are now dumping ice water over their heads, which goes to show “we’re all looking for that next thing that’s going to take off,” Nakielny says.
“That doesn’t mean that every campaign will be successful, but there’s still room for many of these challenges to be successful in any given year,” said Nakielny, whose company sells jewelry to raise money for various causes.
Nakielny thinks challenges like the Ice Bucket Challenge have potential for global reach, since they make the challenge-concept that’s been largely limited to prep-heavy events like marathons and walks more accessible. As for cultivating relationships with existing donors, she doesn’t see why that can’t coexist with quick, fun campaigns.
No similar campaign, like the HD Pie in the Face Challenge or the Doubtfire Face for Suicide Prevention, has raised in a month anywhere near what the ALS Association raised in America, nor had any nonprofit with fun, great ideas and a wide-reaching network done so before, even after the creation of social media. This has people wondering why the Ice Bucket Challenge was so successful. Was it the fact that the Ice Bucket Challenge was big, simple and selfless? Or personal, social and feel-good? Or just flat-out, it's hot outside and we're sick of negative news? Everyone's throwing around theories and many are plugging away at what they think the formula of its success is by creating new challenges. But it doesn't seem like anyone has definitively figured it out yet.
Until then, we're bound to see many more challenges and it's unclear which will stick. It could be something like a mayonnaise slip n’ slide or hot sauce challenge, as York Technical College English instructor William Folden saw when his students turned in their own challenges for a homework assignment. Or the challenge-concept could die off entirely and nonprofits return to the drawing board.
It’s anyone’s guess at this point.
|
7a7a5723206fc3aa69985d4ed826d122 | https://www.forbes.com/sites/jongalaviz/2013/06/16/the-biggest-turnaround-job-of-2013-the-irs/ | The Biggest Turnaround Job Of 2013: The IRS | The Biggest Turnaround Job Of 2013: The IRS
WASHINGTON, DC - JUNE 6, 2013: Danny Werfel, acting IRS commissioner, arrives at a House Oversight... [+] and Government Reform Committee hearing on Capitol Hill. The committee is hearing testimony on IRS spending and conference abuses. (Image credit: Getty Images via @daylife)
The IRS is in trouble, deep trouble. With over 100,000 employees and a budget exceeding $12 billion, the government agency may be the largest organization in crisis in America today. It is also the most visible turnaround project so far in 2013, corporate or government.
The IRS’ relatively new Acting Commissioner, Mr. Daniel Werfel, is under tremendous pressure from both the White House and Congress to demonstrate leadership in restructuring the IRS in rapid-fire fashion.
With about a month on the job, he has a long road ahead of him - he was appointed to his acting position effective May 22, 2013.
About a week from now, Mr. Werfel is to provide Congress and the White House a comprehensive 30-day report on his interim findings at the IRS. The findings will certainly be illuminating.
Democrats and Republicans seem to bicker all the time. This time both parties are on the same page - they want Mr. Werfel to succeed in a job that many have described as being 'the most thankless job in America'.
And while most in main-street America probably don't know of Mr. Werfel, they want him to succeed too (your author does not intend to propose the idea that Americans want the IRS to succeed in all of its endeavors).
America has some of the world’s greatest turnaround stories in private industry, but in government... let’s just say we don't have a lot of examples to point to.
The business world has a lot of case studies, examples, and CEOs who can talk about what it means to do a turnaround. Mr. Werfel might want to consider borrowing some concepts from the business world and applying them to the task before him at the IRS.
He may want to consider the possibility that the same people who hired him for this project are the same people who created the crisis in the first place. This possibility should always be on his mind. A good turnaround specialist always knows who the ultimate constituency really is. In this case, his ultimate constituency is the American people.
When you are anointed as the turnaround specialist you get some weird perks. One of those perks is that Mr. Werfel gets the right to poke around everywhere in the IRS. His first couple of months in office may be the best time for him to play the 'dumb boss' role - by observing the critical actors in the organization.
In corporate America the best turnaround specialists are usually grounded in strong ethical foundations. To outsiders it may seem that corporate turnaround specialists are anything but ethical, but behind the scenes these individuals bank on their long-term reputations. It will be important for Mr. Werfel to ensure that he maintains his moral compass as he proceeds with a government turnaround.
Transparency will also be key.
When one is asked to clean up a bad situation, it is always best to be 100% open with everybody. Be transparent. Be open with your constituents, be open with your organization, and be open with those that made the mistakes. The more transparent you are, the more people will see you as an agent of positive change rather than a ‘blame redirection specialist’.
Sure, there will be debates and arguments about strategy. There will be internal enemies. There will be allies made. But Mr. Werfel must be absolutely prepared for the truth, because it will come out sooner or later. He must be prepared to hear the truth, deal with it, and report on it.
The last thing that Mr. Werfel should remember is that nobody will give him genuine thanks - not even the people who hired him. Even if Mr. Werfel does the best of jobs, they won't say thanks beyond the public niceties.
But what Mr. Werfel should be confident of knowing is this: if he completes a successful turnaround of the IRS, he will be in demand by more large corporations than he could ever possibly imagine.
|
9c5b9c311a63fa8156dc602fd0761de9 | https://www.forbes.com/sites/jonhartley/2014/09/08/draghis-case-for-ecb-quantitative-easing/ | Draghi's Case For Quantitative Easing in Europe | Draghi's Case For Quantitative Easing in Europe
Mario Draghi, president of the European Central Bank, has finally announced that the bank plans to engage in a form of quantitative easing through the purchase of private sector credit, including asset-backed securities and covered bonds, in addition to a new cut in interest rates (the benchmark refinancing rate has been cut from 0.15% to 0.05% and the deposit rate has been cut from -0.1% to -0.2%). The policy decision follows the annual inflation rate in Europe falling to a five-year low of 0.3%, well below the ECB’s 2% target, while the unemployment rates in Italy, Spain, France and Greece remain in the double digits. Though the long-term asset purchase program and additional rate cut have been opposed by the German Bundesbank chief and Angela Merkel ally, Jens Weidmann, the important question is how effective the new long-term asset purchase program will be together with the recently announced bank lending program, particularly without the type of pro-growth structural reforms that Draghi said were missing from Eurozone governments during his recent Jackson Hole speech.
European Central Bank President Mario Draghi (Photo Credit: Daniel Roland/AFP/Getty Images)
Europe flirts with deflation while unemployment remains high
The most recent Eurozone annual inflation rate figure for August of 0.3%, released ahead of the ECB governing council meeting last week, underscored the call for further monetary stimulus to get back to the central bank’s 2% target. Serious concern has mounted surrounding the potential for deflation, as the year-over-year euro area inflation rate has been below 1% for nearly a year.
Euro Area Inflation (Annual % Change)
Source: European Central Bank
With respect to the labor market in Europe, as of July 2014, the unemployment rates for Greece (27.2%), Spain (24.5%), Italy (12.6%), and even France (10.3%) remained in double-digit territory as the euro area weighted-average unemployment rate stood at 11.5%. This is in stark contrast to the U.S. and Germany, which now have unemployment rates of 6.1% and 5.1% respectively following much stronger economic recoveries relatively untarnished by major debt crises.
Getting to the heart of this divergence, while Europe and the US shared in soaring unemployment during the Great Recession, the euro area began a second rise in unemployment that peaked in April 2013, reflecting a six-quarter recession emanating from the Greek sovereign debt default, which caused panic in sovereign bond markets across Europe. Unlike the Great Recession, which affected virtually all euro area economies, the majority of the job losses observed in the second recession were concentrated in countries that were adversely affected by government bond market tensions. For Draghi, the weak labor market following this second European recessionary period merits further commitment to accommodative policy, including cutting interest rates to new lows, announcing a new bank lending program, and credit easing through the purchase of asset-backed securities (ABS).
Nevertheless, Draghi made clear in his recent Jackson Hole remarks that “No amount of fiscal or monetary accommodation, however, can compensate for the necessary structural reforms in the euro area”, calling for permanent pro-growth fiscal reforms among Eurozone member countries which have yet to materialize.
U.S. and Euro Area change in the unemployment rate since 2008
Source: European Central Bank/Mario Draghi
ECB quantitative easing will consist of ABS and covered bond purchases, but not sovereign bond purchases
The aim of the new ABS and covered bond purchase program is to increase the size of the ECB's balance sheet, and hence depreciate the euro, which can then spur exports and a wealth effect stemming from higher asset prices. In essence, the ECB wants to return the size of its balance sheet to what it was at the start of 2012 when the balance sheet was at €3 trillion or $3.94 trillion (the balance sheet currently sits at just over €2 trillion). This implies a possible €1 trillion ($1.29 trillion) expansion embodied in the new stimulus measures.
With limited issuance currently of asset backed securities in the Eurozone, this type of quantitative easing will be slower to take effect than the outright purchase of sovereign government bonds like forms of quantitative easing engaged in by the Bank of Japan and the Federal Reserve (the Fed also bought MBS as part of its asset purchase program). The size of the ABS market in Europe is relatively small, limited to the hundreds of billions of euros. Part of the aim of the ECB's program is to entice more ABS and covered-bond issuance, by building up demand for the asset classes, to further the availability of credit to the real economy. That being said, the ECB's last covered-bond program, in 2011-2012, failed to reach its target amount.
Securitized loans market in the Euro area
Source: Securities Industry and Financial Markets Association (SIFMA)
This does beg the question of why the ECB does not just go ahead and buy sovereign government bonds outright if Europe is to get involved in quantitative easing. As part of the recent announcement, Draghi said that he didn't and won't rule out sovereign government bond purchases as an option going forward. However, there is a real question of to what extent long-term government bond purchases would be effective at further lowering government bond yields across Europe, which are already at historic lows, down from the heights of the European sovereign debt crisis.
U.S. and Eurozone country 10-year bond yields (%)
Source: Federal Reserve Bank of St. Louis, European Central Bank
Ultimately, this raises the question of whether monetary policy in Europe is finding its limit. In many ways this echoes the statement by Draghi that no amount of monetary accommodation can compensate for the necessary structural reforms in the euro area needed for the economy to fully recover.
Euro begins to slide against the U.S. dollar
Immediately after the rate cut and new stimulus measures were announced, the euro fell 1.6% against the U.S. dollar to a 14-month low of $1.29, in-line with the ECB's implicit goal of weakening the currency.
The EUR/USD Exchange Rate
German Bundesbank chief dissents from ECB policy announcement
Germany's central bank chief and Angela Merkel ally Jens Weidmann has publicly opposed and voted against the European Central Bank's new stimulus program. According to the Wall Street Journal’s source, he would have “preferred to wait to gauge the effects of four-year ECB loans to banks, due to start later this month, before taking new stimulus measures”. In addition, he expressed a belief that Eurozone “inflation has bottomed out… and that after one or two months it should begin to gradually drift higher” and would avoid a deflationary spiral. The continued dissent from Germany, continental Europe’s largest economy, will undoubtedly be an important political factor for Draghi in deciding the extent of new asset purchases and ultimately when they will end.
Jens Weidmann, president of the Deutsche Bundesbank. (Photo Credit: Bloomberg News)
More European reliance on the ECB and waiting until October for more details on QE
While the market and those closely following monetary policy will have to wait until October's ECB meeting to find out more details about the new asset purchase program, it is clear that hopes for Europe’s economy will increasingly rely on it, together with the new ECB lending program, as it is unlikely that structural fiscal reforms from individual countries will arrive in the near future.
|
ef48f45294fae4e74b7c0757b00d346e | https://www.forbes.com/sites/jonhartley/2014/09/15/social-impact-bonds-are-going-mainstream/ | Social Impact Bonds Are Going Mainstream | Social Impact Bonds Are Going Mainstream
Now making waves in public finance circles are social impact bonds (SIBs). The bipartisan funding concept is a type of “Pay For Success” model where private investors invest capital and manage public projects, usually aimed at improving social outcomes for at-risk individuals, with the goal of reducing government spending in the long-term. Some social impact bonds seek to reduce the prison population through funding rehabilitation and employment programs for first-time offenders with the ultimate goal of reducing recidivism rates. Other SIBs seek to reduce the number of children in foster care. The catch is that private investors front all the costs and will be paid back a financial return by the government if and only if social outcomes are improved based on some standard measurement. The profit-motivating component comes from the fact that some of the savings from reduced costs for the government can be used to pay back the investor contingent upon their success. Now, Congress is considering the bipartisan Social Impact Bond Act, legislation that will enable the U.S. federal government to allocate $300 million to SIBs. A House Committee on Ways and Means hearing discussing the merits of social impact bonds led by the two co-sponsors of the bill, Rep. Todd Young (R-IN) and Rep. John Delaney (D-MD), was held last week.
Rep. Todd Young (R-IN), one of the co-sponsors of the bipartisan Social Impact Bond Act (Photo credit: TheStateHouseFile.com)
Key endorsements from economists, policy experts, and financiers
U.S. think tanks across the political spectrum, ranging from the Center For American Progress on the left to the American Enterprise Institute on the right, have supported SIBs. Similarly, Harvard economist Larry Summers and hedge fund manager Bill Ackman have both supported SIBs and have invested their own capital in them. Major financial institutions like Goldman Sachs and Bank of America Merrill Lynch have also begun funding social impact bonds.
Social impact bond mechanics
To illustrate how a social impact bond works, consider the Adolescent Behavioral Learning Experience (ABLE) Program, the first social impact bond in the United States, which was launched while I was at Goldman Sachs in partnership with the City of New York and Bloomberg Philanthropies, investing $10 million to reduce recidivism at Rikers Island in New York City. The group partnered with MDRC, an intermediary who oversees the day-to-day implementation of the project and manages the non-profit service providers who deliver the intervention. Goldman Sachs receives its capital back only if the re-admission rate, as measured by total jail days avoided, is reduced by 10% or more, and should the reduction go beyond 11%, Goldman Sachs receives a financial return whose magnitude is correlated with the reduction in the re-incarceration rate and associated savings to the government.
Rikers Island Social Impact Bond Payment Schedule
Source: Goldman Sachs
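This contingent payout structure is simple to express in code. Below is a minimal Python sketch of the logic described above: nothing is paid below a 10% reduction, the principal is returned between 10% and 11%, and the return scales with the measured reduction beyond 11%. The thresholds come from the description above; the 5%-of-principal-per-point scaling factor is purely an illustrative assumption, not the actual Goldman Sachs payment schedule.

```python
def sib_payout(principal: float, reduction_pct: float) -> float:
    """Hypothetical payout to an SIB investor given a measured
    reduction in recidivism (in percent).

    Thresholds follow the Rikers structure described above; the
    per-point scaling beyond 11% is an illustrative assumption.
    """
    if reduction_pct < 10.0:
        return 0.0          # outcome target missed: the government pays nothing
    if reduction_pct < 11.0:
        return principal    # principal returned, but no profit
    # Assumed schedule: each percentage point beyond 11% adds 5% of principal,
    # mirroring the idea that returns rise with the savings to government.
    return principal * (1.0 + 0.05 * (reduction_pct - 11.0))

# A $10 million investment (the ABLE program size) under three outcomes:
for reduction in (9.0, 10.5, 14.0):
    print(f"{reduction:4.1f}% reduction -> payout ${sib_payout(10_000_000, reduction):,.0f}")
```

The key design point is visible in the first branch: unlike a conventional bond, the investor, not the taxpayer, bears the full downside if the social outcome target is missed.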
Bank of America Merrill Lynch has followed suit and has now launched its own Social Impact Bond, which is open to investors in its own private wealth channel.
Social impact bond intermediaries, who play an essential part in overseeing and managing the social impact bond project, are now proliferating, all competing for social impact bond contracts through government-launched Requests For Proposals (RFPs). Social Finance, a non-profit that structured and managed the first social impact bond ever in the U.K., has become the pre-eminent social impact bond intermediary worldwide. Third Sector Capital Partners, the Harvard Social Impact Bond Technical Assistance Lab, Private Capital For Public Good, and Finance For Good are other major players in the SIB intermediary space.
Social Impact Bond Mechanics Diagram
Source: Jie Bao
Proven success for social impact bonds in the U.K.
In September 2010, the first ever SIB was launched in the UK. Approximately £5 million invested by private individuals and charities is being used to pay for interventions for offenders discharged after serving short prison sentences (less than 12 months) at HMP Peterborough, a prison in eastern England.
Years after the intervention was implemented, the results are now in, and they speak very favorably to the efficacy of SIBs. Before the pilot, for every 100 prisoners released from Peterborough there were 159 reconviction events annually. Under the social impact bond intervention scheme, this figure has fallen to 141 — a significant fall of 11% in the recidivism rate for the affected group. That figure is relative to a 10% rise in the national U.K. recidivism rate over the same period (the RAND Institute, which was commissioned to evaluate the success of the program, has an excellent detailed report on the pilot program).
With the positive results from Peterborough now in, the U.K. has taken worldwide leadership in further developing SIB programs nationally, with 15 SIB programs now in place across the country. Going beyond the U.K., 10 SIB programs have been announced across the U.S., Canada, Belgium, the Netherlands, Germany, and Australia, with many more being considered by local and state governments. The number of projects in the U.S. will expand considerably further if the Social Impact Bond Act is passed, granting $300 million to SIB projects.
Social Impact Bonds launched in the United Kingdom
(Photo Credit: Emma Tomkinson)
Social Impact Bonds (SIBs) launched outside the United Kingdom
(Photo Credit: Emma Tomkinson)
A market-oriented funding solution to government expenditures catches on
At a time when government finances worldwide are becoming stretched with high debt burdens and fiscal reforms aimed at reductions in government spending, there is growing interest in finding new ways to fund public services in a more cost effective manner. Social impact bonds are demonstrating that they can do exactly that.
Hypothetical Social Impact Bond Cost Diagram
Source: Social Finance US
|
22ad2dca89941f9c6ea127f25ef7e7cb | https://www.forbes.com/sites/jonhartley/2014/11/02/bank-of-japan-announces-more-quantitative-easing-the-next-chapter-in-abenomics/ | Bank Of Japan Announces More Quantitative Easing: The Next Chapter In Abenomics | Bank Of Japan Announces More Quantitative Easing: The Next Chapter In Abenomics
Last week, as the Federal Reserve officially announced the end of its long-term asset purchase program (commonly known as QE3), the Bank of Japan significantly ratcheted up its own quantitative easing program, in a surprising 5-4 split decision. Starting next year, the Bank of Japan will increase its balance sheet by 15 percent of GDP per annum and will extend the average duration of its bond purchases from 7 years to 10 years. The big move by Japan’s central bank comes amid the country’s GDP declining by 7.1% in the second quarter of 2014 (on an annualized basis) from the previous quarter following the increase of the VAT sales tax from 5% to 8% in Japan earlier this year, and worries that Japan could fall into another deflationary spiral or fail to reach its stated goal of 2% inflation by 2014 (likely to be reformulated as 2% by 2015). The Government Pension Investment Fund (GPIF), Japan’s $1.1 trillion government pension fund, simultaneously announced its intentions to increase its overall equity holdings from 24% to 50% and reduce its domestic bond holdings from 60% to 35%.
Bank of Japan Governor Haruhiko Kuroda speaks to the press in Tokyo on Friday (Photo Credit: Agence... [+] France-Presse/Getty Images).
Bank of Japan expands its balance sheet further while the Federal Reserve ends its own asset purchases
The combined effect of both the expanded BOJ asset purchases and the GPIF shift in asset allocation to equities is to introduce a new type of QE on an enormous scale. Relative to the size of Japan's economy, the new asset purchase program is far larger than anything attempted by the other major central banks. The Nikkei stock index rose 5% on the news of the enhanced monetary stimulus measures, to its highest level since late 2007. The yen also fell to its lowest level in 7 years following the announcement.
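To put the GPIF side of that scale in perspective, here is a quick back-of-the-envelope calculation in Python. The $1.1 trillion fund size and the target allocation weights are the figures reported above; the resulting dollar amounts are simple arithmetic, not official GPIF projections.

```python
# Rough scale of the GPIF reallocation described above.
# Fund size and target weights are taken from the article;
# the dollar figures are approximations, not official projections.
fund_size = 1.1e12  # approximate GPIF assets, in USD

equity_shift = fund_size * (0.50 - 0.24)  # equities raised from 24% to 50%
bond_shift = fund_size * (0.60 - 0.35)    # domestic bonds cut from 60% to 35%

print(f"New money into equities: ${equity_shift / 1e9:,.0f} billion")
print(f"Out of domestic bonds:   ${bond_shift / 1e9:,.0f} billion")
# -> roughly $286 billion into equities, $275 billion out of domestic bonds
```

Even as a rough estimate, a swing on the order of a quarter-trillion dollars each way helps explain why markets treated the announcement as stimulus on top of the BOJ's own purchases.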
Total Central Bank Assets (% of GDP)
Source: Financial Times, IMF, Haver Analytics, Fulcrum Asset Management LLP
On the heels of the enhanced quantitative easing announced by the Bank of Japan, the yen has fallen to new lows relative to the U.S. dollar, a decline which began after the Bank of Japan announced its initial round of quantitative easing in 2012. The yen has also lost a third of its value relative to the euro since reaching its peak value.
Yen/USD Exchange Rate
Source: FRED
The scholarly debate on the effectiveness of quantitative easing is just beginning
The Bank of Japan also said it would triple its purchases of exchange-traded funds (ETFs) and real-estate investment trusts (REITs) and buy longer-dated debt. While the European Central Bank (ECB) continues to avoid sovereign bond purchases due to legal questions posed by Germany, they continue to favor covered bonds and asset backed securities (ABS) as part of its monetary stimulus program. On the other hand, the Bank of Japan efforts look quite different in terms of the composition of its asset purchases and the size of them. In 2012, some of the BOJ’s asset purchases included not just JGBs (Japanese sovereign bonds) but also Nikkei ETFs. The central bank recently ramped up its ETF purchases, buying a combined ¥92.4 billion ($904.2 million) in Nikkei ETFs over the first six business days of August, the BOJ's longest and largest consecutive buying streak since it started purchasing ETFs in December 2010. Professor Willem Buiter of the London School of Economics has proposed a terminology to distinguish “quantitative easing”, or an expansion of a central bank's balance sheet, from what he terms “qualitative easing”, or the process of a central bank adding riskier assets to its balance sheet like the Bank of Japan has with the purchase of Nikkei ETFs. One could argue that the Bank of Japan's previous failed effort to defeat deflation with quantitative easing in the five years leading up to 2006 is a motivating factor in ratcheting up quantitative easing to avoid a repeated deflationary spiral.
Liquidity injections by Central Banks (% of World GDP, 12 month change)
Source: Financial Times, IMF, Haver Analytics, Fulcrum Asset Management LLP
As the Federal Reserve has ended its own asset purchase program, it may consider selling some of its assets as it begins to plan its “lift-off” date for raising the federal funds rate above its near-zero target. The debate surrounding to what extent quantitative easing programs have been effective at providing liquidity, reducing borrowing costs, and encouraging lending rather than just inflating stock prices will ensue for many years to come. As the great experiment in unconventional monetary policy continues, Japan will give us many more data points.
|
42f771fda7eb956e893a649b163ac9a0 | https://www.forbes.com/sites/jonhartley/2015/10/14/canadian-prime-minister-stephen-harpers-pro-growth-record-on-taxes-spending-and-regulation/ | Canadian Prime Minister Stephen Harper's Pro-Growth Record On Taxes, Spending And Regulation | Canadian Prime Minister Stephen Harper's Pro-Growth Record On Taxes, Spending And Regulation
Conservative Canadian Prime Minister Stephen Harper faces a tough battle ahead of the imminent October 19 election in Canada with polls putting him in serious competition with M.P. Justin Trudeau, the leader of the Liberal Party of Canada.
What many Canadian voters should remember at the polls is Harper's stunningly effective record of bringing market reforms to a country historically known for its big government liberalism and universal health care system.
Canadian Prime Minister Stephen Harper at the 2010 World Economic Forum Annual Meeting in Davos... [+] (Photo Credit: Remy Steinegger)
The Harper government has shown serious commitment to tax reform in Canada
While the U.S. has maintained the highest corporate tax rate in the OECD at 35%, the Harper government lowered the Canadian federal corporate tax rate to 15% in 2012, down from 28% when he took office in 2006. The tax policy even encouraged companies like Burger King to reincorporate in Canada, bringing jobs and tax revenue with them.
In fact, a recent KPMG report, Focus on Tax, ranked Canada as the #1 country with the most business-friendly tax structure among developed countries when adding up a wide range of tax costs to businesses, from statutory labor costs to harmonized sales tax. When comparing developed countries to what companies pay in the U.S., Canada came in at 53.6%, the U.K. at 66.6%, and the Netherlands at 74.5% of the U.S. corporate tax burden.
On the personal income tax level, the Harper government has reduced taxes for families, introduced Tax Free Savings Accounts (Roth-like retirement investment vehicles where contributions can grow tax free), and lowered the country's high federal VAT from 7% to 5% (which is applied on top of each provincial VAT such as Ontario's 8%).
The Harper government's commitment to keeping taxes low in every domain has extended to their popular commitment to stop the "Netflix Tax," a proposed sales tax on the streaming service's subscription fees, where Stephen Harper in a recent video defended Netflix's societal contribution and spoke to his love of television series like Breaking Bad.
The Harper government has demonstrated significant fiscal restraint when it comes to spending
This year, under the leadership of conservative finance minister Joe Oliver, the Canadian government balanced the budget and posted the country's first surplus since 2007.
The country's $1.4 billion surplus is a particularly laudable achievement in the wake of declining Canadian economic activity and tax revenues resulting from falling oil prices (Canada is a net oil exporter).
The 2015 election will ultimately determine the future path of government spending in Canada as Liberal Party leader Justin Trudeau has promised tens of billions of dollars in increased government spending that would mathematically run deficits of up to $10 billion per year for the next three years.
While the New Democratic Party's leader Thomas Mulcair promises to maintain balanced budgets, his party, the third largest in Canada, promises an even greater amount of spending than the Liberals, which would be accompanied by astronomical tax rates.
The only hope for continued balanced budgets, reasonable tax rates and fiscal responsibility would be another Harper term.
The Harper government has demonstrated leadership on regulatory reform particularly when it comes to the energy industry
The Harper government has been consistently vocal on regulatory issues like approving the Keystone XL pipeline, arguing that holding back its development prevents both Canadian and American jobs from being created.
In fact, Gary Doer, Harper's ambassador to the U.S., wrote a scathing letter earlier this year to Secretary of State John Kerry, in which he criticized the Environmental Protection Agency’s view that the Keystone XL pipeline would contribute to increased greenhouse gas emissions from the Canadian oil sands in Alberta, saying “one is left with the conclusion that there has been significant distortion and omission to arrive at the EPA’s conclusions.”
This is just one example of the Canadian conservative government's leadership on tackling regulations that have held back economic growth in Canada and abroad.
As Stephen Harper approaches his 10th year as Canadian prime minister and some reasonably suggest that he should allow another M.P. to take up the party leader mantle following the election, Harper has demonstrated serious commitment to free-market reform in Canada on taxes, spending and regulation. The rest of the world could use more Stephen Harpers when it comes to economic policy.
|
7b340c29e40409aa04d0a474c3809536 | https://www.forbes.com/sites/jonhartley/2015/11/02/congress-averts-another-standoff-by-raising-debt-ceiling-in-bipartisan-agreement-tackling-gridlock/ | Congress Averts Another Standoff By Raising Debt Ceiling In Bipartisan Agreement, Tackling Gridlock | Congress Averts Another Standoff By Raising Debt Ceiling In Bipartisan Agreement, Tackling Gridlock
Sen. Majority Leader Mitch McConnell (R-KY) lauded Congress’ bipartisan effort to avert another... [+] default or potential downgrade on its debt through passing a bill in both houses that raises the debt ceiling through mid-March 2017, well into the next Presidency. (AP Photo/Alex Brandon)
The past week has been marked by major events in U.S. politics, including the third Republican presidential debate on CNBC and Rep. Paul Ryan being named the 54th Speaker of the House. One news item receiving much less attention is Congress’ bipartisan effort to avert another default or potential downgrade on its debt through passing a bill in both houses that raises the debt ceiling through mid-March 2017, well into the next presidency. Averting a scenario similar to 2011, when the U.S. came close to default amid congressional discord and its sovereign debt was downgraded by S&P, creating substantial financial market volatility, congressional leaders from both parties reached an agreement with the White House to increase spending by $80 billion through September 2017 and increase the federal borrowing limit until mid-March 2017.
Sen. Majority Leader Mitch McConnell (R-KY) lauded the agreement, saying it “rejects tax hikes, secures long-term savings through entitlement reforms, and provides increased support for our military.” Following Rep. John Boehner’s final promise as outgoing Speaker of the House to avert default with a clean House bill, Senate Minority Leader Harry Reid (D-NV) offered rare praise, saying that “I have to admit that I was skeptical that he wanted to clean out the barn before he left…But he found a way.”
U.S. Policy Uncertainty Remains Elevated Ahead of December Continuing Resolution Vote
While the bipartisan agreement to lift the debt ceiling through 2017 and make long-term commitments to entitlement reform is certainly a move in the right direction in mitigating policy uncertainty and avoiding another U.S. debt downgrade scenario, there are still some questions around what may happen with the upcoming December vote to pass appropriations legislation (a new continuing resolution) that would fund the government. Failure to reach such an agreement could result in another government shutdown similar to the three-week shutdown in 2013.
Davis, Baker, and Bloom U.S. Economic Policy Uncertainty Index. Source: www.policyuncertainty.com; Baker, Bloom and Davis (2012)
The Cost of Regulatory Policy Uncertainty, Regulatory Reform and the Code of Federal Regulations (CFR)
University of Chicago professor Steve Davis has an interesting new paper out detailing the sheer size of the Code of Federal Regulations (CFR), which compiles all federal regulations in effect each year. He notes that the CFR “grew nearly eight-fold over the past 55 years, reflecting tremendous growth in the scale and complexity of federal regulations”. For the sake of comparison, Davis also notes that “At 175,000 pages, the CFR contains as many words as 130 copies of the King James Bible”.
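As a back-of-envelope check, the comparison is internally plausible. Here is a minimal sketch, assuming the commonly cited figure of roughly 783,000 words for the King James Bible (the per-page density is not given in the excerpt, so it is derived here):

```python
# Back-of-envelope check of the "130 King James Bibles" comparison.
# Assumption: the King James Bible runs roughly 783,000 words.
KJV_WORDS = 783_000
CFR_PAGES = 175_000
BIBLE_COPIES = 130

implied_words_per_page = BIBLE_COPIES * KJV_WORDS / CFR_PAGES
print(f"Implied CFR density: ~{implied_words_per_page:.0f} words per page")  # ~582
```

Roughly 580 words per page is in the right range for densely set regulatory text, so the comparison holds up.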
Indeed, the sheer complexity of the regulatory system can hamper economic growth. Scott Baker, Nicholas Bloom and Steve Davis find in a recent NBER working paper that “policy uncertainty raises stock price volatility and reduces investment and employment in policy-sensitive sectors like defense, healthcare, and infrastructure construction.”
Many policymakers, intellectuals and presidential candidates like Jeb Bush and Marco Rubio have proposed introducing a regulatory budget, an accounting tool that would put the economic costs of regulations into the federal budget process. Currently, the economic costs of regulations are not formally considered in the federal budget process, in contrast to government taxation and spending. Given the considerable evidence that regulations carry economic costs, it is sensible to estimate their distortionary effects and to limit regulations to cases where they are absolutely necessary.
The United Kingdom has been operating under a regulatory budget since 2011, and Canada began implementing its own version earlier this year as an initiative of the outgoing conservative Harper government. In the U.S., it will ultimately be up to the next President to adopt accounting that reflects the economic burden of the gargantuan 175,000 pages of regulations contained in the CFR and to begin the process of serious regulatory reform.
|
b27e798e208cd9bd2fd1235a2539453d | https://www.forbes.com/sites/jonhartley/2016/02/29/falling-productivity-underlying-slow-gdp-growth-and-how-monetary-policy-became-the-only-game-in-town/ | How 'The New Normal' Of Economic Growth Took Shape | How 'The New Normal' Of Economic Growth Took Shape
This past weekend in Shanghai, G-20 finance ministers and central bank governors convened their annual meeting to discuss the prospects for global growth and the potential for a coordinated policy response, which was met with mixed emotions and great uncertainty. With revised 2015 U.S. real GDP growth estimates coming in at nearly 2% and growing uncertainty about spillovers from slowing growth in emerging market economies like China, OECD forecasters predict continued tepid growth of 2-3% for the U.S. and many of the other developed market nations.
The phenomenon of slow secular GDP growth of 2-2.5% (below the long-term trend of 3%) in the wake of the Great Recession is a continuing pattern, one which has drawn the scholarly attention of many. Two new scholarly books, one by former PIMCO CEO Mohamed El-Erian and another by Northwestern macroeconomist Bob Gordon, attempt to pinpoint the underlying drivers behind “the new normal” of slow growth. Both books have gained accolades and reached the New York Times bestseller list in recent weeks.
Mohamed El-Erian warns us about how monetary policy has become “The Only Game In Town,” encouraging a pivot toward fiscal policy and structural reforms as part of a more balanced economic policy response to “the new normal.”
The Only Game In Town by El-Erian is an adept history of central banking, from its origins to the new tools of unconventional monetary policy used by the world’s major central banks, and an argument that global economies need to pivot away from over-reliance on monetary policy toward a more balanced set of economic policies that also includes fiscal policy and structural reforms.
Building on When Markets Collide, which won the FT Best Business Book of the Year Award in 2008, the former CEO of PIMCO reminds us of his prowess for diagnosing trends in financial markets and global economies. Indeed, it was El-Erian who coined the now ubiquitous phrase “The New Normal” to describe the prolonged slow growth recovery. Now El-Erian has sought to delineate the history of central banking and detail its current state in the world.
The central thesis of El-Erian’s new book is that in a world where debt-financed fiscal policy has nearly hit its limit in the major developed economies for political or economic reasons, monetary policy has become “the only game in town” as the policy measure used to stimulate economic growth, one with several short-run benefits for the economy but long-term costs to financial stability.
In El-Erian’s judgment, unconventional monetary policy measures such as quantitative easing (long-term asset purchases) have been a helpful tool to reduce unemployment when ailing economies were at their bottom, but many of the world’s developed economies have since turned a corner. The point is that economic policymakers should have a balanced, coordinated policy response, something absent up to this point. Many global economies could significantly benefit from structural reforms and rebuilding infrastructure, decisions that can only be made by political office holders. At the same time, over-reliance on permanently sustained asset purchases could potentially cause serious distortions in financial markets at the risk of threatening financial stability.
Northwestern economist Bob Gordon paints a pessimistic future of long-run U.S. growth, pointing to slowed productivity growth, a trend which initially began in the 1970s.
Gordon’s new book, The Rise and Fall of American Growth, offers a deeper explanation for the underlying mechanics behind slowed economic growth. The book has become widely popular within academic economics circles because of its controversial thesis that the recent period of slow growth is not just temporary but potentially permanent, symptomatic of the decline in U.S. productivity that began in 1970.
Gordon argues that U.S. GDP growth peaked together with total factor productivity (TFP) growth between 1870 and 1970, a 100-year span which saw rapid technological innovation from the automobile to the telephone to the light bulb filament. Controversially, Gordon regards these technological improvements occurring during that time frame as superior to the productivity benefits delivered by the computer age.
In Gordon’s telling, the recent decline in GDP growth is a direct result of the absence of innovations of the same magnitude, a thesis he began to expound in a 2012 paper, “Is U.S. Growth Over?,” reiterated in a 2015 paper, “Secular Stagnation on the Supply Side,” and in his new 762-page tome, which Nobel Prize Laureate Robert Shiller recently hailed as Gordon’s magnum opus.
Real GDP Per Capita Growth (1300-2010)
Google chief economist Hal Varian argues the U.S. does not have a productivity problem but instead a productivity measurement problem in its GDP accounting methodology.
While Gordon offers a pessimistic view of the long-term trajectory of GDP growth as a result of falling productivity, Google chief economist Hal Varian offers an optimistic counterpoint: perhaps we are simply not measuring productivity correctly in our national accounts.
Chiefly, Varian points out that the current GDP accounting methodology understates the rapidly growing contributions of software and various internet applications to consumer surplus, despite the fact that many of these items are free.
In short, much of this has to do with the history of GDP accounting. GDP was conceived in the 1930s, when economists were concerned with how to measure technological improvements in the production of steel and grain.
Properly accounting for 21st-century technological improvements and timesaving apps is a more difficult task, one that may be sorted out by Department of Labor statisticians in the years to come.
|
025a2f0bac365f9eee9d50d84a96a2e5 | https://www.forbes.com/sites/jonhartley/2017/03/31/how-european-regulators-are-hindering-the-feds-ability-to-raise-interest-rates/ | How European Regulators Are Hindering The Fed's Ability To Raise Interest Rates | How European Regulators Are Hindering The Fed's Ability To Raise Interest Rates
Federal Reserve Chair Janet Yellen testifies during a Senate Banking Committee hearing on her nomination in November 2013. Reuters/Joshua Roberts
As the Federal Reserve raised its target range for the federal funds rate by 0.25% this month (the third Fed rate hike since the Great Recession), something often overlooked is the degree to which the effective federal funds rate is able to stay within its 0.25%-wide target band (currently between 0.75% and 1%).
In particular, the effective fed funds rate has been incredibly volatile at quarter-end since the implementation of the Fed's overnight reverse repo facility, which the Fed now uses to raise interest rates: its balance sheet has pushed the supply of reserves beyond the point where supply and demand would otherwise meet, so the reverse repo rate now serves as a rate floor.
European bank regulators are in many respects responsible for this quarter-end fed funds rate anomaly, since their implementation of Basel III capital requirements is measured entirely on quarter-end balance-sheet figures (different from the approach taken by U.S. regulators, who use daily averaging to compute leverage ratios). Effectively, this encourages European banks to engage in "window dressing". In short, window dressing is the process by which European banks dramatically shift the composition of their balance sheets at month- and quarter-end to appear safer, while moving back into riskier assets at other times.
As a consequence of the European quarter-end system, European bank capital floods U.S. cash markets as balance sheets are window-dressed on quarter-end dates (like today), and as a result, U.S. money market rates see a substantial drop on those dates.
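To see why snapshot measurement invites this behavior, consider a minimal sketch (balance-sheet figures invented for illustration, with the leverage ratio simplified to capital divided by total exposure): a bank graded only on its quarter-end figure can trim its balance sheet for a single day and report a far healthier ratio than the same bank graded on a daily average.

```python
# Hypothetical illustration of quarter-end "window dressing" vs. daily averaging.
# Numbers are invented; the leverage ratio here is simply capital / total exposure.

QUARTER_DAYS = 90
capital = 50.0             # bank capital, in billions (assumed constant)
normal_exposure = 1500.0   # typical daily balance-sheet size
dressed_exposure = 1000.0  # shrunken balance sheet on the reporting day

# Daily exposures: large all quarter, trimmed only on the last (reporting) day.
exposures = [normal_exposure] * (QUARTER_DAYS - 1) + [dressed_exposure]

snapshot_ratio = capital / exposures[-1]                   # European-style: quarter-end only
average_ratio = capital / (sum(exposures) / QUARTER_DAYS)  # U.S.-style: daily averaging

print(f"quarter-end snapshot leverage ratio: {snapshot_ratio:.2%}")  # ~5.00%
print(f"daily-average leverage ratio:        {average_ratio:.2%}")   # ~3.35%
```

Under the snapshot rule, one day of trimming flatters the ratio for the entire quarter, which is precisely the incentive described above; daily averaging makes the same maneuver nearly pointless.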
The federal funds rate drops at month end and quarter end as European banking capital floods the U.S., in large part a result of window dressing
Effective federal funds rate since the launch of the Fed's overnight reverse repo facility. Sources: FRED/Federal Reserve Bank of St. Louis; NY Fed FR 2420; Jon Hartley
In addition, even the reverse repo facility's take-up is affected on quarter-end days, with enormous spikes in the amount of assets placed into the facility at month- and quarter-end.
Assets in the Federal Reserve's overnight reverse repo facility spike at quarter end
Assets in the Federal Reserve's overnight reverse repo facility. FRED/Federal Reserve Bank of St. Louis
The European quarter-end bank capital regulatory framework seriously impacts the federal funds rate on quarter ends and hinders financial stability in Europe
With the Fed's balance sheet reaching nearly $4.5 trillion, the supply of reserves has grown far beyond pre-crisis levels, and the Fed needed a new tool to lift short-term rates. The overnight reverse repo facility was initially conceived as the tool to do the job, with some uncertainty as to whether it would be an appropriate floor on interest rates versus the rate offered on IOR (interest on reserves).
To give more detail, the reverse repo facility (launched in 2013) offers a secured, collateralized lending rate (the lower end of the federal funds rate band) at which institutional investors can park money overnight and earn a return. Only banks can access the alternative IOR rate, which makes up the higher end of the federal funds rate target band; the reverse repo facility is primarily accessed by money market funds and some dealers.
As a result, the federal funds rate (an unsecured dollar money market rate) bounces around between the IOR rate and the reverse repo rate.
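A toy sketch of the corridor mechanics just described, using the band cited above (a 0.75% floor via the reverse repo rate, a 1% ceiling via IOR). The clamp below is an idealization of the arbitrage logic, not a model of actual Fed operations:

```python
# Idealized rate corridor: reverse repo (RRP) rate as floor, IOR rate as ceiling.
# Rates match the target band cited in the article, not current policy settings.
RRP_RATE = 0.75  # % floor: money funds and dealers can park cash here risk-free
IOR_RATE = 1.00  # % ceiling: banks can always earn this on reserves at the Fed

def effective_fed_funds(quoted_rate: float) -> float:
    """Arbitrage pushes an unsecured market rate back inside the corridor:
    lenders won't accept less than the RRP floor, and banks won't pay more
    to borrow than they earn on reserves."""
    return min(max(quoted_rate, RRP_RATE), IOR_RATE)

for quoted in (0.60, 0.85, 1.10):
    print(f"quoted {quoted:.2f}% -> effective {effective_fed_funds(quoted):.2f}%")
```

As the next paragraph notes, quarter-end flows can overwhelm this arbitrage in practice and push the printed rate below even the floor.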
While it's completely clear now that the reverse repo facility has been successful in setting a floor for the federal funds rate, this quarter-end window dressing issue is one small hiccup. For instance, the federal funds rate actually fell below the floor for the Fed's target range (the reverse repo rate) at the end of January 2016.
It's clear that U.S. subsidiaries of European banks become especially active in U.S. money markets, including the Fed's reverse repo facility, at quarter-end. One can argue that month-end portfolio rebalancing also plays a role in the demand for money market assets on month-ends.
Fortunately, FOMC meetings are always scheduled in the middle of the month, and the month-end and quarter-end effects are short-lived, lasting only one day.
While it remains a nuisance for the Fed's ability to set interest rate policy in the U.S., there's no doubt that the window dressing phenomenon also creates financial stability issues for Europe. As the solvency of European banks has been a constant concern since the European sovereign debt crisis, moving to daily averaging of regulatory capital measures should be a no-brainer for regulators to deter window dressing and ensure that European banks adhere to capital standards throughout the entire year.
European regulators have recently made some minor improvements, including moving to a trailing average of quarter-end dates for their capital standards, but this is still not close to sufficient.
As of today, the fed funds futures market is pricing in roughly two more hikes this year. For the sake of helping monetary policymakers to have more control over the fed funds rate as they continue to normalize policy and improving European financial stability, European bank capital regulators should really get serious about adopting daily averaging.
|
464c9f2b85b69b067aeec7c74ed5612e | https://www.forbes.com/sites/jonisweet/2020/05/22/travel-increases-happiness-study-finds/?sh=54b523ff6793 | Travel And Exploration Spark Happiness, Study Suggests | Travel And Exploration Spark Happiness, Study Suggests
New research finds a link between visiting new places and enhanced happiness. Getty
Ever wonder why racking up those passport stamps makes you feel so invigorated?
Or why you’ve been feeling so down while stuck at home during the pandemic?
A new study has found a previously unknown link between filling your days with diverse, novel experiences—anything from exploring a new neighborhood in your city to taking a cross-country road trip—and enhanced happiness and wellbeing.
What’s more, the research shows that there may be other ways you can boost your bliss when travel is out of the question. Here’s what the science says about the connection between joy and exploration.
The relationship between travel and happiness
For a study published in the journal Nature Neuroscience on May 18, 2020, researchers from New York University, Columbia University and the University of Miami collected data on 132 people in New York City and Miami for three to four months prior to the COVID-19 pandemic. The participants, who included 90 women and 42 men, ranged in age from 18 to 31 years old.
Participants first went to the laboratory to complete baseline assessments that measured depression and anxiety. Then, researchers asked them to install a geolocation-tracking app on their smartphones and respond to questionnaires about their moods via text messages throughout the study. After the mobile-tracking period, the participants returned to the laboratory to repeat the initial depression and anxiety questionnaires.
After analyzing the data, researchers found that people tended to have more positive emotions, such as “happy,” “excited,” “strong,” “relaxed” and “attentive,” when they visited a variety of places in a day and spent roughly equal proportions of time in those destinations.
Toward the end of the research, about half of the participants also underwent MRI scans so the researchers could see if the connection between exploration and positive emotions had any relationship to the activity within the brain.
The MRI results demonstrated that a strong association between positive emotions and diverse experiences correlated with activity in the hippocampus and the striatum—the parts of the brain that process novelty and reward. The research echoed the findings of previous studies that found similar results in animals.
"These results suggest a reciprocal link between the novel and diverse experiences we have during our daily exploration of our physical environments and our subjective sense of wellbeing," said Catherine Hartley, an assistant professor in New York University's Department of Psychology and one of the paper's co-authors, via a statement on the research.
Mood might motivate us to explore
Switching up your scenery can lead to a better sense of wellbeing, science shows. Getty
The report seems to indicate that the novel and diverse experiences people have when they explore may lead to enhanced happiness—a finding that rings true for anyone who loves to travel.
However, the data may also indicate the reverse—that people are more motivated to fill their days with new, rich experiences and a variety of locations when they’re in better moods.
“Collectively, these findings show the beneficial consequences of environmental enrichment across species, demonstrating a connection between real-world exposure to fresh and varied experiences and increases in positive emotions,” said co-author Aaron Heller, an assistant professor in the University of Miami’s Department of Psychology.
Getting to the bottom of whether happiness drives us to see new places or whether diverse experiences create a state of joy will require more research, the authors say. However, the findings show an undeniable connection between exploration and feeling our best.
How to get the benefits of exploration during the pandemic
Discover fresh new experiences to feel better during the pandemic. Getty
With coronavirus prevention efforts limiting movement for many people around the world, is it impossible to reap the psychological benefits of traveling right now? Not necessarily, the researchers say.
The results of the study suggest that even small changes in our daily environments and physical and mental routines may provide similar beneficial effects of exploration.
You don’t necessarily need to hop on an airplane and immerse yourself in a different culture to enhance your happiness. Simply switching up your exercise routine, like skipping your morning run in favor of doing yoga at home, or taking a different route to the supermarket or drugstore may improve your emotional state. You could also pick up a new hobby in a different space. Strum a ukulele on your porch, strap on a pair of rollerblades and zoom through the park or set up a tent for a night of backyard camping.
The life-affirming effects of travel are waiting for you when all of this is over. In the meantime, the more you can break up the monotony of the day with fresh, interesting experiences and safe exploration of new environments, the better you may feel.
|
1f8b225335a086e88fb7745cd5359d66 | https://www.forbes.com/sites/jonisweet/2020/09/26/how-hotels-are-helping-get-out-the-vote-in-the-2020-election/ | How Hotels Are Helping Get Out The Vote In The 2020 Election | How Hotels Are Helping Get Out The Vote In The 2020 Election
Hotels are helping travelers get ready for the 2020 election. getty
When you think of hotels, you probably imagine snuggling into a freshly made bed piled high with pillows, indulging in room service and letting your worries melt away. But securing the future of our democracy? Yep, you can now add that to the list, too.
Hotels around the country are empowering guests to take action this election through creative programs and packages. These hotels don’t care if you’re Democrat, Republican, Independent, or of any other political persuasion—they just want to make sure all their guests have the tools and info they need to cast their ballot. And some are even offering special incentives to guests who show up wearing their “I voted” stickers.
Take a look at how hotels are helping get out the vote in the 2020 general election.
Hotel Figueroa's Gran Sala ballroom will become a polling place. Josh Telles
Hotel Figueroa
Amid the 100th anniversary of the ratification of the 19th Amendment, Hotel Figueroa in Los Angeles has decided to become an official L.A. County voting center from Oct. 30-Nov. 3. The progressive hotel was built by women, for women, nearly a century ago, so getting involved in the 2020 election felt like another way to stay true to its feminist roots. Voters will be able to cast their ballots at Hotel Figueroa’s Gran Sala ballroom, a spectacular 2,100-square-foot space with arched windows, ironwork chandeliers and the building’s original limestone fireplace. Who knew a polling place could be this posh?
Extended Stay America has turned its hotels into virtual voter registration centers. Extended Stay America
Extended Stay America
Extended Stay America is making it easier than ever for guests to get ready for the election. Each of its 561 company-owned hotels, scattered throughout 41 states, is a virtual voter registration center through the brand’s “Stay Counted” campaign. The hotel has trained its associates to serve as voter registration ambassadors who can answer questions about the process. What’s more, Extended Stay America is also giving guests who live in states that don’t offer online voter registration access to printers and stamped envelopes to mail in their applications.
Hamilton Hotel's Suffrage Suite pays tribute to women's right to vote. Hamilton Hotel
Hamilton Hotel
Hamilton Hotel in Washington, D.C. has already been paying tribute to women’s right to vote through its one-of-a-kind Suffrage Suite, unveiled a couple months ago. The history exhibit-style hotel room is themed around the 19th Amendment, complete with framed copies of Woman's Journal and Suffrage News (a women's rights periodical), vintage photos of women casting ballots in 1917, and stories of change-makers, like Susan B. Anthony, Sojourner Truth, and the late Supreme Court Justice Ruth Bader Ginsburg. But now, the hotel is making sure all its guests are plugged into our democracy through iPads that connect to When We All Vote’s voter registration page, located in every room. The landing page gives guests info on registering to vote online or in person, along with key dates and resources for every state.
Guests can pick up election-themed yard signs from Bunkhouse's Austin Motel. Bunkhouse
Bunkhouse
The design- and music-focused boutique hotel brand Bunkhouse wants to make sure members of its community have all the info they need to prepare for the upcoming election. It recently rolled out a one-stop shop of online voting resources anyone can use to see if they’re registered to vote, check out state-by-state absentee ballot deadlines and early voting calendars, find info on polling place locations (and the documents you may need to bring with you on Election Day) and more. If you happen to be in Austin, Texas, on Oct. 3-4, you can also swing by Bunkhouse’s cafe, Jo’s Coffee, for an in-person voter registration drive and a complimentary Topo Chico. Score!
InterContinental Miami
Located near multiple early voting sites, InterContinental Miami is leveraging its strategic location to encourage Miami-Dade locals to vote with a special staycation package. It’s treating guests who vote ahead of Election Day to free early check-in and exclusive amenities, like access to a South Beach IHG partner hotel and beach club, two beach lounge chairs, an umbrella and on-site activities and classes. The “Early Voting, Early Check-in Election Staycation” package, which starts at $185 per night, is available Oct. 19 to Nov. 30.
Look out for a 300-pound chocolate "Vote!" sculpture at Hard Rock Hotel & Casino Atlantic City. Getty Images
Hard Rock Hotel & Casino Atlantic City
The Hard Rock Hotel & Casino Atlantic City is reminding guests about the upcoming election in the sweetest way possible: a giant chocolate “Vote!” sculpture. The massive dessert, set to go on display in October, will clock in at 300 pounds and stand 4 feet tall and 5 feet wide. The brains behind the display is executive pastry chef Thaddeus DuBois, who was the executive pastry chef at The White House under President George W. Bush’s administration. DuBois, plus three other team members, will spend approximately 80 hours crafting the eye-catching reminder to participate in the upcoming election.
Virgin Hotels is helping guests register to vote right through their website. © 2015 Bloomberg Finance LP
Virgin Hotels
Virgin Hotels is reminding guests that it can be as easy to prepare for the election as it is to book a hotel room with a new “Register to Vote” button at the top of their website. The link takes guests directly to Vote.gov, where they can choose their state and see detailed guidance on how to register to vote, mail-in and in-person registration deadlines, and other important info to get ready for Nov. 3.
Hyatt Centric Las Olas Fort Lauderdale
Proof of voting comes with special perks at the Hyatt Centric Las Olas Fort Lauderdale, Las Olas’ first new hotel in more than 70 years. The hotel’s “Exercise Your Right & Stay The Night” staycation package, exclusively for guests who have participated in the election, gets you a second night for $59 (in honor of the 59th presidential election). Plus, you’ll score your first drink at the on-site Harborwood Urban Kitchen & Bar for just $5.90. Swing by Harborwood on Election Night for whiskey specials—a nod to President George Washington, who distilled his own whiskey and even “plied voters with booze.”
|
d98a53fc7175ba4424248a003d8ee0e5 | https://www.forbes.com/sites/jonisweet/2021/12/28/11-skincare-myths-you-should-stop-believing-according-to-dermatologists/?sh=b744b233611a | 11 Skincare Myths You Should Stop Believing, According To Dermatologists | 11 Skincare Myths You Should Stop Believing, According To Dermatologists
Here's the truth behind 11 skincare myths. getty
Everyone wants a clear, radiant complexion. But getting there might depend more on your ability to discern fact from fiction than how diligently you follow a cleansing regimen or how much you spend on products. The fact is that a lot of skincare advice just doesn’t work, and some skincare myths can even cause harm.
Let’s get to the truth behind common skincare tips. Here are 11 skincare myths you should stop believing, according to top dermatologists from across the country.
Myth: Drinking water keeps your skin hydrated.
Fact: “There is no evidence that drinking more or less water is helpful or harmful to your skin. While drinking more water can be beneficial for other health conditions, water does not automatically get absorbed by your skin when you drink it. It hydrates our cells as it is absorbed by the bloodstream and filtered by the kidneys, which does help hydrate our bodies overall. However, if you are severely dehydrated, that will obviously take a toll on your skin, as well as the rest of your body. The best way to keep your skin hydrated is to avoid dry air (or use a humidifier), use a gentle cleanser, and use a moisturizer daily or ingredients that help keep moisture locked in your skin barrier, like hyaluronic acid,” said Dr. Howard Sobel, attending dermatologist and dermatologic surgeon at Lenox Hill Hospital in New York and founder of Sobel Skin.
Myth: Not washing your face causes acne.
Fact: “Hygiene doesn't play a role in the development of acne. Acne involves oil production, bacteria, clogged pores, and inflammation, with hormones and stress playing a significant role and (to a lesser extent) diet for some people. Not washing your face doesn't help your situation but it certainly doesn't cause acne,” said Dr. Peterson Pierre, dermatologist and founder of the Pierre Skin Care Institute in Westlake Village, California.
Skincare myth: Spray tans offer protection from sunburn. getty
Myth: A spray tan protects your skin from sunburn.
Fact: “A spray tan simply changes the color of your skin, nothing more. There is no UV protection in having darker skin cells. Like the mythical ‘base tan,’ many people still mistakenly believe that a spray tan acts as a shield against sunburn. In a way, I think it’s worse because you won’t easily see your skin turning red as it sunburns,” said Dr. Stuart Kaplan, a Beverly Hills dermatologist and founder of the skincare line Kaplan MD.
Myth: You need to exfoliate your skin.
Fact: “A common skincare myth that I hear all the time is that you need to exfoliate. Your skin naturally sheds its superficial keratinocytes about once per month. You do not need to buy exfoliators, or undergo peels, facials, or dermabrasion to exfoliate. And you definitely do not need to use anything abrasive on your skin to achieve this, as it happens on its own,” said Dr. Anna H. Chacon, dermatologist on the advisory board of Smart Style Today.
Myth: Natural, botanical skincare products are better for your skin.
Fact: “One of the biggest myths I encounter in dermatology and medicine in general is that natural and organic products are safer. Natural-based skincare products are often unregulated and tend to contain botanicals and essential oils that can lead to significant allergic contact dermatitis in some people. I always give the example of organic poison ivy or snake venom—just because it comes from nature doesn't mean it's safe or not toxic,” said Dr. Susan Bard, dermatologist at Vive Dermatology Surgery & Aesthetics in Brooklyn, New York.
Skincare myth: Eye creams don't do anything. getty
Myth: Eye creams don’t do anything.
Fact: “An eye cream can have many benefits if it contains the right ingredients and is formulated for your specific skin concerns. If your concern is dark circles or puffiness due to fatigue, an eye cream with caffeine can definitely help control inflammation and make your under eyes appear brighter. However, caffeine alone won’t do the trick, and it should be combined with smoothing, hydrating, and brightening ingredients, such as hyaluronic acid, vitamin C, and retinol, as the combination can eliminate fine lines, wrinkles, and dark circles, and help give the skin an overall youthful appearance,” said Dr. Sobel.
Myth: Wounds need to breathe to heal.
Fact: “There is good evidence to support that wounds should be covered and kept moist with products like petroleum jelly (or Vaseline) in order to heal. Letting a wound dry out will create a crust and this may actually impede wound healing and worsen the appearance of the final scar. Keeping a wound covered will also help protect it from infection,” said Dr. Juliya Fisher, dermatologist at JUVA Skin and Laser Center in Manhattan.
Myth: You don’t need a retinol until age 50.
Fact: “Retinol has often been referred to as the ‘gold standard of skincare’ and will continue to stay that way in the new year. Using a retinol can increase collagen production and skin cell turnover, help treat acne, unclog pores, minimize fine lines and wrinkles, and even out skin tone. It is ideal to start using retinol in your mid-late 20s in order to prevent damage from occurring. After all, it’s easier to prevent a wrinkle than get rid of one! Start introducing retinol slowly into your routine about two or three times a week. After a few weeks, you can start using it almost every day,” said Dr. Sobel.
Skincare myth: You only need sunscreen in the summer. getty
Myth: There’s no need for sunscreen in the fall or winter.
Fact: “Many people think you only need sunscreen in the summer, but this is a myth. The ultraviolet (UV) rays that cause sunburn are not as strong in the winter, but they are always present. The UV rays that cause fine lines, wrinkles, and skin hyperpigmentation are present year round and it’s important to apply sunscreen throughout the year. UV rays even penetrate clouds, so you should plan on wearing sunscreen every day if you plan on being outdoors,” said Dr. Debra Jaliman, assistant professor of dermatology at Icahn School of Medicine at Mount Sinai in New York and author of the book, “Skin Rules: Trade Secrets from a Top New York Dermatologist.”
Myth: Toners are a necessary part of an acne skincare regimen.
Fact: “Acne-prone people are often looking for products to combat their oily skin. Toners are touted as a way to cleanse the skin of excess oil after washing. However, washing with a gentle cleanser and water is adequate to thoroughly cleanse the face. You do not need your skin to be 100-percent squeaky clean and stripped of all its natural oils. Historically, toners were often formulated with alcohols, which produce drying effects that compromise the skin and cause free-radical damage. Toners that contain alpha and beta hydroxy acids can work to exfoliate the skin and potentially minimize acne breakouts, but these ingredients are often already included in acne washes,” said Dr. Donna Hart, dermatologist at Westlake Dermatology in Cedar Park, Texas.
Myth: Exfoliating devices should be used every day.
Fact: “Exfoliating devices can be a good addition to your skincare routine, but the models with spinning brush heads can be overused and actually be irritating. I had a patient that used one daily to help with oily complexion and acne, only to have it cause his skin to become extra dry and irritated. Devices like these should be used only a couple of times per week for some people and others can skip them completely, but I would not recommend you using them every day consistently,” said Dr. Todd Minars, dermatologist at Minars Dermatology in Hollywood, Florida, and assistant clinical professor of dermatology at the University of Miami School of Medicine.
|
5e722f1a263ba91e5982b4b82d73a443 | https://www.forbes.com/sites/jonmarino/2020/04/28/tech-giants-lag-behind-zoom-in-video-chat-wars/ | Tech Giants Lag Behind Zoom In Video Chat Wars | Tech Giants Lag Behind Zoom In Video Chat Wars
CARACAS, VENEZUELA - APRIL 23: Venezuelan Karateka Antonio Jose Diaz Fernandez says goodbye to his students after giving his karate class from his dojo during COVID-19 lockdown on April 23, 2020 in Caracas, Venezuela. Unable to receive students, he decided to broadcast lessons for his students through Zoom. Diaz Fernandez has been dominating the kata division scene in the last 20 years. He is training to qualify for Tokyo Olympics, which he announced will be his last international competition. (Photo by Leonardo Fernandez Viloria/Getty Images) Getty Images
From game night to work calls to funerals and weddings, the entire world has been forced to substitute videonetworking for in-person exchanges, and tech companies large and small are jockeying for a bigger piece of your screen time.
In terms of its stock performance, Zoom, the San Jose-based video chat company, is out to a big lead; shares have more than doubled in a 2020 that has proved perilous for most market indices. However, a growing roster of global investment banks has begun directing staffers to other video services, citing privacy issues. Zoombombing, where unsolicited users hack their way into an unsuspecting feed to drop offensive or inappropriate content, has become a challenge to the company’s reputation. Bigger rivals sense the time is now to double down on competing products.
Zoom headcount rose nearly 20% to begin the year as it staffed up to satisfy unprecedented engagement. Thinknum Alternative Data
In spite of privacy concerns, Zoom is drawing high ratings from users in the Google Play and Apple stores and scaling up staff in anticipation of further demand. Thinknum Alternative Data shows that headcount, tracked through the company's LinkedIn community, rose nearly 20% this year. Job postings at Zoom rose sharply as the pandemic forced more people indoors, although hiring appears to have tapered off more recently.
The complaints and accusations being lobbed at Zoom aren’t unique to its chat platform - recently, cyber researchers identified a flaw in Microsoft Teams that exposed user accounts to hackers.
And the data reflects that Teams is Microsoft’s bet on its future place in telework, despite its very big deal nearly nine years ago to take over Skype, then the top brand in the space.
Microsoft Teams' networking and collaboration software is displacing Skype as its business services product. Thinknum Alternative Data
Microsoft Teams’ apps have seen downloads increase since March as more businesses nudge staffers onto a trusted tech platform’s service; Apple Store ratings (tracked above in our chart) rose about 84% over that stretch.
Microsoft is one of the legacy tech companies dedicating its efforts to enterprise clients - and it doesn’t want S&P 500 leaders confusing its services for a product with a different brand, especially at a time when privacy and security are at a premium. It’s apparent in how it channels people to Skype for Business, which it advertises as a product for 250 users or fewer, compared to Teams’ capacity for 10,000. The Skype for Business application, a separate platform, has earned only a few thousand new ratings in the Apple Store, compared to more than 200,000 new reviews for Teams - it looks as if there’s no Skype in Teams, at least for CEO Satya Nadella.
Facebook's play at the social end of the videonetworking marketplace gives it an opportunity to pitch its services to an enormous existing user base. Thinknum Alternative Data
It’s impossible to ignore Zoom’s trajectory, or Teams’ triumph - especially if you’re Mark Zuckerberg. Facebook is launching Messenger Rooms, which can host up to 50 people, according to a Wall Street Journal report, and geared more toward social gatherings than professional networking. It isn’t Messenger’s first advance into Zoom’s social marketplace - Thinknum Alternative Data tracks its Messenger Kids app, which has seen a 15% rise in ratings submitted through the Google Play platform.
And, on the major app platforms, the Kids app boasts a rating higher than 4.2-out-of-5, signaling user satisfaction.
Plenty of legacy tech players are lunging for the business videoconferencing marketplace; Zuckerberg’s move to accommodate people in their recreational time may be an opportunity to build other key lines of the company’s business, as well.
As the likelihood grows that more Americans will face staggered back-to-work schedules - particularly office workers, who are among those most vulnerable to COVID-19 in enclosed environments - videonetworking has the potential to become a booming business for companies that can sell enterprise licenses. But user satisfaction, whether it’s for a publicly-listed business or just margaritas with Mom, could wind up dictating market winners and losers. In terms of stock price and happy customers, Zoom is on top right now, with average ratings higher than 4.4 on both the Google and Apple platforms.
|
f0da80fa52b5ee56188ae086cd31d0ca | https://www.forbes.com/sites/jonmarkman/2016/06/13/facebook-microsoft-and-the-cloud-under-the-sea/ | Facebook, Microsoft And The Cloud Under The Sea | Facebook, Microsoft And The Cloud Under The Sea
Cloud computing is vital to the growth of Facebook and Microsoft. So recently they announced they’re teaming up to build a 4,100-mile transatlantic underwater cable between Europe and the U.S. eastern seaboard.
You probably think of cloud computing as space-age, cutting-edge. The idea that in this day and age powerful data centers are still linked by miles of bulky cable seems anachronistic. Yet these cables are vital. The proposed 160-terabit-per-second cable will connect massive data centers in northern Virginia to Bilbao on the northern coast of Spain. From there, Facebook and Microsoft will build junction points with new cables spider-webbing to Africa, the Middle East and Asia, where they will connect with other massive data centers.
image courtesy Techradar
Cloud computing, despite its connotation, is hopelessly stuck to the ground with a web of cables, data centers and technology service providers. Satellites are "neither fast nor good enough," industry analyst Alan Mauldin of TeleGeography told USA Today. "There's a huge delay." In contrast, cables provide near-instantaneous reactions, are more robust and are cheaper. As technology companies increasingly become international, "the cloud is under the ocean," the analyst said.
Facebook runs a quintessential cloud business. Its network is device agnostic, robust and fast. People in Mumbai sifting through their news feed and munching aloo parathas demand the same low-latency experience as San Francisco night-clubbers posting pictures to Instagram. Facebook’s secret is proximity. It keeps its data centers close to its customers because latency increases as data moves away from its endpoint. And with Facebook’s public ambitions to grow the business into the developing world, it needs its data centers closer to the masses in Africa, the Middle East and Asia.
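The physics behind that proximity argument is easy to put numbers on. A minimal sketch, assuming light in glass fiber travels at roughly two-thirds the speed of light in vacuum (about 200,000 km/s), with rough route lengths for illustration and the 4,100-mile Marea route converted to about 6,600 km:

```python
# Back-of-envelope fiber latency: route length sets a hard floor on round-trip time.
# Light in glass fiber travels at roughly 2/3 c, i.e. about 200,000 km/s.
FIBER_SPEED_KM_S = 200_000

def round_trip_ms(route_km: float) -> float:
    """Best-case round-trip time over a fiber route, ignoring switching delays."""
    return 2 * route_km / FIBER_SPEED_KM_S * 1000

# Route lengths below are rough assumptions for illustration.
routes = [("same metro", 100), ("U.S. coast to coast", 4_500), ("Marea transatlantic", 6_600)]
for name, km in routes:
    print(f"{name:>20}: ~{round_trip_ms(km):.0f} ms RTT")
```

Every extra thousand kilometers of route adds about 10 ms to a round trip, which is why Facebook wants data centers near its users rather than an ocean away.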
Microsoft is looking for the same type of reach for its cloud services Bing, Xbox, Office and the Azure public cloud. Securing a super fast data connection between hubs in the US and Europe means multinational corporate clients can provide better user experiences overseas.
And it is a big, varied business. Research outfit Gartner projects the public cloud computing sector will grow to $204 billion in 2016, up 16.5% from 2015. To put the size and fragmentation of the sector in perspective, Amazon Web Services is the leader, and it’s projecting about $10 billion in sales for 2016. That leaves plenty of space in the market for Azure, IBM and Alphabet to grow their nascent public cloud businesses.
There is also lots of opportunity for the lesser-known companies that own, operate and maintain data centers, and for IT service providers like Level 3 Communications that run the cable to and from those centers. In fact, Level 3, having swallowed most of the fiber optic cable glut of the 1990s (including Global Crossing, a firm that made transatlantic cables), may be uniquely positioned because of its 350-facility footprint.
While the Microsoft-Facebook cable, called Marea, is only about the width of an ordinary garden hose, its capacity is massive, at 160 terabits per second. Currently there are about 337 terabits of potential capacity across the Atlantic. When the Marea cable comes online in 2017, “this one cable will be able to do almost half of what all the cables do” combined, Mauldin said.
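Mauldin’s “almost half” is simple arithmetic on the figures above; a quick check:

```python
# Quick check of the "almost half" claim using the capacities cited above.
marea_tbps = 160     # Marea design capacity
existing_tbps = 337  # existing potential transatlantic capacity
print(f"Marea vs. all existing cables: {marea_tbps / existing_tbps:.0%}")  # ~47%
```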
There are some challenges. Last year the European Union Court of Justice overturned the Safe Harbor agreement that allowed US firms to transfer data between the US and Europe unrestricted. Its messy replacement, Privacy Shield, gives each of the 28 EU member countries the facility to individually negotiate data transfer agreements with US companies.
Cloud computing hardly conjures up the image of cables snaking along the ocean floor yet this type of infrastructure is just as important as the fancy user facing software. The Achilles heel of the cloud is latency. Cables, data centers and IT service providers fix latency. The sector is young and ripe with many investment opportunities in all parts of the value chain.
To learn more about the fascinating world of undersea telecom cables, including why sharks like to gnaw on them, click here.
|
8500fc59576f9951ed016393a8f99b95 | https://www.forbes.com/sites/jonmarkman/2016/08/27/uber-chooses-self-driving-cars-now/ | Uber Chooses Self-Driving Cars Now | Uber Chooses Self-Driving Cars Now
A Silicon Valley software business is launching self-driving taxis later this month. It has no choice. In the new world order of software there are two choices: Quick or dead.
Uber’s deal-making chief executive Travis Kalanick has always been impatient. He began spreading the seed of his ride-hailing startup around the globe long before early efforts flowered. He didn’t wait for regulators’ blessings or the grievances from traditional cabbies to be resolved. That’s the nature of software development: Get something out there, grab mindshare, fix the messy bits later in real time. Or die.
A picture taken on January 18, 2015 shows Travis Kalanick, co-founder of the US transportation network company Uber, speaking during the opening of the Digital Life Design (DLD) Conference in Munich, southern Germany. The car-sharing start-up Uber can create as many as 50,000 jobs in Europe this year as part of a 'new partnership' with European cities, its chief executive said at the conference in Munich. AFP PHOTO / DPA / TOBIAS HASE +++ GERMANY OUT (Photo credit should read TOBIAS HASE/AFP/Getty Images)
So last week Uber announced several sensor-laden, self-driving Volvo XC 90 SUVs would hit the crowded streets of Pittsburgh. That is five years earlier than a similar plan from Ford (F) and at least three years ahead of Tesla’s (TSLA) master plan. Most important, it uproots the plans of Alphabet (GOOGL), the company Uber fears most.
Alphabet saw promise in Uber way back in 2013 and invested aggressively through Google Ventures. It seated a Googler on the Board and convinced the startup to use its Maps software. Then things changed. As Bloomberg reports: “The minute it was clear to us that our friends in Mountain View were going to be getting in the ride-sharing space, we needed to make sure there is an alternative [self-driving car],” says Kalanick. “Because if there is not, we’re not going to have any business.” Developing an autonomous vehicle, he adds, “is basically existential for us.”
In July Uber announced it would invest $500 million in maps development. Not coincidentally, it also began secretly talking to Otto, a company started by ex-Googlers Anthony Levandowski, Lior Ron, Claire Delaunay and a crew of Tesla and Apple (AAPL) vets. Otto currently has 91 employees and builds proprietary LiDAR hardware and computer software used with existing freight trucks to make them autonomous. Uber formalized the acquisition of Otto on Thursday in a deal that could be worth $680 million.
The deal brings a slew of the best minds in the field of autonomous vehicles to Uber. Prior to Otto, Levandowski was one of the original engineers at the Google self-driving car program. Ron spent five years heading Google Maps, and Delaunay was the robotics lead. They will join more than 50 researchers hired away from the Carnegie Mellon University robotics program when Uber set up shop in Pittsburgh last year.
There is an economic incentive to the push for self-driving cars sooner than later. Current ride-hailing fleets have average consumer costs of $1.30 per mile. Kalanick says eliminating the driver will see mobility costs fall so fast that travel by ride-hailing services will soon be significantly cheaper than owning a private car.
Consider the contrast: Automakers see the existential threat and they are scrambling to grab fleet sales to maintain their hardware business. Uber sees a high margin software service business built around mobility. That falls in line with its skill set. Given the rewards, it’s pushing the auto industry in that direction at a startling pace.
There is another incentive. The nature of the software business is winner take most. Uber understands it can’t afford to stop moving forward until all of its software competitors have been vanquished. Uber is private so there is no investment opportunity there. Among component makers, high-end imaging chip manufacturer Nvidia (NVDA) is the best bet on pullbacks.
|
a360bf0f0f76de074b09dec8dd936d09 | https://www.forbes.com/sites/jonmarkman/2018/06/28/never-bet-against-the-ambitions-of-elon-musk/ | Never Bet Against The Ambitions Of Elon Musk | Never Bet Against The Ambitions Of Elon Musk
Tesla is set to lay off 9% of its 40,000 employees. But Elon Musk doesn’t seem to be worried about the 425,000 orders that are currently backlogged. That’s because there’s an even more ambitious goal he wants to hit.
Musk says existing Teslas will begin the transition to full, self-driving vehicles in August.
This evolution should not surprise Musk followers. He has been teasing full-blown autonomy for years.
CHICAGO, IL - JUNE 14: Engineer and tech entrepreneur Elon Musk of The Boring Company listens as Chicago Mayor Rahm Emanuel talks about constructing a high speed transit tunnel at Block 37 during a news conference on June 14, 2018 in Chicago, Illinois. Musk said he could create a 16-passenger vehicle to operate on a high-speed rail system that could get travelers to and from downtown Chicago and O'hare International Airport under twenty minutes, at speeds of over 100 miles per hour. (Photo by Joshua Lott/Getty Images)
But it should be a lesson for stock traders. Don’t bet against smart people. And learn to see the bigger picture.
However, when it comes to Musk’s historically lofty ambitions, the usual disclaimers apply …
There is no guarantee Musk will meet his summer deadline. In his storied career, one of the unfortunate mainstays of the South African-born billionaire is his penchant for missing targets.
He dreams big … fiddles with details … and often comes up short on promises.
Bearish investors have seized on this shortcoming.
They characterize him as a charlatan, an “all hat, no cattle” figure who has duped the investing class with wide-eyed dreams. Missed deadlines and production targets aside, the naysayers could not be more wrong on this front.
Musk is both a gifted storyteller and an engineer. The two are not mutually exclusive.
He was an early CEO of PayPal (PYPL), the secure online payment system, at the vanguard of the ecommerce movement. In 2002, he founded the aerospace company SpaceX. In 2003, he brought Tesla to life, a revolutionary electric car company. In 2006, he was instrumental in the creation of Solar City, a leading solar panel and battery development company.
In his spare time, Musk came up with an idea for Hyperloop, a vacuum-sealed tube capable of carrying passengers in magnetically levitating pods at 760 mph … a rocket system that would reduce travel time to one hour between any major cities in the world … and a plan to colonize Mars.
These are all really big ideas. They are easy to dismiss as impossible.
Except, SpaceX already has rockets that land vertically after carrying their payload to space. And Richard Branson, the British billionaire behind the Virgin companies, has a Hyperloop project ready for testing.
Musk’s ideas are not impossible. They’re visionary.
Getting cars to full-on autonomy will happen. And Teslas have already demonstrated advanced self-driving capabilities. It’s just that most people assumed this was years — perhaps decades — away.
I’m not suggesting that investors buy Tesla stock …
On the contrary, I believe Musk’s history of unfulfilled promises is a detriment to shareholders. Misses, even for big ideas, bring untenable volatility.
However, betting against Musk is short-sighted.
Tesla stock has record short interest. Those shares have been borrowed. Eventually they must be returned.
At current share prices, most bearish traders are losing money. And further good news from the company will spark more short-covering — which will bring the bears even more pain.
Musk knows this. He’s smart.
Also, investors should not lose sight of the bigger picture …
Self-driving cars are much closer than most people suspect. It is a game-changing development. It is also a big opportunity for investors.
Aptiv PLC is an offshoot of Delphi Automotive, the former General Motors subsidiary. The company now holds all the autonomous car bits and pieces that have been acquired throughout the years.
In 2015 Delphi bought Ottomatika, a self-driving car software startup. That business was spun out from Carnegie Mellon, a Pittsburgh school considered to be one of the premier robotics engineering institutions in the world.
In October 2017, Delphi acquired nuTonomy, another maker of self-driving software. It’s also the manager of a fleet of Singapore autonomous taxis.
Aptiv is building a significant advantage over its competition.
In addition to its software prowess, and the data the Singapore taxi fleet brings, the company has longstanding relationships with automakers.
It’s a foot in the door that should lead to new deals as traditional car companies play catch-up with Tesla.
In the interim, Aptiv’s current business is firing on all cylinders. In May, first-quarter sales surged 8% to $3.6 billion. Kevin Clark, the chief executive officer, noted strong gains in active safety, infotainment and vehicle electrification.
Shares reached a record high last week at $102.46. The price-to-earnings ratio is only 20.5x, and the market capitalization is $27 billion.
While that is the high end of similar companies, it is not expensive given the race toward full autonomy.
Skip Tesla, but buy Aptiv into weakness.
|
bf6ec284841942c417eb207f72d0cd40 | https://www.forbes.com/sites/jonmarkman/2019/04/28/amazon-rolls-a-trojan-horse-into-rival-retailers-stores/ | Amazon Rolls A Trojan Horse Into Rival Retailers' Stores | Amazon Rolls A Trojan Horse Into Rival Retailers' Stores
Amazon.com is making another huge power play in retail. And yet, nobody is paying attention.
Like most department store chains, Kohl's is rethinking its business model. Managers are closing and right-sizing stores and looking for new ways to generate store traffic.
Wisconsin-based Kohl's began accepting Amazon.com-returned items at select stores in 2017. Now it plans to sell more Amazon Echo Dots, Fire Sticks and tablets, too.
Amazon CEO Jeff Bezos laughs during an interview Tuesday, Oct. 6, 2009, in Cupertino, Calif. Amazon.com Inc. is cutting the price of its Kindle electronic-book reader yet again and launching an international version, in hopes of spurring more sales and keeping it ahead of a growing field of competitors. (AP Photo/Ben Margot) ASSOCIATED PRESS
The company is being praised for hopping into the new digital era. But its decision to work, rather than compete, with Amazon doesn't make it a "buy."
Amazon.com is a formidable foe in retail. We are tracking an open gain of 179% in our Shockwave portfolio, and there's plenty more upside ahead.
The company owns a vast cloud computing network with bespoke software to run its ecommerce, logistics and enormous supply chain.
This alone is a huge competitive advantage.
And that business, Amazon Web Services, is a fast-growing entity in its own right. Sales surged 41% to $7.7 billion in the first quarter of 2019.
It also has a 100-million-strong army of loyal customers. They like its services so much, they are willing to pay an estimated $6.4 billion in subscription fees just to be members … and to surrender their data, to boot.
The Kohl's deal is remarkable because Amazon.com is not a friend. It's the worst kind of competitor. It's a company less worried about profitability and more consumed with killing the competition.
I'm reminded of the deals Google struck with publishers to display their content for free in search. After several years, the powers-that-be at the New York Times, Wall Street Journal and Washington Post realized free content only really benefited Google.
Michelle Gass, CEO at Kohl's, is making the rounds in the financial press spinning a different narrative. In a recent interview with CNBC, she tells the company story of leasing space to Planet Fitness gyms inside 10 of the company's cavernous stores, and new deals with WW, the Weight Watchers rebrand, for wellness centers.
But she lights up when she speaks of Amazon. Not only will Kohl's take back returned, unpackaged items, it will also use its logistics networks to get the gear back into Amazon warehouses. Easy returns is one of the benefits of being a brick-and-mortar store, she says.
Well, that's true. Customers would much prefer to drop off items at stores rather than package and ship them.
Accenture, the global consulting firm, found that routing customers to drop-off locations can save companies millions in labor and fuel costs, according to a report in USA Today.
Returned items are Amazon's Achilles' heel. This is the one part of retailing it doesn't do well.
In an effort to increase foot traffic in Kohl's stores, Gass is forfeiting a key advantage, and she's providing a platform to sell more Amazon-branded electronics, too.
It's crazy. It makes no sense. All the longer-term benefits accrue to Amazon, not Kohl's. It's a huge power play, and it's not being reflected in the price of Amazon shares.
The stock has rebounded to the $1,850 level after trading all the way back through $1,400 in December. Shares trade at 46.2x forward earnings, although profitability has never been a key metric for this online retailer.
Amazon logged $232 billion in sales in fiscal 2018, a blistering 31% increase year-over-year. And there is no end in sight as the other aspects of its business, namely cloud and enterprise computing, hit full stride.
Based on sales growth alone, the stock could easily trade into the $2,400 level in 18 months.
|
e4fbcffa584429f428fdab90c3d002a8 | https://www.forbes.com/sites/jonmatonis/2012/04/02/watch-bitcoin-robbery-in-slow-motion/ | Watch Bitcoin Robbery in Slow Motion | Watch Bitcoin Robbery in Slow Motion
Bank robberies of the future may not reveal the traditional security camera shot of the ski-masked gunman; rather, we will watch them evolve slowly in front of our eyes as the money hops around the globe. It's not so much where the money is but when the money is. The public and transparent nature of the bitcoin transaction ledger ensures that all transactions are known by date, time, amount, and block number, although not necessarily by the who or the where. Contrary to conventional opinion, this is not a negative for the protocol, because bitcoin liberates cash by putting it online.
On March 1st, a total of 46,703 bitcoin worth $228,845 at the time was stolen from customer accounts at VPS hosting company Linode. As described in a Linode Security Incident Report:
This morning, an intruder accessed a web-based Linode customer service portal. Suspicious events prompted an immediate investigation and the compromised credentials used by this intruder were then restricted. All activity via the web portal is logged, and an exhaustive audit has provided the following: All activity by the intruder was limited to a total of eight customers, all of which had references to "bitcoin". The intruder proceeded to compromise those Linode Manager accounts, with the apparent goal of finding and transferring any bitcoins.
The victims were not exactly banks but, in the bitcoin world, they come pretty close to being banks because they hold significant quantities of deposited bitcoin for various purposes. Of course, we may never know who or how many individuals were involved in the heist, but that doesn't stop us from seeing how the loot was divvied up. The slow motion heist of bitcoin stored at Linode can be viewed by methodically clicking through web-based block chain information in a weird voyeuristic game of 'follow-the-money' (click on the dendrogram's orange circles to follow the money).
The 25,000 bitcoin in the real dendrogram example represent only a portion of the total 43,554 bitcoin stolen from leveraged trading house Bitcoinica that was transferred from servers at Linode to many IP addresses scattered around the world. Bitcoinica was by far the hardest-hit victim, and admirably it has pledged to cover all losses on behalf of its customers, which should give you an indication of its daily positive cash flow. Bitcoin mining pool Slush and the Bitcoin Faucet were two of the other theft victims.
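For the technically inclined, the same trail can be walked programmatically instead of by clicking. Below is a rough sketch, assuming the public blockchain.info JSON API (the endpoint shape here is my assumption for illustration, not something mandated by the block chain itself): given a transaction hash, it lists where the funds went, and each output address can be fed back in to trace the next hop.

```python
# A hedged sketch of programmatic 'follow-the-money', assuming the
# blockchain.info rawtx endpoint; any block chain explorer with a
# JSON API would work the same way.
import requests

def spend_targets(tx_hash: str):
    """Return (address, BTC amount) pairs that a transaction paid to."""
    tx = requests.get(f"https://blockchain.info/rawtx/{tx_hash}").json()
    # 'value' is denominated in satoshis (1 BTC = 100,000,000 satoshis)
    return [(out.get("addr"), out["value"] / 1e8) for out in tx["out"]]

# Each returned address can then be queried for its own outgoing
# transactions, tracing the stolen coins hop by hop around the globe.
```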
Does bitcoin possess the property of fungibility? I believe it does, through sufficient mixing, plausible offline transactions, and the absence of a software-enforced address blacklist. Just as we don't examine a gold Krugerrand to learn who previously held it, we don't do so with bitcoin. As some in the community have commented, obviously the lack of anonymity and untraceability will lead us straight to the thief's doorstep. Famously, Fergal Reid and Martin Harrigan have even observed that "the actions of many users are far from anonymous" in their 2011 research paper "Bitcoin is Not Anonymous".
So then, has the thief been apprehended yet? Not exactly, but that is because public traceability does not always equate to real-world identity, and therefore the transactions themselves are still reasonably anonymous. Reid and Harrigan state that they are not law enforcement officials and don't have subpoena power, but that sloppy thieves can indeed leave a digital trail, like an unmasked static IP address or a known public key, that would link them to a real-world identity. In other words, anonymity is not built into the protocol, as lead core bitcoin developer Gavin Andresen warns:
"Unless you are very careful in the way you use Bitcoin (and you have the technical know-how to use it with other anonymizing technologies like Tor or i2p), you should assume that a persistent, motivated attacker will be able to associate your IP address with your bitcoin transactions."
Andresen adds that multisignature capability is technically possible for bitcoin security purposes and is on the horizon in one form or another. Bitcoin private keys stored in a "hot wallet" in the cloud are like physical paper banknotes left on your kitchen table, and this really is an emerging policy-and-procedures issue for network security managers. Clearly, it's a whole new world for electronic money, especially when that money comes with the powerful irreversibility of cash. But that's not a bug -- it's a feature.
Follow author on Twitter.
|
22db6c45e0b562df8c54ba6c948284fd | https://www.forbes.com/sites/jonmatonis/2012/05/07/bitcoin-funded-debit-cards/ | Bitcoin Funded Debit Cards | Bitcoin Funded Debit Cards
Yes, it's entirely possible to fund your existing debit card, or credit card, with your accumulated bitcoin. And I don't mean that you are shipped a generic, low-limit prepaid VISA or Mastercard from some anonymous reseller. I mean that you convert bitcoin online to dollars or euros and the funds are available to spend with a card that you are most likely already holding in your wallet.
Why is this so significant? It's important because it leverages a little-known type of transaction that is available on the VisaNet system called 'Original Credit Transaction'. The other major card payment networks have a similar feature too. These transactions act like a refund or credit transaction when you return an item to a store, except that they don't have to be associated with an original purchase. Essentially, they enable your card to be a two-way payment device. Surprisingly, not many financial institutions have taken advantage of this feature yet, but I expect that to change.
Visa Personal Payments, already offered by financial institutions outside the U.S., became available in the U.S. market last year marking the first time that a major payment network has introduced a global requirement for account issuers to accept incoming funds. It's the technology behind now-merged P2P service providers ZashPay and Popmoney.
Previously, it was cumbersome for bitcoin account holders to transact in national currencies because they had to go through one or more exchanges and then wait further for funds to arrive in a bank account or other intermediary like the formerly bitcoin-friendly Paxum. Now these personal payments are being offered by e-currency exchanges as a way to provide easy worldwide access to e-currency account balances, most notably by AurumXchange. The digital currency exchange, operated by Dominica-based Aurum Capital Holdings, Inc., supports bitcoin as well as Liberty Reserve, Pecunix, Perfect Money, and c-gold, and it offers two choices for cashing out into a card-based product.
The first option is the Withdraw2Card service that does not require any sender identity verification. Requiring only the destination card number and expiration date (name and CVV code are not required), funds can be transferred to any credit or debit card in any country in the world. If the destination account currency is not dollars or euros then it will be converted to the native currency automatically. The service fee is $9 plus 1.99% (for MtGox USD), with a $1,000 maximum transfer amount, and you should not send more than the credit card's limit. The bitcoin portion of the transaction is accomplished through the use of redeemable coupon codes from the popular bitcoin exchanges that act as digital bearer certificates. According to AurumXchange, they plan to offer direct two-way convertibility for bitcoin in the near future so you won't need the redeemable code.
This service is ideal for regions of the world where a large majority of the population may not have bank accounts or where international wires are cost-prohibitive. AurumXchange's General Manager Roberto Gutierrez explains, "The service so far has been tremendously popular. Just counting countries alone where people don't have access to bank accounts or foreign wires are highly taxed or scrutinized, such as Africa, Brazil and China to name a few, we have processed over 3,000 orders since we started a few weeks ago. North American and European customers have been using the service quite a lot as well especially for small transactions that would otherwise be too expensive to conduct through means such as international wire transfers."
The second option is the AurumXchange Premium Mastercard issued through North Carolina-based Four Oaks Bank which comes with instant funds availability. After a $24.99 two-year membership fee, the card will be shipped for free anywhere in the world via first class mail.
OKPAY is another interesting provider in the bitcoin debit card space. They offer the OKPAY Debit Card which is issued by CSC24Seven.com Limited, a financial institution licensed by the Central Bank of Cyprus to issue cards. Founded in 2007, OKPAY, Inc. is subject to British Virgin Islands (BVI) regulation.
Now that they have completed their bitcoin integration into the OKPAY system, it is possible to fund your OKPAY account directly with bitcoin, withdraw via bitcoin, and use bitcoin as a payment option for purchases of goods and services. Although they do not offer the Original Credit Transaction feature on any card, they do provide timely and direct conversion of bitcoin to their proprietary Mastercard product.
By removing friction from the process, bitcoin becomes easier to spend overall because not every merchant will accept bitcoin directly for payment yet and not all transactions demand irreversibility and privacy. Logically as a consumer, you may still want your VISA chargeback rights for certain purchases. The Original Credit Transaction is an excellent way to leverage the legacy card payment network to facilitate the growth of the bitcoin network and these two exchangers are in the vanguard.
Follow author on Twitter.
|
160cd97b925fc08e8c53133929b815ae | https://www.forbes.com/sites/jonmatonis/2012/05/12/the-somali-american-remittance-dilemma/ | The Somali American Remittance Dilemma | The Somali American Remittance Dilemma
Image courtesy of MIT Technology Review
By threatening to close their Wells Fargo and U.S. Bancorp accounts this week, a group representing Somali Americans has brought the ongoing hawala remittance issue to a head. For months now, Somalis in Minnesota have been barred from making the small regular transfers to their family members in Somalia that they have been making for years.
According to American Banker, "Bank officials say they sympathize with the plight of the expatriates but that there is no clear way to process the payments comfortably within federal rules. The problem lies in Somalia's money services businesses. Remittance there is done through a loose network of MSBs known as hawalas. U.S.-based hawalas work with banks to wire the money to hawalas in Somalia." Since hawalas in Somalia are unregulated, the U.S. government worries that such intermediaries could assist in funding terrorism.
Unfortunately, it's not an isolated incident. This scenario is likely to happen more and more as onerous Bank Secrecy and USA Patriot Acts make it increasingly difficult for financial institutions to be in full compliance with anti-money laundering regulations. Instead of trying to comply, they are electing to opt out so as not to encounter heavy federal fines. It sure would be nice if the world had a decentralized peer-to-peer digital currency that could be transferred to mobile devices in a secure fashion.
Wait a minute! Doesn't bitcoin allow for rapid and trustworthy international value transfer? Isn't bitcoin fairly easy to obtain in the developed economies of North America and Europe? Doesn't Somalia have good telecommunications infrastructure supporting mobile phones?
Here's how the bitcoin money remittance process would work. A hard-working honest Somali American wishes to send the equivalent of $150 to his mother in Somalia, so he purchases bitcoin at one of the many exchanges that accept bank cash deposits in exchange for bitcoin. Alternatively, our would-be remitter could use the Bitcoin OTC (over-the-counter) exchange and arrange a person-to-person sale based on reputation history. Once the bitcoin is stored safely in the remitter's client wallet, he would ask the overseas recipient to generate a bitcoin receiving address using one of the many bitcoin wallet apps for Android. [Sorry but Apple's App Store is currently restricting bitcoin wallet apps with send or receive capability.]
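To make the receiving-address step concrete, here is a rough sketch of what a wallet app does under the hood, assuming the third-party Python ecdsa package; real wallets rely on hardened, audited implementations, so treat this purely as illustration.

```python
# A hedged sketch of bitcoin receiving-address generation (pay-to-pubkey-hash).
# Assumes `pip install ecdsa` and an OpenSSL build that supports ripemd160.
import hashlib
import ecdsa

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(payload + checksum, "big")
    digits = ""
    while n:
        n, r = divmod(n, 58)
        digits = B58[r] + digits
    pad = len(payload + checksum) - len((payload + checksum).lstrip(b"\x00"))
    return "1" * pad + digits  # leading zero bytes encode as '1'

# 1. New random private key on the secp256k1 curve
sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
# 2. Uncompressed public key: 0x04 || x || y
pubkey = b"\x04" + sk.get_verifying_key().to_string()
# 3. Address = Base58Check(0x00 || RIPEMD160(SHA256(pubkey)))
pubkey_hash = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
print(base58check(b"\x00" + pubkey_hash))  # an address starting with '1'
```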
After his mother in Somalia has received and confirmed the bitcoin transaction (approximately 10 minutes), she would be able to maintain the bitcoin balance or change it out into her local currency, the Somali shilling. Bitcoin exchangers are already springing up in many countries around the world including Brazil, Latvia, and Philippines. If it hasn't happened already, a savvy merchant in Somalia will start accepting bitcoin for Somali shillings. Or a traditional currency exchange dealer could get in on the action too -- the spreads are certainly there.
In September 2010, the mobile penetration rate in Somalia was estimated at 25.84% against a population of roughly 9.9 million. Since the financial flow would be principally in U.S. dollars to bitcoin to Somali shillings, several aggregators could make a market in bitcoin and then sell their bitcoin in the market to other intermediaries. All it takes is a few Somalia-based bitcoin outlets to open up their economy to the rest of the world economy.
As a distributed network, bitcoin possesses the capability to route around interference and disruption. In fact, this was a key design consideration as resiliency has grown to become an imperative for privacy-enhancing electronic cash. Its detractors remind me of the holy papacy being fearful of the printing press because it allowed for individual interpretation and diminished mankind's reliance on the anointed biblical teachers.
Follow author on Twitter.
|
0b1ea1433526fa837c95292e434ef38c | https://www.forbes.com/sites/jonmatonis/2012/07/12/kim-dotcoms-pretrial-legal-funds-would-be-safe-with-bitcoin/ | Kim Dotcom's Pretrial Legal Funds Would Be Safe With Bitcoin | Kim Dotcom's Pretrial Legal Funds Would Be Safe With Bitcoin
The Megaupload case may end up having a chilling effect on pretrial asset seizure. Yesterday Kim "Dotcom" Schmitz, founder of Megaupload, asked his Twitter followers for some better payment alternatives to credit cards and PayPal. The responses suggesting bitcoin came pouring in.
It's easy to see why he asked in the first place. After successfully launching Megaupload, Kim Dotcom's business enterprise was shut down by the FBI and his funds frozen over alleged copyright infringement, money laundering, and conspiracy. Also, PayPal has recently taken a stricter stance on file-hosting services due to piracy concerns. Kim Dotcom is launching a new online business, Megabox, in four to six months and he probably doesn't want to bother with the likes of PayPal.
However, there are two unique aspects of the bitcoin cryptocurrency for Kim Dotcom to consider -- an online payment method for customers and a reliable storage facility for his company's monetary assets.
On the first count, bitcoin could replace PayPal and credit cards which would increase the transactional privacy of his many loyal customers as well as dramatically reduce the processing fees that his company has undoubtedly been forking over to PayPal and credit card processors. At its peak, Megaupload served about 180 million users.
Now, since his extradition hearing has been delayed until 2013, Kim Dotcom has made the extraordinary offer to go to the United States voluntarily if he and his colleagues receive a fair trial and his funds are unfrozen to pay legal bills and pretrial living expenses. The U.S. Department of Justice has already seized $67 million. With 22 lawyers working on the case in different countries, Kim Dotcom tells the New Zealand Herald, "I have accumulated millions of dollars in legal bills and I haven't been able to pay a single cent. They just want to hang me out to dry and wait until there is no support left."
This is where bitcoin, on the second count, would prove even more useful as funds retained on the distributed bitcoin block chain cannot be seized in any jurisdiction. As the holder of the private key, you and only you control access and dispensation of the bitcoin value. A distribution mechanism could be set up for Kim Dotcom to transfer a certain amount of bitcoin to a third party that would handle the payment of his legal fees in various national currencies. Or, his legal team could even accept bitcoin directly as payment for legal services rendered. If he establishes a brainwallet, he could even authorize the transfer from prison.
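For illustration, here is roughly what a brainwallet amounts to, assuming the simple SHA-256 derivation that early brainwallet tools used. A single unsalted hash of a human-chosen phrase is dangerously weak against dictionary attacks, so this is a concept demo, not a recommendation.

```python
# A hedged brainwallet sketch: the private key exists nowhere but in memory
# until the passphrase is re-hashed. The phrase below is a well-known
# example, not a suggestion for real use.
import hashlib

passphrase = "correct horse battery staple"
private_key = hashlib.sha256(passphrase.encode("utf-8")).hexdigest()
print(private_key)  # a reproducible 256-bit key, carried entirely in the head
```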
In a Skype interview with The Hollywood Reporter, Dotcom said, "My home was raided by 72 heavily armed police arriving in helicopters. This was an Osama bin Laden-style operation on an alleged copyright infringer. I guess it's pure luck that my family wasn't terminated by a Predator drone." Dotcom also believes that "dirty delay tactics instead of evidence" are being deployed by the U.S. Government and that "the [delaying] actions clearly demonstrate that they don't have a case and that this ... was about killing Megaupload and creating a chilling effect to freeze the whole file-hosting sector."
Ruling on June 29th, U.S. District Court Judge Liam O'Grady ordered that defendants could argue for a motion to dismiss the allegations against the company, but seized assets would not be unfrozen to pay attorney costs because the defendants are currently challenging extradition abroad. After this saga unfolds, and given the sad and overzealous trend in pretrial asset seizure, I expect many rainy day legal defense funds to be established in bitcoin.
Follow author on Twitter.
|
a865a85267cc1d076cff30a7265768ca | https://www.forbes.com/sites/jonmatonis/2012/07/19/5-essential-privacy-tools-for-the-next-crypto-war/ | 5 Essential Privacy Tools For The Next Crypto War | 5 Essential Privacy Tools For The Next Crypto War
The first crypto war revolved around the hardware-based Clipper Chip and coercing companies to deploy broken encryption with backdoors to enable domestic State spying. Fortunately, the good guys won.
The next crypto war is still a war of the government against its own citizens but this time enlisting the corporations, including social networks, as direct agents of the State. What some have dubbed Crypto Wars 2.0 manifests itself in the current litany of legislative acronyms designed to confuse and befuddle.
Sometimes I think legislative bills are named with a Twitter hashtag in mind. Although it doesn't always work out favorably for the name deciders, hashtags do generally assist in the coalescing of Internet organizers around the world. Since passage of the Cyber Intelligence Sharing and Protection Act by the U.S. House of Representatives in April, #CISPA has been everywhere. Thankfully, twin legislative initiatives SOPA and PIPA were dropped in January. Also, let's not forget the gradual expansion of CALEA and the Lieberman-Collins Cyber Security Act and the NSA-centric McCain Cybersecurity Act.
Even the seemingly unpatriotic USA PATRIOT Act of 2001 is a garbled backronym that would make George Orwell proud: Uniting (and) Strengthening America (by) Providing Appropriate Tools Required (to) Intercept (and) Obstruct Terrorism Act.
The Electronic Frontier Foundation recently posted an FAQ arguing that CISPA would allow companies to review and then to hand over customers' personal information, logs, and email to the government. That is a fairly broad and comprehensive mandate.
What has gone largely unnoticed in this torrent of analysis, however, is that privacy tools for individuals already exist, and they have for many years! Quietly anticipating encroachment against basic Internet liberties, concerned cyber privacy advocates have been coding and releasing the tools that allow for private electronic communication and private web surfing. Proposed legislation like CISPA may or may not pass and become law, but if it does we have to understand the new landscape. Your privacy is up to you!
1. Email Privacy - Naked email is like a postcard for anyone to read. Pretty Good Privacy (PGP), an open source software program created by Phil Zimmermann in 1991, is the global standard for point-to-point encrypted and authenticated email. Hushmail is an OpenPGP-compatible web-based email platform that does not have access to your user password for decryption. Both products, when used correctly, offer subpoena-proof email communication.
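As a quick illustration of PGP in practice, the sketch below uses the python-gnupg wrapper around a locally installed GnuPG; the recipient address is hypothetical, and her public key must already be in your keyring.

```python
# A hedged sketch of OpenPGP email encryption via python-gnupg
# (pip install python-gnupg; requires the gpg binary on the system).
import os
import gnupg

gpg = gnupg.GPG(gnupghome=os.path.expanduser("~/.gnupg"))

encrypted = gpg.encrypt(
    "The message body, readable only by the key holder.",
    recipients=["alice@example.com"],  # hypothetical recipient
)
assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into any email
```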
2. File Privacy - Your files might be stored in the encrypted cloud but that doesn't mean that they're 100% safe for your eyes only. Free and open-source TrueCrypt allows you to encrypt folders or entire drives locally prior to syncing with Dropbox. BoxCryptor also facilitates local file encryption prior to cloud uploading and it comes with added compatibility for Android and iOS.
There is an alternative to the dual-application process described above. Although most cloud-based storage services transfer over an encrypted session and store data in an encrypted form, the files are still accessible to the service provider which makes the data vulnerable to court-ordered subpoena. In order to rectify this, two different zero-knowledge data storage companies provide secure online data backup and syncing - SpiderOak and Wuala. For obvious reasons, there is no password recovery and employees have zero access to your data.
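The zero-knowledge principle is easy to demonstrate: encrypt locally and let the provider store only ciphertext. Here is a rough sketch, assuming the Python cryptography package's Fernet recipe and hypothetical file names.

```python
# A hedged sketch of client-side encryption before cloud syncing
# (pip install cryptography). Keep the key outside the synced folder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # secret; never uploaded
fernet = Fernet(key)

with open("tax_records.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())
with open("tax_records.pdf.enc", "wb") as f:
    f.write(ciphertext)              # this encrypted copy is all the provider sees

plaintext = fernet.decrypt(ciphertext)  # possible only with the key
```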
3. Voice Privacy - Wiretapping will become more prevalent in the days and months ahead. From the creator of PGP, Zfone is a new secure VoIP phone software product utilizing a protocol called ZRTP which lets you make encrypted phone calls over the Internet. The project's trademark is "whisper in someone's ear from a thousand miles away." You can listen to Zimmermann present Zfone at DEFCON 15.
Also utilizing ZRTP, open-source Jitsi provides secure video calls, conferencing, chat, and desktop sharing. Because of security issues and lawful interception, Tor Project’s Jacob Appelbaum recommends using Jitsi instead of Skype.
Designed specifically for mobile devices and utilizing ZRTP, open-source RedPhone from Whisper Systems is an application that enables encrypted voice communication between RedPhone users on Android.
4. Chat Privacy - Encrypting your chat or instant messaging sessions is just as important as encrypting your email. Cryptocat establishes a secure, encrypted chat session that is allegedly not subject to commercial or government surveillance. Similar to Cryptocat, the older and more durable Off-the-Record Messaging (OTR) cryptographic protocol generates new key pairs for every chat, implementing a form of perfect forward secrecy and deniable encryption. It is available as a Pidgin plugin.
5. Traffic Privacy - The final step in the process is geo-privacy, which refers to the protection of 'information privacy' with regard to geographic information. Virtual Private Networks, or VPNs, have been used consistently for anonymous web browsing and IP address masking. Just make sure that your VPN provider does not log IP addresses and that they accept a form of payment that does not link you to the transaction.
Additionally, the Tor Project provides free software and an open network for privacy-oriented Internet usage. Intended to protect users' personal freedom, privacy, and ability to conduct confidential business, Tor (The onion router) is a system that improves online anonymity by routing Internet traffic through a worldwide volunteer network of layering and encrypting servers which impedes network surveillance or traffic analysis.
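To give a sense of how easily application traffic can be routed through Tor, here is a small sketch, assuming a local Tor client with its default SOCKS5 listener on port 9050 and the requests library installed with SOCKS support.

```python
# A hedged sketch of web requests over Tor (pip install requests[socks];
# a local Tor daemon must be running). The 'socks5h' scheme makes DNS
# resolution happen inside Tor as well, avoiding DNS leaks.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
resp = requests.get("https://check.torproject.org/", proxies=proxies)
print(resp.status_code)  # the page reports whether you arrived via Tor
```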
I encourage everyone to become familiar with these basic tools for privacy. The important disclaimer is that in order to circumvent these privacy technologies, your password can be obtained in a variety of ways that are extremely intrusive and beyond the realm of casual day-to-day usage, such as hardware keyloggers or ceiling-mounted cameras. Furthermore, browser-based cryptography carries the added risk of spoofed applets being delivered to your desktop by court order or by malicious actors but this risk can be mitigated by maintaining trusted source code locally or by verifying compiled code against a digital signature. The mission statement from Tor Project advocate and developer Jacob Appelbaum still stands, "Make the metadata worthless essentially for people that are surveilling you."
Follow author on Twitter. [UPDATE: I was previously affiliated with Hush Communications Corporation, the creator of Hushmail. This link further explains my stance on Hushmail strengths and weaknesses.]
|
a863b3f411ad034967e17e400a67ddab | https://www.forbes.com/sites/jonmatonis/2012/08/31/bitzino-and-the-dawn-of-provably-fair-casino-gaming/ | BitZino And The Dawn Of 'Provably Fair' Casino Gaming | BitZino And The Dawn Of 'Provably Fair' Casino Gaming
Have you ever wondered how easy it would be for online casino operators to cheat? After all, they're magically shuffling cards online and you can't even see the complete deck.
Formally launched on June 9th of this year, bitZino is not your typical online gambling portal: it has designed a method to prove that its shuffles are fair. The first difference you notice is that gaming is conducted only in the digital currency bitcoin. The other primary difference is that bitZino displays a 'Provably Fair' button which allows you to independently and immediately verify the authenticity of a shuffle.
Now, the fact that they use bitcoin as the gaming currency has nothing to do with the cryptographic techniques of 'provably fair' card shuffling, but it does add a nice touch. BitZino is differentiating itself on two amazing levels, and this is sure to cause the mega online casinos some heartburn down the road.
At the first level, bitcoin operates as the ideal digital casino chip providing privacy, immediacy, and irreversibility -- in essence, everything you'd expect from a physical Vegas casino chip. In addition to advantages for the online gaming experience, bitcoin doesn't respect national borders and there's no third-party processor that has to aggregate casino cash flow. Bitcoin assists in jurisdiction-less poker, because if they can't go after the crime, they go after the money trail.
At the second level, bitZino has boldly encroached upon an area that has been dominated by the third-party auditing associations. Lowering the barrier to entry, there is no more need for the auditing, certification, and standards organizations like eCOGRA (eCommerce and Online Gaming Regulation and Assurance) and APCW (Association of Players, Casinos, and Webmasters).
BitZino claims their games aren't just fair, they're 'provably fair' and the verifiable proof is available directly to you as a player. If not for the education issue, this news would stun the established online casinos of Gibraltar and Malta. I cannot imagine a gaming operator that doesn't adopt provably fair systems to remain competitive in the future.
Basically, bitZino is deploying a cryptographic hash function (SHA256 algorithm) to create a fingerprint of an already shuffled deck. Since the SHA256 hashing algorithm is one-way and there's no way a player can use that hash to figure out what the shuffle of the deck actually is, the casino can let players look at the hash before the game starts.
Then, the deck is reshuffled using the Fisher-Yates shuffle algorithm with the random numbers generated from the Mersenne twister algorithm that was seeded with a hash of the combined server seed and client seed. According to bitZino, "The second round of shuffling only serves to ensure that neither the server nor client could possibly know the final deck before the game starts." Finally, the initial shuffle and the server seed are provided to the player for verification. [BitZino is aware that some older browsers are not as secure as modern browsers deploying window.crypto and also that a client-side script to generate the client seed would drastically improve the quality of the overall system.]
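Here is a compact sketch of that commit-and-verify flow. The deck serialization and seeding details are my assumptions for illustration, since bitZino's exact formats aren't published here; conveniently, Python's random module is itself a Mersenne twister, the same family of generator named above.

```python
# A hedged sketch of a provably fair shuffle: hash commitment, then a
# client-seeded Fisher-Yates re-shuffle, then verification.
import hashlib
import random

def fisher_yates(deck, rng):
    deck = list(deck)
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randint(0, i)          # uniform over 0..i inclusive
        deck[i], deck[j] = deck[j], deck[i]
    return deck

def fingerprint(deck) -> str:
    return hashlib.sha256(",".join(map(str, deck)).encode()).hexdigest()

# 1. House shuffles secretly and publishes the hash before play begins.
server_seed = "s3rv3r-s3cr3t"                       # hypothetical
initial = fisher_yates(range(52), random.Random(server_seed))
commitment = fingerprint(initial)                   # shown to the player

# 2. Player contributes a seed; the deck is re-shuffled with a Mersenne
#    twister seeded from a hash of both seeds, so neither side alone can
#    know the final order in advance.
client_seed = "pl4y3r-s33d"                         # hypothetical
combined = hashlib.sha256((server_seed + client_seed).encode()).digest()
final_deck = fisher_yates(initial, random.Random(combined))

# 3. After the game, the house reveals `initial` and `server_seed`; the
#    player recomputes both steps and checks the original commitment.
assert fingerprint(initial) == commitment
```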
The bitcoin community provides an excellent user base of cryptographically-aware players which increases the practical understanding of 'provably fair' systems that don't require a third-party authority. Larry Taad, owner and lead developer of bitZino, explains in an interview:
One of the largest hurdles to creating a good provably fair system is explaining to users exactly what it is. When developing our provably fair system at bitZino, we put a lot of effort into making sure we were able to accurately portray to our users how it all works. Because the larger market doesn't yet understand provably fair systems, it doesn't yet demand them. So the big players aren't likely to implement them. However, if history is any indication, the market will come around. Look at the rate of adoption of HTTPS websites. Users in the 90's didn't demand secure websites when shopping, now they absolutely do.
When asked about other types of casino games like craps and roulette, Larry said that any single-player game can be made provably fair by merely utilizing a source of randomness that is unknown to the house at the time the outcome of the game is determined.
For multi-player games, it becomes more complicated due to the fact that the house could plant a player that has full knowledge of the state of the game. Mental poker techniques can address some of these issues but with significant computational overhead which is why bitZino is working on ways to improve mental poker techniques. He added that "bitZino currently offers single-player video poker and single-player blackjack that are provably fair, but that multi-player games will be offered in the future."
As I write this article, they have officially added provably fair roulette. I really like this online casino -- expect a lot of good things from bitZino!
Follow author on Twitter.
|
486e05a15c3243a98f8d347e69876f67 | https://www.forbes.com/sites/jonmatonis/2012/09/12/key-disclosure-laws-can-be-used-to-confiscate-bitcoin-assets/ | Key Disclosure Laws Can Be Used To Confiscate Bitcoin Assets | Key Disclosure Laws Can Be Used To Confiscate Bitcoin Assets
Jail time for refusing to comply with mandatory key disclosure hasn't occurred in the United States yet. But it's already happening in jurisdictions such as the UK, where a 33-year-old man was incarcerated for refusing to turn over his decryption keys and a youth was jailed for not disclosing a 50-character encryption password to authorities. Similarly harsh key disclosure laws also exist in Australia and South Africa, compelling individuals to surrender cryptographic keys to law enforcement without regard for the usual common law protection against self-incrimination.
Key disclosure laws may become the most important government tool in asset seizures and the war on money laundering. Key disclosure refers to the government's ability, when you are charged with a criminal offense, to demand that you surrender the private encryption keys that decrypt your data. If your data is currency, such as access control to various amounts of bitcoin on the block chain, then you have surrendered your financial transaction history and potentially the value itself.
These laws will impact not only money laundering prosecution but almost any asset protection strategy that attempts to maintain an element of financial privacy such as private banking or family trusts. Prior to all these money laundering laws being enacted, I once heard it said that the practice of moving money around was simply referred to as banking.
Doug Casey famously said that "it's a completely artificial crime. It wasn't even heard of 20 years ago, because the 'crime' didn't exist." Furthermore he said, "The War on Drugs may be where 'money laundering' originated as a crime, but today it has a lot more to do with something infinitely more important to the state: the War on Tax Evasion." And, if they can't track it from the outside via the banks and financial institutions, they'll track it from the inside via access to an individual's passwords and private keys.
In the United States, relevant case law has revolved around the Fifth Amendment privilege against self-incrimination as there is currently no specific law regarding key disclosure. The definition of a password is alarmingly broad too -- all the way from an extension of your personal memory to an illegitimate tool that only hides something tangible from law enforcement.
The first case to address directly the question of whether a person can be compelled to reveal his or her encryption keys or password was In re Grand Jury Subpoena to Sebastien Boucher in 2009. Here a magistrate judge ruled that producing the passphrase for the encrypted hard drive would constitute self-incrimination, but on appeal the District Court overturned that decision, holding that decrypting and producing the complete contents would not constitute self-incrimination since Boucher initially cooperated in showing some of the computer files to border agents.
Next, there was the federal criminal case of United States v. Fricosu in 2010 in which the Federal District Court ordered a criminal defendant to decrypt the contents of an encrypted laptop. Although the defendant claimed Fifth Amendment rights against self-incrimination and the Electronic Frontier Foundation (EFF) filed an amicus curiae brief, the Court sided with the government in ruling that since the defendant admitted to ownership of the laptop and knowledge of the passwords in a recorded conversation, the existence of evidence was a "foregone conclusion" and therefore Fifth Amendment privilege could not be implicated. In early 2012, the Tenth Circuit Court of Appeals rejected an appeal and let that decision stand.
In a blog post, Orin Kerr cited In re Weiss (703 F. 2d 653) in summarizing testimonial obduracy and what a future Court's likely posture would be if defendant refuses to comply with a key disclosure order or claims to have forgotten the password. On the specific Fifth Amendment issue in United States v. Fricosu, Kerr states:
If I’m reading Fricosu correctly, the Court is not saying that there is no Fifth Amendment privilege against being forced to divulge a password. Rather, the Court is saying that the Fifth Amendment privilege can’t be asserted in a specific case where it is known based on the facts of the case that the computer belongs to the suspect and the suspect knows the password. Because the only incriminating message of being forced to decrypt the password — that the suspect has control over the computer — is already known, it is a “foregone conclusion” and the Fifth Amendment privilege cannot block the government’s application.
In another case upholding the constitutional right against forced decryption, the Eleventh Circuit Court of Appeals in United States v. Doe on February 24th, 2012 overturned a contempt of court ruling for refusing to decrypt. The court argued that without any specific knowledge of a hard drive's file contents or file existence, the government cannot assert that certain items can be described with "reasonable particularity," and therefore compelling a defendant to produce those files would violate the Fifth Amendment's protection against self-incrimination. The Electronic Frontier Foundation (EFF), which again filed an amicus curiae brief in the case, called it a major victory for constitutional rights in the digital age.
To say the cryptocurrency bitcoin is disruptive would be an understatement. Bitcoin not only disrupts payments and monetary sovereignty, it also disrupts the legal enforcement of anti-money laundering laws, asset seizure, and capital controls. It is very likely that a key disclosure case will make it to the U.S. Supreme Court where it is far from certain that the Fifth Amendment privilege, as it relates to a refusal to decrypt bitcoin assets, will be universally upheld.
Many observers have suggested defensive techniques that deploy TrueCrypt disk encryption with hidden volume partitions or PGP Whole Disk Encryption rendering the entire computer unbootable thereby making even file time and date stamps unavailable. Another legal strategy to complicate matters could be to split the passphrase with another person and claim that you are never in possession of the entire real passphrase. Then, at least there would be "plausible deniability" as to who provided the invalid portion of the passphrase or you would have a cellmate if held in contempt.
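The passphrase-splitting idea can be made concrete with simple two-party XOR secret sharing, one possible construction among many (none is prescribed above). Each share alone is statistically indistinguishable from random noise.

```python
# A hedged sketch of splitting a passphrase into two shares, neither of
# which reveals anything about the secret on its own.
import os

def split(secret: bytes) -> tuple[bytes, bytes]:
    share_a = os.urandom(len(secret))                        # pure randomness
    share_b = bytes(x ^ y for x, y in zip(secret, share_a))  # secret XOR a
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(share_a, share_b))

a, b = split(b"my wallet passphrase")
assert combine(a, b) == b"my wallet passphrase"  # both parties must cooperate
```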
Follow author on Twitter.
|
38aa45bcf6de2d01db34d239de82f0a6 | https://www.forbes.com/sites/jonmatonis/2012/09/27/bitcoin-foundation-launches-to-drive-bitcoins-advancement/?sh=1928b3e1d868 | Bitcoin Foundation Launches To Drive Bitcoin's Advancement | Bitcoin Foundation Launches To Drive Bitcoin's Advancement
Several months in the making, the Bitcoin Foundation launches this week to accelerate the global growth of bitcoin through standardization, protection, and promotion of the open source protocol. As a nonprofit corporation and neutral forum for collaboration, the Bitcoin Foundation follows the successful model of open source bodies like the Linux Foundation and the Tor Project.
Bitcoin is a decentralized electronic cash system using digital signatures and cryptographic proof to enable irreversible payments between parties without relying on trust. Leveraging the breakthroughs of public-key cryptography, bitcoin also uses peer-to-peer networking to operate without a central authority whereby the new issuance and transaction verification functions are carried out collectively by the network. In the absence of a third-party processing intermediary, transactions are rapid and simple to send and receive with little to no fees.
As both a payments platform and a nonpolitical unit of account, Bitcoin has already seen astonishing growth in just over three years. Bitcoin's total base money supply is currently valued at $125 million. The number of transactions has grown from 219 in 2009 to 4,964,513 year-to-date in 2012. The value of bitcoins transferred per year has gone from 35 trillion BTC to 60,896 trillion BTC. And the network hashing rate, which is a measure of computational speed or horsepower, has increased from 0.008 gigahashes per second in December 2009 to 19,284 gigahashes per second in September 2012, thereby making it the largest distributed computing project in the world today in terms of processing performance. (Source: State of the Coin 2012)
With a growth trajectory like that, it becomes even more imperative to standardize and manage the ongoing change process to the core software while simultaneously enhancing overall security and robustness. Since 2009, Bitcoin.org has served as the focal point for the collaborative open source development effort.
A lot will be changing as the foundation ramps up. The Bitcoin Foundation mission leads to the early specific goals of financially sponsoring the efforts of the core development team, funding core infrastructure such as a test network and a DNS seed node, publishing a set of best practices for bitcoin integration, coordinating responses to business and media inquiries, and organizing an annual bitcoin conference with the first one being held in Silicon Valley.
In addition to individual membership, the Bitcoin Foundation provides a way for corporate enterprises from all industries to participate in the expansion of the bitcoin network and platform. We see new bitcoin exchanges sprouting up on a daily basis and we see innovative bitcoin applications coming to market across all industry sectors. Magnificent for bitcoin, this worldwide adoption strengthens the credibility and value of the peer-to-peer network. A nonpolitical currency doesn't have a morality -- it is simply a process for value transfer.
The overriding intent that runs through all Bitcoin Foundation activity is that it be membership and community driven, including succession planning. Reflected in the governance structure, individual and industry corporate members will have voting rights consistent with Bitcoin Foundation Articles and Bylaws. Annual individual membership is 2.5 BTC with a 25.0 BTC lifetime option; corporate membership is 500 BTC for silver tier, 2,500 BTC for gold tier, and 10,000 BTC for platinum tier.
Donations in support of the Bitcoin Foundation can be made by going to the website.
Initial board members include Gavin Andresen, Mark Karpeles, Jon Matonis, Patrick Murck, Charlie Shrem, and Peter Vessenes.
Executive Director Vessenes proclaims, "My hope is that the Bitcoin Foundation will be the organization that focuses and unlocks all of your energy and talents towards promoting Bitcoins, protecting them, and increasing their legitimacy through standardization. Bitcoins truly are the Internet’s currency in my opinion and it’s so exciting to be a part of this disruptive and engaging technology!"
[Disclaimer: Author serves on the Foundation's Board of Directors as Secretary.]
Follow author on Twitter.
|
fa71b419a43e6b426566d74e8accba11 | https://www.forbes.com/sites/jonmatonis/2012/10/04/bitcoin-prevents-monetary-tyranny/ | Bitcoin Prevents Monetary Tyranny | Bitcoin Prevents Monetary Tyranny
Mel Gibson as William Wallace wearing woad. (Photo credit: Wikipedia)
Bitcoin is not about making rapid global transactions with little or no fee. Bitcoin is about preventing monetary tyranny. That is its raison d'être.
Monetary tyranny can take many ugly forms. It can be deliberate inflation, persecutory capital controls, prearranged defaults within the banking cartel, or even worse, blatant sovereign confiscation. Sadly, those threats are a potential in almost any jurisdiction in the world today. The United States does not have a monopoly on monetary repression and monetary tyranny.
Once the State is removed from the monetary sphere and loses the ability to define legal tender, its power becomes relegated to direct legislative and enforcement measures that do not immorally manipulate a currency. Taxes for wars and domestic misadventures will have to be raised the old-fashioned way -- that is to say government money cannot be raised by simply debasing the currency.
Just as the Second Amendment in the United States, at its core, remains the final right of a free people to prevent their ultimate political repression, a powerful instrument is needed to prevent a corresponding repression -- State monetary supremacy. That task has fallen to an unlikely open source project that is based on cryptography protocols and peer-to-peer distributed computing. As the mechanism for a decentralized, nonpolitical unit of account, the Bitcoin project uniquely facilitates this protection.
The timing of Bitcoin's appearance, and subsequent growth, is no accident either. If one follows the relevant sentiments and trends, it's evident that society was approaching a breaking point. Essentially, bitcoin is a reaction to three separate and ongoing developments: centralized monetary authority, diminishing financial privacy, and the entrenched legacy financial infrastructure. An alternative money provider that was centralized would probably not survive long in any jurisdiction. The emergence of Bitcoin was baked into the cake already.
We can see from the case against digital money provider e-gold that an efficient challenger to the provision of a stable monetary unit will not be permitted... really. In 1996, a humble oncologist named Doug Jackson bravely built an auditable and verifiable system of transferring ownership rights to gold and silver bullion in an online digital environment. Wired's Kim Zetter described it this way:
E-gold is a privately issued digital currency backed by real gold and silver stored in banks in Europe and Dubai. Jackson says about 1,000 new e-gold accounts are opened daily, and the system processes between 50,000 and 100,000 transactions a day. With a value independent of any national legal tender, the electronic cash has cultivated a libertarian image over the years, while drawing the ire of law enforcement agencies who frequently condemn it publicly as an anonymous, untraceable criminal haven, inaccessible to police scrutiny.
Where have we heard that before? Then in December 2005, the U.S. Federal Bureau of Investigation and Secret Service raided e-gold's Florida offices. Jackson tells Wired, "They basically raped our computers and also took us offline for 36 hours, took all the paper out of our office." Jackson says that the government also froze parent company Gold and Silver Reserve's U.S. bank account but the company survived, "only because its euro, pound and yen accounts are maintained outside the United States." The physical bullion assets were subsequently seized as well.
With the prosecution resting on a civil complaint charging Gold and Silver Reserve, Inc. with operating as an unlicensed money-transmitting business, Jackson finally acquiesced in July 2008 and pleaded guilty to conspiracy to commit money laundering (a victimless crime) and operation of an unlicensed money transmitting business rather than face the alternative threat of 20 years in jail and a half-million-dollar fine.
Wired magazine, in June 2009, published this excellent account of the e-gold business in the wake of the federal investigation entitled "Bullion and Bandits: The Improbable Rise and Fall of E-Gold". Also included in the article is probably the most telling photo of all -- Doug Jackson sitting on the floor surrounded by file boxes labeled U.S. Secret Service.
Zetter writes, "At e-gold’s peak, the currency would be backed by 3.8 metric tons of gold, valued at more than $85 million." E-gold founder Doug Jackson wanted to solve the world's economic woes, "but instead got an electronic ankle bracelet for his trouble."
More recently, in 2009, Bernard von NotHaus was indicted on counterfeiting charges for manufacturing a private metallic coin that actually contained some precious metals. After 23 years of research and development plus 11 years of operating in the marketplace, Liberty Dollar suspended operations. Following the conviction, the prominent Gold Anti-Trust Action Committee filed an amicus curiae brief for the appeal, supporting acquittal and revolving around the question of whether anyone but the government has the right to issue money. Afterwards, many commentators pointed out the absurdity of penalizing honest money to strengthen the facade of manipulated money.
Further contributing to the disturbing trend against monetary freedom and financial privacy are initiatives like the Foreign Account Tax Compliance Act (FATCA), which has been written about many times on these pages and also in The New York Times. Other countries around the world would not even contemplate such a brazen endeavor that imposes a costly withholding and disclosure regime on sovereign foreign entities and financial assets. Furthermore, they see it as American arrogance and American hegemony run amok.
However, society will not be ready to fully embrace the promises of decentralized nonpolitical currency until it can come to terms with the fact that money in a free society should not be used for the purposes of identity and asset tracking. Banks and governments may be concerned with that goal, but it is not the role of our money.
Follow author on Twitter.
|
7a804dc10856a4b250c53ee849f611f7 | https://www.forbes.com/sites/jonmatonis/2012/11/03/ecb-roots-of-bitcoin-can-be-found-in-the-austrian-school-of-economics/?sh=35e97ab83b18 | ECB: "Roots Of Bitcoin Can Be Found In The Austrian School Of Economics" | ECB: "Roots Of Bitcoin Can Be Found In The Austrian School Of Economics"
The ECB (European Central Bank) has produced the first official central bank study of the decentralized cryptographic money known as bitcoin, Virtual Currency Schemes. Ignoring for a moment the ECB's condescending and derogatory use of the phrases 'virtual currency' and 'scheme', the study produced at least one landmark achievement.
In claiming that "The theoretical roots of Bitcoin can be found in the Austrian school of economics," the ECB forever linked Bitcoin to the proud economic heritage of Menger, Mises, and Hayek as well as to Austrian business cycle theory. This recognition is also a direct testament to the monetary theory work of Friedrich von Hayek who inspired many with his 1976 landmark publication of Denationalisation of Money.
Bitcoin fully embodies the spirit of denationalized money as it seeks no authority for its continued existence and it recognizes no political borders for its circulation. Indeed according to the report, proponents see Bitcoin as "a good starting point to end the monopoly central banks have in the issuance of money" and "inspired by the former gold standard."
Economists from the 19th and mid-20th centuries can be forgiven for not anticipating an interconnected digital realm like the Internet with its p2p distributed architecture, but modern economists cannot be. From their own conclusions (on page 48), which inaccurately lump Bitcoin together with Linden Dollars, here is what the modern-day economists at the ECB are still not getting:
1. ECB concludes that if money creation remains at a low level, bitcoin does not pose a risk to price stability. This is incorrect on two levels. One, the creation of new bitcoin is capped at 21 million, with eight decimal places currently, so it grows through adoption and usage rather than monetary expansion. And two, as with gold, silver, and other commodities having a monetary component, price stability is a function of the market, not central planners;
2. ECB concludes that bitcoin cannot jeopardize financial stability due to its low volume and limited connection with the real economy. Conversely, bitcoin will tend to increase financial stability and overall soundness. Bitcoin's connection with the real economy is only a concern for the regulated and taxed economy, whereas bitcoin independently may thrive in the $10 trillion shadow or "original" economy. Besides, with its repeated market interventions, no one has done more to jeopardize financial stability than the ECB itself;
3. ECB concludes that bitcoin is currently not regulated and supervised by any public authority. It would be more accurate to say that State-sponsored regulation is largely irrelevant because of the inherent design properties of a peer-to-peer distributed computing system. But happily, this is still a conclusion that I can agree with and recommend that it remains the case;
4. ECB concludes that bitcoin could represent a challenge for public authorities, given the legal uncertainty and potential for performing illegal activities. While public authorities will certainly be challenged by the introduction of a monetary unit that cannot be manipulated for political purposes, bitcoin in some cases does have the ability to provide tracking capability that far exceeds that of national cash or money substitutes. What authorities will find most troubling though, with bitcoin, is that money flows between individuals and businesses will no longer be exploitable for purposes of unlimited identity tracking and unconstitutional 'fishing expeditions';
5. ECB concludes that bitcoin "could have a negative impact on the reputation of central banks, assuming the use of such systems grows considerably and in the event that an incident attracts press coverage, since the public may perceive the incident as being caused, in part, by a central bank not doing its job properly." Pretentious as it may seem, the ECB is stating here that central banks as protector of the general public with respect to payments have a role to play because it is their reputation that suffers in the event of a bitcoin-related security incident. Firstly, that is an assumed responsibility -- not a delegated responsibility; and reputational impact aside, I would prefer to rely on lex mercatoria;
6. ECB concludes that bitcoin does indeed fall within central banks' responsibility as a result of characteristics shared with payment systems. Of course it does not. Central banks are a form of centralized economic planning so their stated responsibilities are suspect from the outset. Bitcoin represents an intangible math puzzle whose existence is solely restricted to transfer rights on a cloud-based public ledger. It more closely resembles an air guitar than a payment system for purposes of oversight.
Now, in affirming the superior attributes of bitcoin in the role of financial innovation, the ECB correctly identifies why the profligate issuers of national fiat currencies will ultimately feel threatened by such a decentralized nonpolitical unit. The report acknowledges the following with respect to bitcoin: (a) "higher degree of anonymity compared to other electronic payment instruments," (b) "lower transaction costs compared with traditional payment systems," and (c) "more direct and faster clearing and settlement of transactions" from the absence of intermediaries.
Overall, the fear of the monetary overlords is palpable as the study concludes by basically promising continued scrutiny and oversight. Also forecast for the plebeians is a possible remedy to the global scope and unclear jurisdiction of the regulatory challenge:
"One possible way to overcome this situation and obtain some quantitative information on the magnitude of the funds moved through these virtual currency schemes could be to focus on the link between the virtual economy and the real economy, i.e. the transfer of money from the banking environment to the virtual environment. Virtual accounts need to be funded either via credit transfer, payment card or PayPal and therefore a possibility would be to request this information from credit institutions, card schemes and PayPal."
However, Michael Parsons, a former executive with Emirates Bank (Dubai), Moscow Narodny Bank, and KPMG Moscow, believes that those efforts will prove futile and he explains, "Bitcoin is 'regulated' by its peers and mathematics. And Bitcoin is not a currency like fiat money. It is a value transfer system which is given value only by its users. So the ECB, FED, etc. have no mandate to control a 'virtual currency' just because they call it (bitcoin) that! It will just go underground. Bitcoin is like Light and Air. Free to use and transfer. Owned and issued by the people and NOT the State!"
It evokes an image of central bankers huddled comfortably on the safe shoreline as they look out into the horizon and see the dangerous, unstable virtual currencies approaching. The opposite is actually the truth because it is the central bankers who are floating precipitously out at sea. As James Turk famously said about bitcoin's analog cousin, "When standing in a boat and looking at the shore, it is the boat (currencies) – and not the land (gold) – that is bobbing up and down."
Follow author on Twitter.
|
a6ad8c1532ce5e20db47b7e8c6f30776 | https://www.forbes.com/sites/jonmatonis/2012/11/07/department-of-homeland-security-to-scan-payment-cards-at-borders-and-airports/ | Department Of Homeland Security To Scan Payment Cards At Borders And Airports | Department Of Homeland Security To Scan Payment Cards At Borders And Airports
Typical wireless electronic card reader (Photo credit: USDA)
Travelers leaving or entering the United States have long had to declare aggregated cash and other monetary instruments exceeding $10,000. Now, under a proposed amendment to the Bank Secrecy Act, FinCEN (Financial Crimes Enforcement Network) will also require travelers to declare the value of prepaid cards that they are carrying, known now as "tangible prepaid access devices."
Expected to be finalized by the end of this year, the cross-border reporting modifications stem from a broader October 2011 definition of payment methods and form factors that replaced the term "stored value" with the term "prepaid access" in an effort to more accurately describe the process of accessing funds held by a payment provider.
Enforceability falls to U.S. Immigration and Customs Enforcement and U.S. Customs and Border Protection, both within the Department of Homeland Security, which is already developing advanced handheld card readers that can ascertain whether a traveler is carrying a credit card, debit card, or prepaid card. This differentiation is important because only prepaid card balances will need to be added to declaration report forms.
Acknowledging that many questions still remain and that enforcement may not be straightforward, Cynthia Merritt, assistant director of the Retail Payments Risk Forum at the Federal Reserve Bank of Atlanta, had this to say about the handheld readers:
Furthermore, according to the comments, the enforcement challenge is not new, nor is the concept of a device or document that can be used to access value. The current challenges are similar to those presented in the past with other monetary instruments such as checks, money orders, and traveler checks.
Merritt also stated that, "When law enforcement takes possession of a cash or monetary instrument at the border, they are effectively holding the funds, but not so with a prepaid card or other device. Holding the card does not provide access to the underlying funds."
Other questions remain to be settled: how to determine balances on mobile phone wallets and key fobs that can function in a manner similar to card swiping; how to distinguish between reloadable and non-reloadable prepaid cards; how to distinguish between bank-issued and non-bank-issued prepaid cards; whether closed-loop gift cards should be included in the cross-border reporting requirements; what to do about cards that clear customs with a minimal balance but are subsequently reloaded with an amount in violation of the reportable limits; and what to do about large numbers of nonpersonalized, unembossed cards.
Also, would a traveler have legal recourse for damages if agents seized a proper debit card in the mistaken belief that it was a reportable prepaid card?
These complications and others imply that FinCEN's NPRM [Notice of Proposed Rule Making] may yet undergo some revisions in order to bring the regulations in sync with the realities of the prepaid card industry.
In the meantime, travelers with a memorized Bitcoin private key can breathe a sigh of relief, because according to an important April 9th, 2012 letter to FinCEN Director James Freis from Homeland Security Investigations it appears that intangible brainwallets are safe for the moment:
Should the border declaration apply to codes, passwords and other intangibles as well as to any tangible object that is dedicated to accessing prepaid funds? HSI believes that border declaration should not apply to codes, passwords and other intangibles. Identification and verification of intangibles in the context of border enforcement poses logistical and potential legal issues that are not contemplated by currency and monetary instrument declaration regulations. The structure of the currency and monetary instruments declaration regime, hinges on the existence of a physical object. The language requires something that can be passed from one individual to another in order to be presented to a third party for execution/payment.
|
00ec9cbbe006912dea2430edc497c8e8 | https://www.forbes.com/sites/jonmatonis/2012/11/13/the-general-the-biographer-and-unencrypted-email/ | The General, The Biographer, And Unencrypted Email | The General, The Biographer, And Unencrypted Email
The newest poster couple for encrypted email is General David Petraeus and his 'embedded' biographer Paula Broadwell. One of the more curious aspects of this episode is why the nation's spy chief couldn't figure out the basics around email cryptography or why a West Point graduate and lieutenant colonel in the U.S. Army Reserves who also worked with the FBI Joint Terrorism Task Force wasn't aware of Tor for IP masking.
It all started with an apparently anonymous email sent by Broadwell to Petraeus' friend Jill Kelley, which was traced back to Broadwell's hotel room at the time via email location metadata. This email was the original message that led to the eventual discovery of the sexually explicit emails between Petraeus and Broadwell.
Obviously for readability reasons this original message could not have been encrypted (also Tor does not provide encryption), but it could have been anonymized as to location and that is precisely what Tor was designed for. Originally a U.S. Navy project for shielding location data and defending against traffic analysis, the Tor Project utilizes a layered router protocol which obfuscates the sender's IP location. Even a rudimentary VPN (Virtual Private Network) that religiously deleted IP log files and accepted anonymous payments would have been sufficient. Oh well...live and learn.
Beyond that, everyone seems to be asking the obvious question about email encryption, especially in today's surveillance state. If they can do this to each other, what are they doing to us? But let's examine how email encryption might have been used under these circumstances and if it would have proven effective.
Assuming that the connection between Petraeus and Broadwell would still have been discovered, what other precautions could they have taken besides the old terrorist trick of sharing unsent drafts in a common email account that each party separately logged into?
For starters, the couple could have used stress-tested PGP (Pretty Good Privacy) for point-to-point encrypted email, which involves installing a separate piece of client software and exchanging public keys. Also, they could have used a simpler web-based OpenPGP-compliant service such as Hushmail, which would have at least protected their historical retained messages provided neither one of them logged on again and made their password vulnerable to court-ordered Java applet spoofing.
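For the technically inclined, the mechanics are straightforward. Below is a minimal sketch in Python of that public-key workflow, assuming the third-party python-gnupg wrapper and a local GnuPG installation; the address, passphrase, and message are purely illustrative.

```python
# A minimal sketch of point-to-point PGP email encryption using the
# third-party python-gnupg wrapper (requires a local GnuPG install).
# The keyring path, address, passphrase, and message are illustrative only.
import gnupg

gpg = gnupg.GPG(gnupghome='/tmp/demo-keyring')  # throwaway keyring for the demo

# Each party generates a key pair and shares only the public half.
key_input = gpg.gen_key_input(name_email='recipient@example.com',
                              passphrase='correct horse battery staple')
key = gpg.gen_key(key_input)

# The sender encrypts to the recipient's public key; only the matching
# private key (unlocked by its passphrase) can recover the plaintext.
encrypted = gpg.encrypt('Meet me at the usual place.', key.fingerprint)
print(str(encrypted))  # ASCII-armored ciphertext

decrypted = gpg.decrypt(str(encrypted), passphrase='correct horse battery staple')
print(decrypted.data)
```

The armored ciphertext is safe to send over ordinary email; as this episode shows, the weak points are everything around the cryptography, such as metadata, retained drafts, and password handling.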
So then, are the involved parties safe from anyone discovering the contents of their encrypted messages? Would the investigation have stopped at the discovery of Paula Broadwell as the anonymous email sender? I'm afraid it isn't as simple as that. Many factors are at play here: individual and third-party data retention policies, sometimes beyond your control, as well as continued use of the same private encryption key and password.
Federal agents have many tools in their arsenal, some legal and some not-so-legal. If IP location details were not protected, the linkage could have been established between Petraeus and Broadwell proving at a minimum the existence of some encrypted correspondence. The question is whether or not additional investigative actions beyond that would be warranted, or even approved. At that point, it's all about the strength of the password and obtaining it either through password cracking or password observation if the password is sufficiently strong.
When law enforcement has the advantage of tracking someone without their knowledge, software and hardware keyloggers can be an effective method of obtaining password credentials. Keyloggers come in many forms; hardware versions are typically installed between the keyboard and the computer to capture and record a user's keystrokes, including passwords. Hardware keyloggers have an advantage over software keyloggers in that they can begin logging from the moment a computer is turned on.
Alternatively, an ultra-small camera can be mounted above usual computer locations, such as an office desk or table. A wireless camera can relay images of the user typing a password, thereby eliminating the need to physically re-enter the premises.
Failing that, and failing waterboarding of the suspects, contempt of court charges could be invoked by the government since there is no specific law regarding key disclosure in the United States. One of the parties would first have to be charged with a criminal offense before the government can demand that they surrender their private encryption keys. Relevant case law has revolved around the Fifth Amendment privilege against self-incrimination.
Ironically, the global encrypted communication service Silent Circle just launched last month targeting government and corporate enterprise customers. It was founded by a world-renowned cryptographer and a former U.S. Navy SEAL sniper and communications security expert. I suspect this whole sordid story will make an excellent advertisement for them.
|
3778493cbee57d5717fdadb5e1c75e60 | https://www.forbes.com/sites/jonmatonis/2012/12/09/bitcoins-greatness-not-realized-by-succumbing-to-regulation/ | Bitcoin's Greatness Not Realized By Succumbing To Regulation | Bitcoin's Greatness Not Realized By Succumbing To Regulation
Inaugural Issue of Bitcoin Magazine
Last Thursday's news that French company Paymium and their exchange division, Bitcoin-Central, partnered with a licensed and regulated Payment Services Provider (PSP) ignited a heated debate within the bitcoin community. Eventually, Bitcoin-Central tempered their overly enthusiastic initial announcement.
"It feels like these French dudes are bringing saltpeter to a rave," declared Daniel Stuckey, a writer at Motherboard ridiculing the company for dismissing the founding concepts of bitcoin.
Not singling out the Paymium effort, there is a powerful undercurrent rejecting the notion that bitcoin exchange companies should seek approval to operate within the existing regulatory framework at all. That undercurrent has some validity, provided that larger forces at work don't settle the issue first. However, it is the jurisdictions that they elect to operate within, plus the specific exchange types, that determine the level of required compliance. Legal counsel willing to challenge the status quo is sorely needed for the days ahead.
Floating-rate, rather than fixed-rate, exchanges are going to require the holding of customer funds in national currencies. Exchanges for actual delivery, rather than cash-settled futures exchanges quoted only in bitcoin, will also require holding customer funds in national currencies. Customers with large balances simply aren't going to use exchanges that don't identify their legal jurisdiction, delineate funds, and adhere to some type of recourse for insolvency and stolen funds. So, certain jurisdictions and their financial regulators tend to get involved. This is also the case with Mt.Gox, which is based in Japan.
Here's the real issue -- regulation in this context is only a bad thing if it leads to crony capitalism or if it suggests that "still-in-beta cryptographic play money" bitcoin requires regulation similar to a national political currency.
While an individual's bitcoin transactions may still be semi-private, the auditable address links on the block chain and identity requirements for entering or exiting the exchange will remove any doubt as to how much bitcoin was spent or earned. Also, the case can be made that, despite bitcoin's basis in mathematics and being devoid of ideology, graph theory analysis of the block chain can be significantly improved by having more 'regulated' data points thus cumulatively degrading the privacy of all bitcoin transactions. Bitcoin address logs for a bitcoin exchange are like IP logs for a VPN.
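To see why additional 'regulated' data points are so corrosive to privacy, consider a toy version of such graph analysis. The sketch below is plain Python over an entirely invented transaction graph; it simply propagates identities learned at exchange endpoints outward through connected addresses.

```python
# Toy illustration of block chain graph analysis: propagate identity
# labels from 'regulated' exchange endpoints across a transaction graph.
# The graph and the identities here are entirely fictitious.
from collections import deque

# An edge means "address A sent coins to address B".
tx_graph = {
    'exchA_1': ['addr_2'],
    'addr_2':  ['addr_3', 'addr_4'],
    'addr_4':  ['exchB_9'],
    'addr_5':  ['addr_3'],
}

# Identities learned from exchange identity checks (the regulated data points).
known = {'exchA_1': 'Alice', 'exchB_9': 'Alice'}

def propagate(graph, known):
    """Breadth-first walk linking unlabeled addresses to known identities."""
    # Use an undirected view: taint flows both up and down a payment chain.
    undirected = {}
    for src, dsts in graph.items():
        for dst in dsts:
            undirected.setdefault(src, set()).add(dst)
            undirected.setdefault(dst, set()).add(src)
    labels = dict(known)
    queue = deque(known)
    while queue:
        node = queue.popleft()
        for neighbor in undirected.get(node, ()):
            if neighbor not in labels:
                # Question marks accumulate as the inference chain lengthens.
                labels[neighbor] = labels[node] + '?'
                queue.append(neighbor)
    return labels

print(propagate(tx_graph, known))
```

Every address reachable from a labeled endpoint acquires a tentative owner, which is precisely the cumulative degradation of privacy described above.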
Yes, debit cards with a bitcoin logo are cool and they can facilitate easy movement of funds associated with bitcoin balances. But legacy debit cards are institutionalized vehicles of identity and they promote half-way measures. Any role for current financial institutions in the societal wealth transfer to cryptocurrency will come from embracing bitcoin on its terms. If banks want to participate in a meaningful way, they will have to adapt to Tor exit nodes, coin mixing services, escrow provisioning without identity, and underwriting private insurance on balances.
Bitcoin's great promise lies in its potential ability for both income and consumption anonymity. It is this feature alone that allows users to maintain the same financial privacy as physical cash today and it is this feature that will also lead to liberating advancements such as a thriving and interconnected System D, unhampered and undiluted freedom of speech, and superior asset management that can truly be said to be off-the-grid.
Those who support the antithetical overlay of bitcoin on the current financial system assure us that it will only be temporary and that we must build bridges. That would be nice, but it's a fairy tale. It reminds me of the Marxist theory of historical materialism and the Marx-Engels ideology that if we only tolerate the bourgeois state during the transitional advancement to a higher phase, we will see the complete "withering away of the state."
True revolutionary transformations just don't evolve that way. Linux didn't first co-exist within the Microsoft DOS and Windows environment and then decide to spin off into a competing operating system. File sharing under the BitTorrent protocol didn't conduct a Hollywood outreach program and explain what the technology would mean for the film and recording studios.
One doesn't request freedom, one claims freedom. As Bitcoin Forum member btcbug stated about bitcoin's acquiescence to legality, "It's kind of like a bunch of slaves breaking out and then running straight back because they were so brainwashed they didn't even recognize freedom." However, the sad reality is that most of the slaves don't really want to be free which is exemplified by voting for ever-increasing State services that have to be funded through confiscatory levels of taxation and inevitably that means diminishing financial privacy.
Get real, people! This is about more than just "agreeing to disagree" when it comes to stricter regulation being a good thing. Bitcoin without user-defined anonymous transactions is a neutered bitcoin. Paper cash comes with more financial privacy. In circular-logic fashion, the pro-regulation adherents must then answer for their success: "what have we really accomplished?"
|
a679f7eec95880fbb3b65626cdf56e14 | https://www.forbes.com/sites/jonmatonis/2012/12/23/fear-not-deflation/?sh=52f8a7931070 | Fear Not Deflation | Fear Not Deflation
We should not be afraid of deflation. We should love it as much as our liberties. --Jörg Guido Hülsmann
Deflation in the context of bitcoin has been cited frequently in the popular press as a detriment to its widespread adoption. For example, famed Keynesian economist Paul Krugman ridiculed the bitcoin cryptocurrency saying "it reinforces the case against anything like a new gold standard – because it shows just how vulnerable such a standard would be to money-hoarding, deflation, and depression."
Krugman could not be more wrong. Deflation is not a problem in the traditional monetary system and it will not be a problem in the bitcoin economy.
As over 99% of bitcoin's possible 21 million coins will be mined by 2031, the fixed mild inflation will effectively cease and a period of non-inflation will commence. Although the supply of cryptographic money will be relatively static with the exception of attrition through permanently lost coins, I will refer to the monetary phenomenon as deflation because as bitcoin's asset value increases compared to political numéraires, its effect on price expression will be seen as deflationary.
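The arithmetic behind that supply schedule is easy to check. Here is a short Python sketch of bitcoin's halving geometry: block rewards start at 50 BTC and halve every 210,000 blocks, so cumulative supply approaches the 21 million cap geometrically (the calendar year at which any given threshold is crossed depends on actual block intervals, which merely target ten minutes).

```python
# Bitcoin's issuance schedule: 50 BTC per block, halving every 210,000
# blocks. Cumulative supply approaches the 21M cap geometrically.
HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0
CAP = 21_000_000

supply, reward, epoch = 0.0, INITIAL_REWARD, 0
while supply < 0.99 * CAP:
    supply += reward * HALVING_INTERVAL
    epoch += 1
    print(f"epoch {epoch}: block {epoch * HALVING_INTERVAL:>9,} "
          f"supply {supply:>12,.0f} ({supply / CAP:.2%})")
    reward /= 2

# At ten-minute blocks each epoch lasts roughly four years, so the 99%
# threshold falls several halvings into the schedule.
```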
Contrary to the central banking and political class insistence that deflation must be prevented at all costs, an economy with a monetary unit that increases in value over time provides significant economic benefits such as near zero interest rates and increasing demand through lower prices. Let's look at some remarks from leading thinkers on deflation.
In responding to an article in The Economist, Doug Casey points out that "in a free-market economy, without central banks and without fractional reserve banking, both inflation and deflation as chronic events are really not possible." Casey states:
Deflation is actually a good thing, because in a deflation prices drop and money becomes more valuable, so deflation encourages people to save money. Deflation rewards the prudent saver and punishes the profligate borrower. The way a society, like an individual, becomes wealthy is by producing more than it consumes. In other words, by saving, not borrowing. And during a deflation, when money becomes more valuable, everybody wants money. They want to save. Whereas during an inflation, you want to get rid of the money. You want to consume. You want to spend. But you don’t become wealthy by spending and consuming; you become wealthy by producing and saving.
Jörg Guido Hülsmann is a German economist and author of Deflation and Liberty, an important monograph that demolishes the myth of monetary intervention to prevent the cleansing effects of deflation. Hülsmann writes:
Deflation is not inherently bad, and that it is therefore far from being obvious that a wise monetary policy should seek to prevent it, or dampen its effects, at any price. Deflation creates a great number of losers, and many of these losers are perfectly innocent people who have just not been wise enough to anticipate the event. But deflation also creates many winners, and it also punishes many 'political entrepreneurs' who had thrived on their intimate connections to those who control the production of fiat money. Deflation puts a break--at the very least a temporary break--on the further concentration and consolidation of power in the hands of the federal government and in particular in the executive branch. It dampens the growth of the welfare state, if it does not lead to its outright implosion. In short, deflation is at least potentially a great liberating force. It not only brings the inflated monetary system back to rock bottom, it brings the entire society back in touch with the real world, because it destroys the economic basis of the social engineers, spin doctors, and brain washers.
Former president of the Mises Institute, Doug French, writes in the essay In Defense of Deflation:
Lower prices increase demand; they do not reduce or delay it. That's why more and more people own flat-screen TVs, cellular telephones, and laptop computers: the prices of these goods have fallen, and people with lower incomes can afford them. And there are more low-income people than high-income people.
In A Plea for (Mild) Deflation, George Selgin rightly distinguishes between malign demand-driven deflation which is an unfortunate secondary effect of a central bank-manipulated, inflationary malinvestment phase and benign deflation which is the result of an increase in productivity:
The Great Depression dealt a near-fatal blow to such common-sense thinking about prices and the price level. A new generation of economists became so obsessed with avoiding the bad kind of deflation that they all but forgot about the good kind. Followers of Keynes advocated inflationary policies, which have been the norm ever since. Having paid penance for the Great Depression by suffering through six decades of inflation, it is time for us to revive old-fashioned logic concerning the potential benefits of deflation. Recognition of the possibility of benign deflation should have a salutary effect on the thinking of the world’s central bankers. By helping them to overcome their fear of falling prices, it will encourage them to deal a deathblow to the worldwide scourge of inflation. But that is only the beginning. Once the possibility of benign deflation is fully appreciated, zero inflation itself will come to be recognized as an overly expansionary policy—that is, as a mere steppingstone on the way to something even better.
Fear not deflation. Ultimately, the market will reach an equilibrium between investment and savings because in the absence of an equilibrium the benefits of a savings-only strategy would evaporate. Proper economic growth through sound investments will lead to a productivity-driven deflation.
|
0ecc899684d4971ec9cea59b8fc1552c | https://www.forbes.com/sites/jonmatonis/2013/02/24/the-cashless-utopia-mirage/?sh=4e08b7085202 | The Cashless Utopia Mirage | The Cashless Utopia Mirage
Zero Privacy (photo by Wagner Machado Carlos Lemes)
David Wolman's article The Anonymity Fantasy gets off on the wrong foot by claiming to know what we all "deserve" or "what we all want." As a reader, this is aggravating on multiple levels, but the pretentious fun doesn't stop there as we later learn that anonymous cash does not equal freedom and that "clinging to cash" is misguided.
I could be cynical here, but I really don't think it's about perfidiously advancing a thesis to promote his new book. I think David actually believes all of this despite what history teaches us.
Let's not kid ourselves, because the end of money, as we know it, really means the beginning of the transactional surveillance State, which makes this a serious debate about the boundaries of State power and the dignity of an individual.
Unfortunately, the real world extends beyond Wolman's polite corner of Oregon.
There are activists and dissidents in hostile regions paying for Internet blogs, food supplies, and safe harbor. There are payments being made to border guards on a daily basis to flee a murderous government somewhere. There are women selling baskets and blankets at street markets to feed their hungry families. There are cancer patients buying weed from a friend if their state doesn't accommodate medical marijuana. And even before and after the Third Reich, persecuted peoples have always needed a way to protect and transfer what little remained of their wealth.
The persistent war on cash has more to do with moralistic society than it does with civil society as Wolman claims. With ultimate tracking capabilities, how does Wolman decide when a government's "right" becomes a wrong? Does he defend the victimless crime laws against online gambling and consensual sex for money between adults? Does he defend confiscation of private sector wealth when a socialistic regime runs out of funds? Does he defend an orchestrated payments blockade against whistleblower site Wikileaks? Does he defend brutal government law enforcement measures in Syria and Gaddafi's Libya?
Anonymity and civil society do mix --- it is omnipotent violent government and civil society that do not mix.
Wolman is thinking like a technologist when he promotes the cashless utopia and, as a technologist, he's probably correct because paper cash is inefficient, problematic, and dirty. But it's mostly inefficient and problematic for the overzealous regulators and tax collecting apparatus.
Efficiency happens to be a very short-sighted and unintellectual argument. Selective breeding for certain 'preferred' traits is a vastly more efficient method, and so are the training-from-birth selection criteria employed by totalitarian states that place athletes in the modern Olympics. I doubt Wolman would want to live in those efficient societies --- cashless or not.
Also, it's a good thing that Wolman partially credits consultant David Birch with his un-semantic argument about the differences between anonymity and privacy, because that way he doesn't have to shoulder the sole blame for such an untenable supposition.
Privacy, especially user-defined privacy, sits on a sliding scale that is defined by the individual. One person's idea of privacy may be anonymity from all and another person's idea of sufficient privacy may be privacy from aggressive marketing companies and governments but perhaps not from banks. The point being that it is the prerogative of the individual, not book authors or digital money consultants, to determine where one sits on that personal sliding scale.
Cash is not the enemy of the poor. Nor are the poor hurt by anonymity --- they are the ones who desire it the most. If that were not the case, we would see the informal, unlicensed economy shrinking rather than expanding. It's only the global repressionists who cannot accept human nature without moralizing that promote the end of anonymous cash.
As Web anthropologist Stowe Boyd proclaims, anonymous cash equals freedom and we should rejoice in that.
|
7a9781619b35358da4f351267141b511 | https://www.forbes.com/sites/jonmatonis/2013/03/24/cyprus-goes-cashless-the-hard-way/ | Cyprus Goes Cashless The Hard Way | Cyprus Goes Cashless The Hard Way
Anxious Cypriots queue up for their cash.
The rolling crisis in Cyprus should reach a crescendo this week. If the parliament votes yes on some type of deposit confiscation, it would mean the people of Cyprus have elected to go "all in" on the euro and link their fate with the fate of the single currency.
When given a clear opportunity to leave the eurozone, Cypriots will probably decide to stay rather than rebuild their banking infrastructure from scratch.
With the latest maneuverings and after going full-circle with a range of alternatives, it appears that European bailout terms could be met by a 20-25% levy on deposits only in excess of the guaranteed 100,000 euros. Cypriot Finance Minister Michael Sarris said yesterday that a deposit levy was back on the table as a way to come up with the 5.8 billion euros needed by the new March 25th deadline for the larger rescue amount to be approved.
Writing for SkyNews, Ed Conway described the raid on bank deposits as "a step across the financial Rubicon." Indeed, it has ramped up expectations as to what is now within the range of options for other EMU member states. Politely calling it a levy or deposit tax doesn't alter the fact that it still amounts to brazen theft. Others have called it legalized bank robbery.
The Statist quote of the day goes to Naomi Fowler, Taxcast producer for Tax Justice Network, who said, "I think Cyprus is a cautionary tale to citizens that their government's adventures in tax havenry will cost them dearly." She advocates for global penalties against the provision of financial privacy which to this observer warps the very meaning of the word justice.
"Only put money in the banking system that you can afford to lose," advises financial commentator Max Keiser. This is no more true than last weekend in Cyprus when bank depositors had electronic transfers blocked and were initially told to prepare for a confiscatory levy of up to 9.9% of their deposit balances across the board. The government then ordered banks to keep ATMs stocked since cash and credit cards were the only remaining methods of transacting. However, it is not clear how much longer the credit cards will be functioning in the country.
With banks in Cyprus now scheduled to re-open on March 26th after eight days of consecutive closure, the Cypriot banking-system shutdown would be tied with the U.S. shutdown (March 6-13th, 1933) as the second-longest on record, behind only Argentina's (April 20-29th, 2002) at 10 days. Should the crisis extend beyond eight days and the European Central Bank pull the liquidity it has been pumping into Cypriot banks, the ATMs may become cashless.
"The future of the euro zone has been put on the line for a few billion euros. Yet, the assumption of Cyprus not being a systemic risk rests on a single expectation: that it stays in the euro zone," according to Stephen Fidler at The Wall Street Journal. "Should it exit, all bets are off." However, other countries should be sufficiently discouraged from taking that route if they see live television images of a full-blown banking panic and an economic collapse within an EU member nation.
For now, the general attitude among the troika appears to be let's experiment with Cyprus and if things go badly, it's not such a long-term big deal because Cyprus is too small to matter. That could prove to be an optimistic fantasy.
With capital controls to prevent a mass exodus from Cyprus banks now fait accompli regardless of the bailout decision, Jeremy Warner at the Telegraph says that it is the end of the single currency in all but name:
Yet the point is that if capital controls are introduced, it basically makes Cypriot euros into a national currency, rather than part of wider monetary union. The capital controls will severely limit your ability to get your euros out of Cyprus, rendering them essentially worthless in the wider eurozone. It would be a bit like telling Scots they can't spend their UK pounds in England. Monetary union is many things, but above all it is about free movement of money and a uniform value wherever it is spent. When these functions are disabled, then you cease to be part of a single currency.
The era of free capital movement is behind us. Capital controls are about government keeping your money within easy reach should they ever want it. A decentralized and nonpolitical currency like Bitcoin starts to look attractive by providing a safer destination for wary depositors, allowing them to store their money securely in a digital account on their own computers, away from the big governments and politicians' reach. It is possible to purchase bitcoins in Cyprus at LocalBitcoins.com.
"Money flows to where confidence exists" says Alan Safahi, CEO of ZipZap, Inc., a global cash network with over 700,000 locations in the world. "As bitcoin gains more acceptance, consumer confidence increases and more money will flow to bitcoins, causing a continuous rise in price due to limited supply which then increases consumer confidence even more," adds Safahi.
As this trend continues, ZipZap, which processes more purchases of bitcoin than any other cash network, stands to benefit tremendously. "The growth opportunity is not limited to Europe. We are seeing a significant increase in volume in the past few days from consumers in the U.S.," says Safahi.
The emerging trend towards bitcoin as a flight to safety seems to be accelerating despite the recent regulatory guidance from FinCEN (Financial Crimes Enforcement Network). As part of the Treasury Department, FinCEN's guidance on enforcement would extend traditional money laundering rules to most types of virtual currencies, including bitcoin, although bitcoin proponents emphasize that a test case has not yet emerged.
"It’s almost a badge of respect when the Treasury starts regulating you," said James Rickards, author of Currency Wars. "You must be doing something right." "Gold is a great way to preserve wealth, but it is hard to move around," added Rickards. "You do need some kind of alternative and Bitcoin fits the bill. I'm not surprised to see that happening." Follow author on Twitter.
|
250db5012f449add8e39adce251f1ce8 | https://www.forbes.com/sites/jonmatonis/2013/04/18/the-fiat-emperor-has-no-clothes/?sh=6f66725e6e9e | The Fiat Emperor Has No Clothes | The Fiat Emperor Has No Clothes
Paul Krugman (Image by DonkeyHotey)
A piece from Paul Krugman in The New York Times this week criticizes bitcoin for being antisocial and for not having a State-controlled supply while secretly admiring its powerful abstractness.
As a complicit minion in the State's appropriation of the monetary unit, Krugman perpetuates 'The State Theory of Money' myth that the sovereign's power to collect taxes and declare legal tender imbues a currency with ultimate value.
While that may be a reason to acquire a certain amount of government fiat currency, it is a transitory value because in the end it is still based on a State-sanctioned illusion. Anyone who has visited a weekend flea market has noticed the old coin and currency collector displays filled with past experiments in national fiat money. Those paper notes were at one time valued for something too.
We don't want a pristine monetary standard untouched by human frailty as Krugman claims. We want freedom in the monetary standard untouched by the politicizing process.
In a Krugman world, centralized management of the money supply is preferable to a market-based outcome because the academically-informed economists will serve the best interests of the economy at large. However, our monetary overlords possess no special knowledge or secret sauce that justifies dictatorial control over money any more than it would justify dictatorial control over the market for something like soda beverages or dog food. Trust in mathematics trumps trust in central bankers.
The question of political control over a monetary system is the greatest litmus test for discovering those that seek control over others. Usually, it will be cloaked in terms like full employment, price stability, temporary stimulus, quantitative easing, and economic growth, but manipulation of the money supply serves only to favor the issuers of that particular monetary unit.
Money has a lot in common with religion. At some level, it requires a huge leap of faith. Yes, a belief in gold requires this too as the non-monetary value assigned to gold is probably no more than 5% of its market price. However, this is also what makes bitcoin the ultimate social money because for its value it merely requires others, not the law. Money is already the most viral thing on the planet and the network effect exponentially reinforces that.
Krugman actually struggles to assert that bitcoin is antisocial because he cites economist Paul Samuelson who once declared that money is a "social contrivance," not something that stands outside society. Samuelson is absolutely correct on that point and bitcoin stands firmly within society. It is no one's right to question why some place value on bitcoin and some do not since all value is subjective. The rationale for assigning value to bitcoin is as varied as the human fabric itself.
In this context, society can be defined as those mutual users willing to agree to a medium of exchange and a store of value. Since bitcoin, just as the Internet, recognizes no political boundaries, Krugman resists seeing the global monetary unit as something social. Krugman sees society only as a multitude of aggregated fiefdoms where he is the emperor's cherished tailor.
Though, just like the untainted child in the Hans Christian Andersen fairy tale, some of us are beginning to notice. It's not the illusion itself that so offends our sensibilities, but more the notion that a competitive illusion is not to be permitted. If a free market illusion voluntarily agreed to from the bottom up is so desperately feared, then the protectors of the State-sanctioned illusion must not have the most benevolent of motives in store for us plebeians.
I don't know about you, but I for one can stand up and exclaim: "the fiat emperor has no clothes!" What if more of us did?
|
92f82f235a90227d6dc0ba456a595f61 | https://www.forbes.com/sites/jonmatonis/2013/06/05/the-politics-of-bitcoin-mixing-services/ | The Politics Of Bitcoin Mixing Services | The Politics Of Bitcoin Mixing Services
As the cryptocurrency arms race escalates beyond identity verification at exchange endpoints, mixing services for bitcoin may emerge as the next frontier in the battle for financial privacy.
If bitcoin exchange regulation becomes so effective that exchange operators are required to link specific bitcoin addresses to individual customers, then users may have few remaining choices should they want to maintain transactional privacy. Call it the law of unintended consequences for overarching bitcoin exchange regulation.
Two facets of the growing political debate on anonymizing services are the traditional centralized bitcoin mixers and the newer decentralized bitcoin mixers that require a modification to the Bitcoin protocol.
With traditional bitcoin mixers, the process could become highly charged politically and the regulatory status of mixing services called into question. Reliable legal jurisdictions for operating bitcoin mixing services would therefore gain prominence, since mixing reasonably could be viewed as a protected free speech issue. Potentially, Iceland could serve as a bitcoin mixing haven.
The emergence of services that mingle bitcoin for the purpose of returning bitcoin not associated with the original input address has had a somewhat spotty history. Also called bitcoin laundries, these web-based services charge bitcoin holders a nominal fee to receive different bitcoins than the ones initially transferred. The sites never handle national currencies like the dollar or euro so technically they are not exchanges. Also, the administrator of the service has to be trusted to delete any archival logs and not to run off with the coins.
The largest such service operating today is the Blockchain.info mixing service which has a maximum transaction size of 250 bitcoins and a 0.5% transaction fee. Transaction logs are removed after eight hours and customers can use the taint analysis tool to verify that coins were properly mixed. Other services include BitLaundry and The Bitcoin Laundry operated by Mike Gogulski.
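Mechanically, a centralized mixer is simple, which is exactly why the trust problem looms so large. The following toy Python simulation uses invented deposits along with the fee and size limit quoted above; a real service would add random delays, output splitting, and, one hopes, log deletion.

```python
# Toy simulation of a centralized bitcoin mixer: users deposit coins into
# a common pool and withdraw different coins, minus a fee. Deposits and
# addresses are fictitious; the fee and limit mirror the figures above.
import random

FEE = 0.005        # 0.5% transaction fee
MAX_SIZE = 250.0   # maximum transaction size in BTC

def mix(deposits):
    """deposits: {input_address: amount}. Returns payouts to fresh addresses."""
    for addr, amount in deposits.items():
        if amount > MAX_SIZE:
            raise ValueError(f"{addr}: {amount} BTC exceeds the {MAX_SIZE} BTC limit")
    fresh_addresses = [f"fresh_{i}" for i in range(len(deposits))]
    random.shuffle(fresh_addresses)  # break the input-to-output ordering
    payouts = {}
    for (addr, amount), out in zip(deposits.items(), fresh_addresses):
        payouts[out] = amount * (1 - FEE)  # same value back, different coins
    # The operator must now be trusted to delete this input-output mapping.
    return payouts

# Equal-denomination deposits make the shuffled outputs indistinguishable.
print(mix({'alice_in': 10.0, 'bob_in': 10.0, 'carol_in': 10.0}))
```

The mapping the operator briefly holds is precisely the archival log that customers must trust the service to destroy.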
Advances on the decentralized mixer front were highlighted in Olivier Coutu's largely theoretical presentation at the Bitcoin Conference in San Jose. Although it resolves the trusted intermediary vulnerability, the political debate with decentralized mixers revolves around convincing bitcoin core developers that it is essential functionality or creating a different bitcoin client altogether. Either development approach would subsequently require majority support from the bitcoin mining community.
Zerocoin from Johns Hopkins University is a method whereby the trusted intermediary for mixing can be eliminated. The software is already written and soon to be released as open source code. However, it requires modifications to the core Bitcoin protocol and adoption by the majority of bitcoin miners. With the current political climate tilting towards full disclosure for bitcoin transactions, at least at the exchange level, it is unlikely that Bitcoin core developers would elevate bitcoin privacy to an "all-hands-on-deck" emergency priority. Yes, open source projects are comprised of political animals as well.
According to Johns Hopkins University cryptography professor Matthew Green, Zerocoin researchers are examining voluntary compliance options that reduce but don't eliminate your transaction privacy, such as accountability limits on dollar amounts of anonymous transactions. This type of alternate approach to Zerocoin adoption would be possible without support of the Bitcoin client software. However, not integrating Zerocoin into the Bitcoin protocol would require third-party services to act as issuers of its anonymizing tokens with trust problems similar to the centralized laundry services.
Also, in-person exchange LocalBitcoins.com could act as a pure person-to-person mixing service for bitcoin users that meet in designated places like cafés. Personal mixing has the additional benefit of introducing plausible deniability into the entire bitcoin ecosystem because the coins cease becoming provably yours at that point. After seeing the LocalBitcoins selling-for-cash section in the U.S., Carol Van Cleef, a partner in Patton Boggs' banking practice and adviser on anti-money laundering policies, ominously warned, "You better get yourself registered, or you better get your name off the list real fast."
Vitalik Buterin of Bitcoin Magazine argues that Bitcoin is not losing its soul through regulation and that the core principles of the bitcoin protocol, such as user-defined anonymity and user-defined transactional privacy, remain intact due to optional mixing services. This is a critical point because, when it comes to bitcoin oversight, regulators and law enforcement must comprehend that which can be constrained versus that which cannot be constrained.
Otherwise, legislators and government officials risk inadvertently steering Bitcoin advancements in the direction of even more liberating decentralized architectures. Remember, it was the forceful and horrific crackdown on casual file sharers that provided the impetus for the remarkable BitTorrent technology.
One can only defer the bitcoin privacy issue for so long. At some point, Bitcoin core developers, mining operators, lobbyists, and industry thought leaders have to take a principled position and decide on what side of history they wish to stand.
|
0f36f023e0d23a995bcd1668efca6aec | https://www.forbes.com/sites/jonobacon/2015/08/19/a-mythbuster-and-more-discuss-the-rise-of-the-makers/ | A Mythbuster And More Discuss The Rise Of The Makers | A Mythbuster And More Discuss The Rise Of The Makers
The last 15 years have seen makers go from the garage to prime-time. I sat down with Jamie Hyneman from Mythbusters and Dale Dougherty from Make Magazine to chart this incredible revolution.
On November 15, 2001, Microsoft released their new video game platform, the XBOX. What everyone thought would simply be a bold new step in the videogame realm actually planted a seed that would flourish into a new era of homebrew innovation.
What made the XBOX interesting was that under the covers, it was just a PC as opposed to a custom-engineered computer. The relatively familiar nature of the architecture meant that one could understand how it worked and twist it into something it wasn’t intended to be. That person was Andrew “Bunnie” Huang.
Huang bought the inexpensive video game console and started modifying it to explore what it could do. He built USB adapters, installed LEDs, and reverse engineered the security model. His real accomplishment though was hacking Microsoft’s black box to be able to run Linux--a fact our friends in Redmond were not especially pleased about.
What resulted was a legal fight to define whether such activities were acceptable in the eyes of the law under the caveat of a voided warranty. While the lawyers battled over whether the Digital Millennium Copyright Act (DMCA) could be used to prevent these kinds of modifications, a much more interesting phenomenon was happening…hardware hackers, also known as makers, were becoming a connected global community.
Young computer hackers stand outside the New York Library August 13, 2001 in New York City protesting the arrest by the F.B.I of computer programmer Dmitri Sklyarov. (Photo by Spencer Platt/Getty Images)
Makers, Rise
Since those early days we have seen the makers go from homebrew shops to prominence in the minds of engineers, entrepreneurs, and consumers. This growth has resulted in industries designed to serve makers, providing 3-D printing, CNC, drones, robotics, embedded computing, and more to fuel new, creative, and surprising innovation across the planet.
While Huang was poking around with his XBOX, two such tinkerers were gearing up to start a new television show that couldn’t have been more perfectly timed--Mythbusters.
Led by the inimitable Jamie Hyneman and Adam Savage, Mythbusters has spawned 14 seasons, international success, and multiple tours across the world. I sat down with Jamie Hyneman to get his perspectives on the rise of the makers.
TV personalities Adam Savage (L) and Jamie Hyneman speak onstage during Discovery's 'MythBusters' panel at Discovery Communications TCA Winter 2015 (Photo by Alberto E. Rodriguez/Getty Images for Discovery Communications)
To say Hyneman has an interesting background is an understatement. He was raised on a farm, got a degree in Russian Linguistics, worked as a cook, and had a charter diving and sailing business in the Caribbean for several years. Ultimately though, he wasn’t satisfied.
As such, he decided to make the bold move to pursue his interest in special effects for film and later founded M5 Industries, which did work for such flicks as Disney's Flubber, Monkeybone and Home Alone 3.
While his new profession satisfied his interest, it also provided a playground for making things.
“When I got my foot in the door, I found that having access to a shop that has what you needed to build anything you could dream up, and not just static objects, was like, you have the idea, and you have the thing. I just exploded, was a pig in s**t, and never looked back,” says Hyneman.
Hyneman’s interest in building things was nothing new. Like many, his passion started in school, but like many eager young makers years back, it wasn’t exactly the coolest of interests.
“When I was in school, shop class was where the kids that weren’t good in anything to do with books went,” says Hyneman. “Over time, shop classes sort of disappeared or got marginalized in the states. I don’t really know why. Now with tech like 3-D printers and CNCs, shops have acquired a new shine.”
There is little doubt that the growth of tools has made making more and more accessible. A vast range of tooling for all elements of the manufacturing process has developed in recent years.
This has included the 3-D printer, popularized by RepRap and the MakerBot, which is useful for custom engineering cases and parts. There are the low-cost programmable computers such as the Raspberry Pi and Arduino, many of which can be paired with vast arrays of sensors such as buttons, optical sensors, light sensors, gyros, accelerometers, thermometers, motion sensors, and more. We have seen CNC machines, digital modeling tools, compilers and software development kits, small form factor flash storage, low cost cameras, the cloud, and more.
A MakerBot Replicator Mini compact 3-D printer is shown working at the 2015 International CES in Las Vegas, Nevada. (Photo by Ethan Miller/Getty Images)
Hyneman believes that part of the growth of the maker revolution is because of these tools. “They open all sorts of doors, and it is having the same kind of explosive growth as cell phone tech has had.” He continues, “We are seeing robotics creep into all areas and become accessible, where it used to be something tedious that only the most persistent people could access. Instead of a novelty as it currently is, robots will be as common as cell phones and laptops.”
While I asked if there were specific technologies that excite Hyneman more than others, he said his view is more holistic than about picking favorites.
“I’m excited about all technology. None of them exist by themselves,” he says. "That is the great thing about the maker movement, all these things that allow us to build and invent and experiment feed back on each other and ourselves, and help us be more dynamic in our world and how we interact with it.”
Knowledge
While the tools in the maker’s shop have become cheaper, smaller, and more accessible than ever, tools are only one part of the maker story. We have also seen growth in knowledge and community.
Back in 2006 I was invited to join a unique gathering of people in the technology world at an event called FooCamp. It was hosted by O’Reilly, which published two of my books, and took place at their offices in Sebastopol, California.
I first met Dale Dougherty at FooCamp. While hardware hackers such as Huang and Hyneman were still something of a curiosity in the technology world, Dougherty saw an opportunity to create a magazine that showcased incredible makers and their inventions as well as providing blueprints for readers to create new things. Thus, Make magazine was born.
Make magazine soon became the connective tissue for much of the maker movement, much as 2600 magazine brought the hacker world together. Before long a maker community was forming around the world, connected by the Internet, creating and building their own tools, and showcasing their efforts in Make magazine.
Dougherty shared some of his thoughts with me on the unfolding maker movement over the last ten years.
“It's true that there have been people who tinker but it does seem to be moving from the margins of society to the mainstream,” he says.
While Dougherty shares Hyneman’s view that the tooling has made making more accessible, he believes it goes a step further.
“All of these changes are convergent, and allows more people to consider themselves makers,” Dougherty says. “Think about personal computers versus mainframes. Few of us consider ourselves computer scientists but we know how to use a computer and how to make it do things we find valuable in our lives.”
From Dougherty’s perspective this accessible element of feeling like a maker and being part of a maker culture has been essential in its growth.
“The key is not just having technology--it's having a community of makers doing projects and sharing them,” he says. "That's where maker culture comes from and the sense of agency -- hey, I can do it -- drives this culture.”
Dougherty isn’t wrong, and the evidence proves it. While Make magazine helped to popularize and showcase making, a global community has exploded around the world. This has resulted in special events called Maker Faire happening across the globe where makers get together to showcase their work, share ideas and techniques, and more.
An attendee creates an LG Sound & Light Board with guidance from an LG ambassador (Photo by Brian Ach/Invision for LG Electronics USA/AP Images)
The Maker Faire grew out of Make magazine when there was a natural interest in readers and other makers getting together in person. The first Maker Faire took place in San Mateo a year after the launch of Make. It featured six exposition and workshop pavilions and over 100 makers, with workshops, demonstrations and competitions.
The event was a success and inspired more Maker Faire events across the world. As an example, in 2013 alone there were 100 Maker Faire events across the globe in countries such as China, Japan, Israel, Australia, Spain, the U.K., Italy, Ireland, Scotland, Chile, France, Norway, Canada, and the Netherlands.
Innovation
Rather unsurprisingly, the rise of the makers has had a tremendous impact on innovation. While the Internet got people interested in building software and web applications, for many years the passion for building physical things lagged behind. The makers changed all of that.
This has resulted in a remarkable age where inventors and innovators have the perfect combination of computing (e.g. Raspberry Pi/Arduino and their sensors), software (Linux/Open Source/SDKs), manufacturing (3-D Printing/CNC), and data (e.g. Amazon Web Services/Big Data). What was still missing was the ability to get maker creations into the hands of others.
That changed with Kickstarter.
Founded by Perry Chen, Yancey Strickler, and Charles Adler in 2009, Kickstarter offered a simple concept. Anyone could propose a project that required a certain level of funding that the general consumer could sponsor. If the financial goal was met the money was paid; if not, no one paid a penny.
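That all-or-nothing settlement rule is simple enough to express in a few lines. Here is a minimal Python sketch of the mechanic, with invented pledge data: backers are charged only if the pledged total clears the goal.

```python
# All-or-nothing crowdfunding settlement, Kickstarter-style.
# The goal and pledge figures are invented for illustration.
def settle(goal, pledges):
    """Charge every backer only if the pledged total meets the goal."""
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges)  # funded: everyone is charged their pledge
    return {}                 # goal missed: no one pays a penny

print(settle(50_000, {'ann': 20_000, 'ben': 35_000}))  # funded: both charged
print(settle(50_000, {'ann': 20_000, 'ben': 15_000}))  # not funded: {}
```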
Yancey Strickler, co-founder and chief executive officer of Kickstarter Inc., left, speaks during the TechCrunch Disrupt NYC 2014 conference. Photographer: Peter Foley/Bloomberg
Kickstarter, and later Indiegogo, essentially created the app store for makers and inventors. I would encourage all of you to go and take a look at the range of projects looking for funding on both Kickstarter and Indiegogo; the level of innovation is mind-blowing.
Dougherty believes that this crowdfunding model has played a critical role in the growth of the maker movement.
“It's really important. Last year, Kickstarter told us that 10% of the makers at Maker Faire had run a campaign, and collectively they had raised $23 million,” he says. “I was at Maker Faire Shenzhen last month and every Chinese maker who had a product had run a Kickstarter.”
“This means that makers have access to small amounts of capital to start product development,” Dougherty says. “But perhaps the most important thing that crowdfunding does is help to develop a community for a product -- even before the product exists.”
Inspiring Young Minds
At the heart of the maker revolution have been core elements of creativity, exploration, social engagement, and sharing. Making has in a strange way tapped into the core of what makes people tick and a growing number of those people are kids.
“Kids really get excited by the opportunity to make,” says Dougherty. “We see all these kids at Maker Faire, and I think their parents get a sense of what this means to their children and want to encourage it.”
Dougherty believes that making provides hands-on, experiential learning. “It reflects how kids learn best and they are motivated to do it,” he says.
Dougherty hits the nail on the head. With the growth of the Internet and online knowledge we have seen kids learn in more experiential ways. Whether this was the wild world of computer graphics demos back in the 90s or building strange and amazing things in Minecraft today, kids love to learn by doing.
Dougherty believes this provides an opportunity, but one that needs leveraging.
“We need to provide makerspaces in schools and libraries so that children have access to the tools, materials and mentorship -- and can develop as makers.”
The Tetris two-wheeled lunar rover, developed by the Hakuto team, brings together much of the technology empowering makers. Photographer: Akio Kon/Bloomberg
Hyneman shares the same passion for getting kids into making, and he believes the tools open up many doors that never existed before.
“3D Robotics and others provide very open source and accessible (user friendly and not terribly expensive) ways of getting into anything remote controlled,” he says. "These things aren’t just about programming something, they involve a whole range of processes and materials, which opens doors to all sorts of other things that people use every day.”
The Future is Open
As a casual observer of the maker movement over the last twenty years, it has always impressed me how makers demonstrate the most wonderful elements of humanity, as outlined earlier: creativity, exploration, social engagement, and sharing.
The last of these, sharing, is a particularly important one to highlight. If we look at the history of the rise of the makers, sharing and openness have played a key role. Whether this has been openness of software, standards, knowledge, tooling, or even showboating, sharing and openness have been a common theme.
Back in the earlier years of the new millennium when Huang was hacking the XBOX, his efforts illustrated a natural tension between open and closed philosophies. Huang was eager to share his approach, his code, and results openly. This contrasted strikingly with one of the major proponents of closed, proprietary systems…Microsoft.
Open hardware and software are driving much of the maker movement. Photographer: Waldo Swiegers/Bloomberg
The maker movement has fought this battle with openness at its core and it drives the philosophy of most makers. Dougherty also shares this philosophy.
“Open and closed systems seem to be the Yin and Yang of the universe,” he says. “We will always have both, I believe. Perhaps closed systems are better for some business ventures. Yet, open systems are necessary and I think they can thrive even while competing against closed systems.”
He concludes, “The maker movement is based on an open ecosystem, where open source projects provide standard components such as Arduino. Openness means anyone can play and contribute."
Hyneman goes a step further: “Open source and open hardware are going to happen one way or another. People might as well just embrace it, and ride the wave. If they don’t, they will be left behind.”
Both Hyneman and Dougherty are spot on. Part of the reason why the maker revolution happened has been a sense of openness, community, and sharing.
Who knows what is next, but there is one thing for sure: we will all get to watch the makers experiment, explore, and create at every step of the way. The rise isn’t over yet…
|
dae96f8c2cfd99ec9aa9be536499e5a5 | https://www.forbes.com/sites/jonobacon/2015/11/16/red-hat-ceo-and-microsoft-evp-on-the-evolution-of-open-source-and-business/ | Red Hat CEO and Microsoft EVP On The Evolution Of Open Source And Business | Red Hat CEO and Microsoft EVP On The Evolution Of Open Source And Business
It is hard to overstate the impact open source has had on technology. The idea's roots go back to the dawn of the internet boom 20 years ago, a time when nobody was getting fired for buying IBM. Open source was the new kid on the block: young, irreverent, and ready to shake up long-held convictions.
In the time since, open source has gone on to power devices, infrastructure, consumer electronics, government services, and more: technology written, created, and shared openly by thousands of people around the world. According to Black Duck Software, an estimated 78% of companies are running open source. This is certainly evidenced by a casual wander around any one of the many technology conferences in the world: the mantra is consistent: open, open, open. Today the business of open is alive and well.
The Linux Foundation is a popular trade organization that encourages and promotes open source business development. Photo Credit: https://www.flickr.com/photos/63298803@N08/5752387817/
This story certainly didn't start this way though. Back in those early days of open source, when geeks in basements were compiling code from the Internet and startups were exploring new models, the business world largely rejected the principle and practice of open source. The common epithets for disruptive technologies were thrown around: open source was considered insecure, costly, difficult to manage, badly supported, and not a serious mainstream option. Microsoft went a step further, with then-CEO Steve Ballmer describing the poster-child of the open source revolution, Linux, as "a cancer that attaches itself in an intellectual property sense to everything it touches."
One of the earliest pioneers in the open source world was Raleigh-based Red Hat. Formed in 1993, only two years after the formation of Linux itself, Red Hat has gone on to be a powerhouse in open source business. Sporting over 7,900 employees and nearly $2 billion in revenue, Red Hat is the most profitable open source business in history. The company has not only gone on to financial success but to this day it has competently sailed the cultural waters of the very developer communities that build the technology that powers Red Hat and their customers. Red Hat is, for all intents and purposes, a successful leader both culturally and financially.
Much of Red Hat's success has been due to leadership from its CEO, Jim Whitehurst. Formerly a management consultant and then COO of Delta Air Lines, Whitehurst joined Red Hat in 2008, throwing himself into an unusual place both in terms of the business and the operating culture of the company. The company has felt his impact though, with Red Hat stock moving up threefold and staff speaking generally positively of his influence. As such, Whitehurst has certainly seen the business of open source evolve, even going so far as to document much of this open culture in his new book, The Open Organization. Whitehurst shared with me how this technology evolution has manifested at Red Hat.
"Going back just five years, open source was all about offering cheaper alternatives to proprietary software. Today, it's moved from commoditization to open source being about faster innovation. Innovation is happening first in open source. If you're doing any type of a scale-out infrastructure, it's probably going to be open source. If you're looking at implementing a DevOps process, you’ll want to be using open source. If you're going to do anything with big data, it's going to be open source. And, of course, the cloud was born using open source software".
Jim Whitehurst, Red Hat CEO, keynotes the Red Hat Summit. Photo Credit: https://www.flickr.com/photos/redhatmagazine/4727896383/
While Whitehurst contends that commoditization and cost-effectiveness were the initial draw, one of the major benefits felt by companies that switch to open source solutions is a sense of agility and a lower time to market for their solutions.
"All of this has led to open source becoming not just accepted, but the preferred choice for organizations that want to become more agile and flexible. We’ve gone from organizations looking at open source and asking 'what is this' and 'why should I use it,' to organizations actively looking for ways to base their entire IT infrastructure on open source software. That presents great opportunities, not only for Red Hat, but for the enterprise in general."
For many years open source pundits and supporters have shared Whitehurst's enthusiasm for the opportunity in this interesting new culture of technology, but they have often struggled to crisply define what the primary potential is. Whitehurst's view, though, is clear: it is innovation.
"Open source has provided an unparalleled degree of innovation. For example, the cloud is a culmination of the collaborative efforts that were created by the open source development model. What we know as the cloud today simply would not exist without open source. This type of collaboration and innovation has made everyone change their models. It’s no longer acceptable to simply stand still and wait for a five-year update to that software that you purchased back in the day. In order to compete, you have to constantly iterate. Open source allows organizations to do this, because it makes the procurement of new and innovative features easier to implement and deploy. Everything is right at your fingertips, and constantly being worked on and improved by some of the world’s leading developers."
Whitehurst clearly challenges the monolithic IT deployment strategy popularized in the nineties. Interestingly, from his position as CEO of a company that naturally wants to drive brand loyalty and recurring revenue from its customers, he sees the proprietary vendor lock-in tactics of those nineties stalwarts as the old way of doing business.
"Thanks to open source, IT is no longer locked into using monolithic, proprietary software from a single vendor. They’re free to choose the software that suits their needs. That helps their business become more streamlined and efficient."
What is evident from his perspective is that the business of open source has already evolved by providing opportunity, innovation, and agility. For companies able to describe and demonstrate these benefits in real-world products and solutions, there is clear opportunity ahead. This is by definition, an interesting mix of culture, policy, and technology.
In Whitehurst's comments above he also touches on a subtle but important point about the evolution of open source business: namely, the relationship between vendors and customers. One of the earliest cited benefits of open source back in the late nineties was the opportunity for IT staff to break away from the vendor lock-in we discussed earlier. In many cases the reality was quite different though: the theoretical lock-in-free world and the reality of the business delivered by popular IT vendors were often at odds. For many years, fingers pointed to certain businesses for trying to lock their customers in, with Microsoft often cited as a major opponent of this new open way of building technology and delivering infrastructure and services.
Oh, how things have changed. After the difficult post-monopoly legal battle of the early noughties, Microsoft started dipping its toes more and more into the world of open technology. In 2006 it announced the Microsoft Open Specification Promise, essentially a Covenant Not to Sue, and the same year Microsoft partnered with Novell to help Linux and Windows work better together. As the years passed, a regular stream of news suggested Microsoft were delving further into open source, and in 2012 Microsoft Open Technologies was formally launched to focus on open infrastructure, standards, participation and more. In recent years Microsoft has gone on to merge their Open Technologies group into the wider company, their CEO has affirmed that 'Microsoft Loves Linux', they have released a considerable amount of code on GitHub, and they have even partnered with various Linux companies to have Linux run effectively on their Azure cloud service.
Scott Guthrie, Microsoft EVP, speaks at MIX10. Photo Credit: https://www.flickr.com/photos/mixevent/4435094885/
Scott Guthrie is an Executive Vice President at Microsoft, with over 18 years of service to the company. He shared with me that while Microsoft has never been seen as an open source company, they have been investing in open source for more than a decade.
"Our pace in investing in open source has been rapidly increasing, particularly in the Cloud + Enterprise division that I run, and I think that is largely because of the pace of cloud innovation itself. Our customers are asking for choice and flexibility so they can take advantage of all the benefits of the cloud, and open source is one way we enable them to do this."
Guthrie elaborates on this body of work, "We’re integrating open source into our technologies, like Azure HDInsight (Hadoop-as-a-service) and Azure Data Lake, which uses YARN + HDFS to offer hyper-scale data repository in the cloud. We’re making sure Linux is a first-class experience on our Azure cloud, and now more than 1 in 4 of our Azure Virtual Machines are on Linux. We’re also releasing some of our core technologies as open source, like when we open sourced the full server side .NET stack and expanded .NET to run on the Linux and Mac OS platforms. We’re also increasingly contributing to open source technologies. In fact, we’re top contributors to Linux and the Apache Hadoop project. This applies not just to our technology, but also to our hardware specs. For example we joined the Open Compute Project last year and shared the MS cloud server spec, essentially giving the community access to designs for the most advanced server hardware in MS datacenters, and we’ve been adding new specs since."
According to Guthrie, it seems Microsoft reached a similar conclusion to Whitehurst: open source has become less about the artifact of open access to source code and more about providing customers with the ability to innovate more quickly.
"Just five years ago, we were looking at open source primarily as a technology enabler. Our support for Linux on Hyper-V, or for PHP on Windows Server, were examples of that. But as I mentioned earlier, we now also look at open source as an one of the enablers for innovation. We leverage the technology and ecosystems into our own products, not just to accelerate our own go-to-market efforts, but more importantly to accelerate the innovation of our customers so they can keep pace with the incredibly rapid technology changes that the cloud itself is driving."
It is this subtle switch in perspective that has helped to transform Microsoft into a more modern company. To Guthrie's point, Microsoft are not merely grudgingly supporting competing technologies to maintain some customer contracts, they have seemingly realized that this is how the business of open source has evolved.
"In my conversations with our customers, I think it’s simple. They want to use the tools and technologies that they’re the most familiar with and they want it to work across their environments without a lot of costly or time consuming customization. Sometimes that is going to be with commercial software, and sometimes with open source."
Whitehurst shares Guthrie's sentiment but expands to suggest that the market opportunity with open source is not just about choice but also agility. "Open source has also increased the expectation that businesses can be more agile, innovative, and responsive to change. It’s given them the impetus to continually move forward. That’s something they may have wanted to do for years, but didn’t have the tools to make it a reality. Now, they do."
IBM is another organization that has invested heavily in open source. Photo Credit: https://www.flickr.com/photos/jul/104847402/
Whitehurst makes an important point. One of the consistent benefits seen with open source over the years has been that as the developer base for a project grows, the technology invariably becomes more capable across different deployment conditions, as developers strive to serve their specific needs well and share the results with the rest of the community. Whitehurst touches on the importance of this adaptive evolution of technology.
"One of the biggest opportunities is being able to move from projects being driven by web-scale users to those with significant enterprise contributions. For example, Red Hat is heavily involved in a project called ManageIQ, a cloud management platform. It’s getting great traction around a problem that enterprise IT has, which is being able to gain a better understanding of the entire infrastructure, from the datacenter to virtualization, and being able to more easily manage that infrastructure."
New Business Models
Arguably one of the biggest challenges that companies focused on open source have had to confront is what their business model should be. There has been much experimentation in this area, with different approaches to licensing, services, support, trademark management, and more. Red Hat has navigated these waters carefully with a mixture of subscription services, support, and engineering, but Whitehurst still believes there is work to do.
"We need to continue to build business models that generate returns for open source companies. Open source, by its nature, is a completely different type of sales model. Essentially, what you’re selling is free, so organizations need to find ways to make it profitable. For us, that’s through subscription revenue."
This is the area where Microsoft's evolution is potentially the most impressive. As a company with just shy of 120,000 employees around the world, Microsoft is a big ship to steer in a different direction, and the efforts of executives such as Guthrie and CEO Satya Nadella are clearly making an impact.
While there are many positive stories of successful open source businesses, there is a clear sense in the open source world that there may be new and innovative models still to come. Interestingly, and as Whitehurst and Guthrie both point out, modern open source companies are now arguably held to a significantly higher standard than the traditional enterprise vendors 20 years ago. Customers now expect cost-effectiveness, agility, innovation, and the ability to utilize what is important to them across a range of vendors and projects. Customers are simply not reacting well to the one-stop-shop approach to technology that was the norm in the pre-Internet era.
Thus, while this story is certainly not over, it is clear that open source has not just evolved how we approach, explore, and consume technology, but is also changing the nature of business itself. While there are clearly many lessons learned, class is still very much in session.
|
736533f6ddb67bc0cbf2648f8101401d | https://www.forbes.com/sites/jonpicoult/2019/11/25/what-fred-reichheld-taught-me-about-the-true-value-of-net-promoter-score/ | What Fred Reichheld Taught Me About The True Value Of Net Promoter Score | What Fred Reichheld Taught Me About The True Value Of Net Promoter Score
“Net Promoter” sure has its share of detractors these days. But those critiquing the measure are overlooking one of its greatest (if not widely discussed) benefits.
Net Promoter Score (NPS for short) was conceived by Fred Reichheld (a Bain & Company consultant) and introduced to the world in 2003 via his seminal Harvard Business Review article, “The One Number You Need To Grow.”
NPS was heralded by Reichheld and his colleagues as the quintessential metric for gauging customer loyalty across many industries. It was a simple measure, yet demonstrated a strong correlation with repeat purchases and referrals, and consequently, business growth.
In recent years, Net Promoter’s popularity has surged, becoming one of the most widely-used customer experience measures, utilized by small businesses and billion-dollar corporations alike.
The “likelihood to recommend” question that’s at the heart of Net Promoter is a now ubiquitous query in customer surveys, and one with which almost all consumers are familiar (even if they’ve never heard of NPS).
Net Promoter’s nomenclature – its “Promoter / Passive / Detractor” shorthand for characterizing customer loyalty levels – has become standard vocabulary in the halls of many organizations, not to mention annual reports and earnings presentations.
With greater adoption, however, has come greater scrutiny of Net Promoter. This was perhaps best illustrated by a decidedly mixed review of the measure in a recent Wall Street Journal article (“The Dubious Management Fad Sweeping Corporate America”).
No performance metric is perfect, Net Promoter included. Many critiques of the measure, however, target weaknesses that relate less to the metric itself and more to how organizations have chosen to (incorrectly) implement it (e.g., companies obsessing over the NPS numerical score itself, rather than the customer feedback which underlies it).
But what gets lost in the maelstrom of Net Promoter critiques is the business philosophy Reichheld has long cited as the inspiration behind the metric: The idea that excellence in business comes from “enriching the lives we touch,” be it customers, colleagues, employees, or any other stakeholder.
The structure of the Net Promoter scale, and the methodology used to calculate the NPS score, perfectly reflect that philosophy. The goal is not to satisfy those with whom you interact (“Passives” in Net Promoter nomenclature). The goal is to impress them – to create a “Promoter” by delivering an interaction so intensely positive that it all but guarantees people will want to come back for more (and tell others about the experience).
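For readers unfamiliar with the mechanics, the calculation itself is simple. Below is a minimal sketch in Python; the 9–10 Promoter, 7–8 Passive, 0–6 Detractor bands and the Promoters-minus-Detractors formula are the standard Net Promoter convention, while the function name and sample responses are illustrative.

```python
from typing import Iterable

def net_promoter_score(ratings: Iterable[int]) -> float:
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    NPS = %Promoters - %Detractors, so it ranges from -100 to +100.
    """
    ratings = list(ratings)
    if not ratings:
        raise ValueError("no survey responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Five promoters, three passives, two detractors -> (5 - 2) / 10 = NPS of 30
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 4, 6]))  # 30.0
```

Note that Passives count toward the denominator but not the numerator, which is exactly the point: merely satisfying a customer does nothing for the score.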
And how exactly does one do that? How does one foster such a positive reaction that cultivates intense loyalty?
You guessed it – by enriching the lives of the people with whom you interact. By shaping every interaction, inside or outside the workplace, so people feel better after they’ve encountered you, as compared to before.
This is the true value of properly implemented Net Promoter programs (and one that so many critics – and even some adopters – of NPS overlook). It’s the behavioral guidance that the measure provides. It’s the picture it paints of what “right” looks like. It’s the motivation it delivers to go the extra mile.
The sheer power of that aspect of Net Promoter became clear to me a decade ago, thanks to a personal lesson delivered by none other than Fred Reichheld himself.
It was 2008 and I was preparing to launch what would become Watermark Consulting, the customer experience (CX) advisory firm I lead today. In an effort to better understand the market for CX consulting services (and whether there was a place for a new entrant), I did what any good entrepreneur does – I networked. I reached out to key people in the industry, seeking their advice and counsel, hoping to learn from those who had tread the path before me.
Most of the people I reached out to never responded (instead giving me a master class in e-Snubbing). But among the few that did reply to my messages was the most renowned luminary who I had the audacity to contact: Fred Reichheld.
I had divined Reichheld’s Bain & Co. e-mail address, as any good sleuth would, and within twenty-four hours of sending him a message, up came his response in my in-box. He hadn’t delegated the reply to someone else; it was quite clear he had personally written it. He was appreciative of my inquiry and wrote a couple of paragraphs with suggestions for me – advice that was genuinely helpful.
Here was this celebrity in the study of customer loyalty, a man famous around the world for his thought leadership, and yet he took the time to personally and thoughtfully respond to a message from me – a nobody.
Why on earth would he choose to do that (especially considering I was shunned by so many CX experts who were far less eminent than Reichheld)? It’s because he was walking the Net Promoter talk, trying to enrich the lives of everyone with whom he interacted.
From that day on, I became a “Promoter” of Fred Reichheld – a raving fan, if you will. I’ve never met the man, and I’ve never communicated with him since. But what stands out in my memory is the simple kindness that he demonstrated in responding to my inquiry and sharing some helpful advice. The interaction was, in a word, enriching.
As Reichheld, the father of NPS, demonstrated so convincingly to me, this is the true value of Net Promoter, and it’s something that gets overshadowed by the endless debates over the accuracy, relevance and predictive power of the measure.
Net Promoter is about orienting an entire organization (and individual behaviors) around the noblest of purposes: to enrich the lives of the people around you.
Who can possibly find fault with that?
|
c5566c3cf146d62a4d5baa5d914a06ae | https://www.forbes.com/sites/jonpicoult/2020/03/18/during-difficult-times-effective-leaders-share-the-burden/?sh=71a7966a1e02 | What Effective Leaders Do During Difficult Times | What Effective Leaders Do During Difficult Times
Each year on the Wednesday before Thanksgiving, where could you find Southwest Airlines’ legendary co-founder and CEO, Herb Kelleher? On the tarmac, of course, helping the ground crew load and unload baggage onto planes during what was the busiest travel day of the year.
Kelleher appreciated the importance of leaders showing solidarity with their employees, particularly during challenging times. That spirit was echoed last week, when current Southwest CEO Gary Kelly announced he was taking a 10% pay cut in light of the business challenges created by the spread of COVID-19. Other airline executives followed Kelly’s lead, but he was the first to step forward with such a gesture.
Chipping in to help employees during difficult times is a hallmark of effective leadership. It helps to humanize executives in the eyes of employees, but also sends an important message that, however bad a crisis is, we’ll overcome it by working as a team.
At Vanguard, that executive “roll up your sleeves” approach is actually institutionalized via the company’s Swiss Army – a customer service “reserve team” that’s called into duty to help maintain service levels during periods of high investor call volume. The people staffing the Swiss Army aren’t regular call center representatives; they’re specially trained Vanguard executives and managers.
In September 2008, for example, as investment bank Lehman Brothers collapsed and the U.S. financial industry began to implode, Vanguard CEO Bill McNabb was in the company’s Valley Forge, Pennsylvania service center, fielding calls from anxious investors. Just imagine how that must have made his front-line call center representatives feel.
Working in the trenches with employees is a smart move for organizational leaders at any time, but even more so during challenging times.
Indeed, whether it’s working alongside stressed employees, or volunteering to take an executive pay cut during a financially challenging period – these types of actions send a clear, unmistakable signal to the workforce: We’re all in this together.
In this sense, how a particular business crisis originates is almost immaterial. It could be an isolated, company-specific event, such as a product recall, or it could be a worldwide disruption caused by a global pandemic. The important thing is how leaders respond in those situations, and the signals they send to their organizations via their own personal behaviors.
Most businesspeople are problem solvers at heart. During times of crisis, our natural inclination is to fix the problem, to stop the hemorrhaging, to focus on the mechanics and logistics of business recovery. And while those are all very important activities, it’s critical to complement them with smaller, tone-setting tactics which, on their face, might seem less strategic and “unworthy” of an executive’s time.
Depending on what industry you’re in, that could mean helping front-line staff take some incoming calls, or assisting warehouse personnel in boxing up new orders, or chipping in to help a team member complete an urgent task. These are all small gestures that can leave an indelible impression, especially on employees who are stressed and anxious about what the future holds.
All too often in the business world, there is a chasm between the corner office and the cubicle, between the top brass and the front-line. Particularly during difficult times, it’s essential for organizational leaders to bridge that chasm, and to show the workforce what it really means to be a team player.
|
f1f3cc9ecfc08690039219dd66438e91 | https://www.forbes.com/sites/jonpicoult/2020/08/31/why-work-from-home-might-not-work--the-looming-risk-for-employers/ | Why Work-From-Home Might Not Work: The Looming Risk For Employers | Why Work-From-Home Might Not Work: The Looming Risk For Employers
Practically overnight, the Covid-19 pandemic triggered a monumental shift in how people work. Corporate offices were out; home offices were in. Many companies were pleasantly surprised how smoothly that transition went, observing no adverse impact on productivity, no major impediments to staff getting their jobs done.
Given the apparent favorable results of this pandemic-imposed work-from-home (WFH) experiment, some companies think they’ve seen the future of work, and they’ve begun realigning their operations accordingly.
Back in May, Nationwide Insurance announced the permanent closure of five corporate offices, shifting the majority of employees in those sites to a work-from-home model. Facebook is planning to have half of its employees work from home permanently. Other firms are also following suit.
Companies are no doubt salivating at the cost savings that would come with having more people work from home. All kinds of overhead expenses associated with centralized offices would disappear: rent, utilities, cleaning services, food services, building maintenance and groundskeeping.
However, in their zeal to capture these savings and shift to a decentralized workplace, companies may be overlooking some dark clouds on the horizon.
Sure, organizations are cognizant of the collaboration challenges that come with a work-from-home model. They can convince themselves, though, that it’s nothing a technology tapestry of Zoom, Slack and other tools couldn’t overcome.
The true blind spot for companies, however, may be in their belief that employees prefer working from home.
There’s no question there are many advantages to that model, as newly remote staff can attest: Commutes measured in steps rather than miles, fewer interruptions from coworkers passing by, more time to spend with family.
However, the novelty of working from home, and employees’ elation with their newfound “freedom” from the traditional office, may soon wear off.
Play the work-from-home movie out a few frames and you’re likely to find a whole subset of the population who will have trouble doing it long-term. That’s because, after the initial WFH honeymoon, many employees will struggle with the arrangement. They’ll have trouble figuring out how to separate work from home, when work is at home.
That struggle can be invisible to the employer, because it manifests itself in a way that’s favorable to the company (at least in the short-term). Employees seemingly become more productive – not necessarily because they’re more efficient, but because they’re working longer, as the line between personal and professional life blurs.
The start of the workday creeps earlier; the end edges later. The demarcation between work and home disappears, accelerating a shift that began when digital devices created an “always-on-duty” workplace culture.
Executives like that work-from-home productivity boost. Even more, they like the economics of it, since – when made permanent – it’s accompanied by a reduction in office overhead.
Those economics may not look as good long-term, however.
Employees who are effectively given no alternative but to work from home may simply opt out of that arrangement. They’ll instead seek employment with firms offering a traditional work environment, where it’s easier to compartmentalize one’s personal and professional lives. The costs of such turnover can be profound, not just in hiring and training, but also in terms of the health of the customer relationships those employees oversee.
Avoiding that outcome requires understanding that employees are not a homogenous group. They have distinct needs, wants, and work habits – all of which underscores the importance of offering flexibility and choice in work arrangements.
Working from home won’t work for everyone. Organizations would be wise to recognize that before they rush into dismantling their physical offices. Because when you shut the door on those offices, you might also be shutting the door on the good, quality talent that wants to work in them.
|
b96a29c734448aab1d22e1d9ac03e6d1 | https://www.forbes.com/sites/jonpicoult/2021/02/26/the-key-to-continuous-business-improvement--think-like-the-ntsb/ | The Key To Continuous Business Improvement? Think Like The NTSB. | The Key To Continuous Business Improvement? Think Like The NTSB.
NTSB "Go Team" investigators analyze an aircraft engine failure. Getty Images
Airlines are rarely held up as models of customer experience excellence, but in one important respect, the aviation industry actually deserves that recognition.
At many airlines, the traveler experience leaves a lot to be desired. People are subjected to a whole host of annoyances and indignities, from baggage charges to ticket change fees, from cramped seating to overbooking.
But one aspect of the airline customer experience is remarkably good, and consistently getting even better: The industry’s discipline in identifying and addressing the causes of accidents.
Say what you want about the awfulness of air travel, but it does have one undeniably redeeming quality: It’s really safe. While commercial airline accidents obviously garner a lot of media attention, the fact is, they are extremely rare. Accounting for just 0.006 deaths per billion miles of travel, flying is the safest form of transportation out there, far safer than driving.
We were reminded of this just days ago, when a United Airlines plane suffered an engine failure moments after departing Denver International Airport. Pieces of the engine rained down on a Denver suburb. Fortunately, no one was hurt on the ground, nor on the plane, which quickly returned to the airport and made an emergency landing.
Within hours of the incident, the National Transportation Safety Board’s (NTSB) “Go Team” was mobilized, and it’s from their tireless work that all businesses can learn a valuable lesson.
Established in 1967, the NTSB is an independent government agency tasked with investigating all civil aviation accidents, as well as major incidents involving other forms of transportation (such as train derailments).
The Go Team is a cornerstone of the NTSB’s investigative process. Ready to travel anywhere in the world at a moment’s notice, the Team is composed of NTSB staff representing a variety of specialties – aircraft structure, engines, hydraulic systems, crew performance, and even air traffic control. They all descend upon the accident site to piece together what happened and to determine what went wrong.
Within a matter of days, the NTSB issues a preliminary report based on the Go Team’s findings. (An official, final report can take months if not years to publish, depending on the complexity of the incident.)
But here’s the most important part: Based on its investigation, the NTSB releases safety recommendations, which can then be turned into “Airworthiness Directives” by the Federal Aviation Administration (FAA). Those directives, which can be issued on an emergency basis if necessary, establish legally enforceable rules which can dictate anything from aircraft design changes (which would be handled by the manufacturer) to maintenance procedure enhancements (which would be handled by the airline).
What does that disciplined process of investigating aviation accidents and addressing their root causes yield? Decades of consistent improvement in the civil aviation fatality rate, with the 5-year moving average hitting an all-time low in 2019 (despite a marked increase in the number of flights flown over the same period).
Global Fatal Aviation Accidents Per Year (1946-2019) Aviation Safety Network
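The 5-year moving average cited above is simply a smoothing device that makes the long-term trend visible through year-to-year noise. Here is a minimal sketch of the idea, using invented accident counts rather than the Aviation Safety Network's actual data:

```python
def moving_average(values, window=5):
    """Trailing moving average over `window` periods; smooths year-to-year noise."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Illustrative accident counts for eight consecutive years (not actual data).
accidents = [24, 19, 21, 15, 14, 12, 13, 9]
print(moving_average(accidents))  # [18.6, 16.2, 15.0, 12.6]
```

The same smoothing works for any noisy business metric, such as monthly complaint counts or defect rates.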
Now, imagine if the above graph were charting the failure rate for your company’s customer experience, perhaps measured through product defects, complaints or some other indication of an experience gone wrong.
Because that’s really what the NTSB Go Team (and other countries’ aviation safety agencies) do. They root out the underlying cause of a failure in the experience. Granted, in the case of the NTSB, they’re looking at failures that can be gravely serious, resulting in harm to dozens if not hundreds of passengers. But the value of the NTSB’s approach is applicable to any business, regardless of product or service sold.
Think of it this way: There are a finite number of reasons why an aircraft will suffer an operational failure. By rigorously investigating every failure, and directing its aviation partners to pursue remedial action, the NTSB and FAA have gradually narrowed the list of potential failure points. Hence the remarkable and steady long-term decline in accident rates.
The same logic applies to your business. There are a finite number of reasons why your customer experience may fail, from a product design flaw to an outdated website link to an inaccurate instruction sheet. It may be a long list, but it is a finite list.
You would be remiss then, if you didn’t take the opportunity to investigate failures when they occur, pinpoint the root cause, and take action to address the underlying issue. Only by doing so can you start to check items off of that finite list, and begin removing potential sources of experience failure from your customers’ lives.
To bring the NTSB’s proven approach to your organization, keep three things in mind:
1. Invest in investigation. When experience failures arise, people’s focus is (rightfully) on solving the problem for the affected customer. Once that’s done though, typical organizational behavior is to just move onto the next task – answering the next call, resolving the next complaint, manufacturing the next widget. Resist that temptation. Culturally, people in your organization must understand that an essential part of experience recovery is asking yourself, “How did my customer even end up in this situation?”
2. Turn insights into action. It doesn’t help anyone if a field sales rep or a call center agent figures out the root cause of a customer experience failure, but then doesn’t have an outlet to communicate that to people who can do something about it. After all, the NTSB’s investigations would be pointless without their safety recommendations and the FAA’s associated Airworthiness Directives. Make sure there is a clear avenue for your staff to share their root cause findings with those who can drive change in the organization, such as a manager or an internal continuous improvement team.
3. Make it about progress, not punishment. Interestingly, conclusions from an NTSB investigation cannot be entered as evidence in a court of law. That was done by design, as the architects of the NTSB wanted the organization to be viewed as an independent party, focused on preventing future accidents, not facilitating litigation. In the business arena, staff need to be forthcoming in order to effectively assist with root cause analysis. If they sense that the exercise is a punitive one, they’ll likely be reluctant to participate in a genuine way. Keep it constructive, with an emphasis on continuous improvement.
Every company, even legendary ones, has to occasionally deal with customer experience failures. What separates the good from the great, though, is how the organization approaches the resolution of those issues. Does it fix the problem for just one customer, or does it address the problem for all customers in the future? The NTSB has certainly demonstrated its proficiency in the latter approach, chipping away at root causes and turning air travel into the safest transportation experience on the planet.
So, the next time your organization encounters a customer experience failure, ask yourself, “Who’s on our Go Team?” Whether it’s a responsibility that lies with a dedicated unit, or an accountability that’s embedded in every staff member’s role – ensure this investigative work consistently gets done, because it’s that discipline which will keep your business flying higher.
|
c1c770e4354912a2c6bd4d95ea08bfc5 | https://www.forbes.com/sites/jonspringer/2014/06/02/real-estate-success-in-sri-lanka-with-ivan-robinson/ | Real Estate Success In Sri Lanka With Ivan Robinson | Real Estate Success In Sri Lanka With Ivan Robinson
Ivan Robinson of Lanka Real Estate has been in the real estate business in Sri Lanka since 2002. Before relocating, 48-year-old Mr. Robinson worked as a real estate agent in his native London and then on the Côte d’Azur. Between 1996 and 2002, he visited Sri Lanka every year on holiday before the country finally wooed him into moving there. He exchanged 23 years of life selling real estate on the French Riviera for life in Sri Lanka, and he says with a grin, “it’s better.”
Beginnings
When he began his company in 2002, there were no real estate agents based anywhere except Sri Lanka’s capital of Colombo. Today, you will find many real estate offices in Galle where his primary office is located. Galle, located on the southwestern point of Sri Lanka, has become favored by ex-pats, though other ex-pats consider Kandy and the Hill Country of Southern Central Sri Lanka for its emerald beauty. The price per acre of land in Galle is up more than 20 times since Mr. Robinson moved to Sri Lanka in 2002. He says a lot of his clients were people who bought real estate early in Bali and then came into the market in Galle early with their windfall gains.
Today, many people are interested in the potential real estate growth in the port cities of Colombo and Hambantota. A quick study of global shipping lanes reveals a truth that has always been there: every major shipping lane between Europe and Asia, the Middle East and Asia, and Africa and Asia passes about 11 miles off the coast of Sri Lanka. Chinese interests are investing heavily in the expansion of these two port cities, including a plan to reclaim 575 acres from the ocean to add real estate to Colombo.
Another beautiful morning in Sri Lanka (Photo: Sebastian Posingis)
Disruptions’ Denouement
The end of the 26-year Sri Lankan Civil War has made dreams of both port growth and broader economic expansion an imminent reality for Sri Lanka. The end of the war is also why Mr. Robinson looks to the country’s east coast for the places one can find the best real estate investment opportunities on the island today. Specifically, he looks to property to the north and south of Pasikuda.
The civil war was largely in the north and east of Sri Lanka. This has left the sandy beaches of eastern Sri Lanka in pristine condition, waiting for both the investment and tourism that could not exist during the war. Ensuring clean titles for his clients is an important part of Mr. Robinson’s work throughout Sri Lanka, but it is most challenging in the areas most affected by the civil war. Many landholders left their land during the war. Those who stayed behind – or others who swooped in early after the war – have often created a false document chain showing property ownership. The Sri Lanka Criminal Investigation Department has set up offices in affected regions to assist people as the original landowners continue to return.
Despite the additional need to ensure that land purchased is free of land mines, Mr. Robinson estimates prices for real estate along the eastern coast have risen from $15,000/acre during the war’s last year in 2009, to $100,000/acre today. The government predicts national tourism to rise from 1 million annual visitors in 2012 to a target of 2.5 million annual visitors by 2016. The potential for developing and profiting from beachfront property is apparent, and something Mr. Robinson himself is investing in.
Mr. Robinson also believes there are still good values to be had in the Hill Country of Sri Lanka’s center. Many of his wealthiest clients like this region for its incredible natural beauty and proximity to the spiritual Buddhist center of Kandy where $125,000/acre should get you a nice view of the town. Further into the hills from Kandy, one could consider a tea plantation property that might be as low as $10,000/acre “with a house in bad condition.”
There are still long stretches of seldom used beaches in Sri Lanka (Photo: Sebastian Posingis)
Controversy And Clients
Mr. Robinson’s success is not without its detractors. The ability of his lawyers to find loopholes that helped his clients minimize the taxes they pay on their real estate purchases has drawn stiff accusations in some local media outlets. Mr. Robinson responds to these articles from 2005 and 2010:
Those articles were set up by people who suffered from our endeavor to bring transparency into the local property market. When we started our personal property investments in Sri Lanka, after working in the London property business and then in the South of France, we soon understood that local ‘brokers’ took their commission as a ‘cut’ rather than commission-based. When we would go to sign off our investments at the notaries' office we would find out that the broker would be getting sometimes as much as 50% of the consideration! By introducing a commission-based fee we upset the local brokers which resulted in this libelous article.
Several Sri Lankan businessmen who do not approve of Mr. Robinson’s practice of helping his clients use all legal means available to circumvent some taxes were spoken to for this article. None of those negative on Mr. Robinson would speak on the record. They claimed that what Mr. Robinson advises his clients to do is pure tax evasion, ignores the spirit of the laws on the books, and, in their opinion as businessmen, is not legal or should not be.
To the contrary, a number of Mr. Robinson’s clients and colleagues think he does quite well by them. Englishman Stephen Page, a professional investor in technology startups, was the most clear on what he sees as the nuance of Sri Lankan real estate law: “the government clearly does want inward investment but clearly needs to be seen to be protecting their own people.” His experience has been that the government of Sri Lanka is “quite supportive” of real estate investment from long-term investors and that the legal structures for investing in real estate “as in any country like this are political.” Mr. Page has bought three plots to-date from Mr. Robinson including engaging Mr. Robinson’s support to build an apartment block.
Mr. Page says that when he first went to Sri Lanka, “we met quite a few [realtors] and he was the best.” After successful investments in the southwest of Sri Lanka with Mr. Robinson, Mr. Page is considering further investment in eastern Sri Lanka as well. Mr. Page finds Mr. Robinson to have local business partners and attorneys who have been of good quality. Mr. Page adds that he has to-date used three different legal structures for his three purchases specific to the circumstances of the property, all of which have been above board and worked out well.
Another English national and CEO explained his experience with Mr. Robinson:
Ivan, and his team, have always been extremely helpful and professional in their conduct. They are easy to deal with and as far as I can tell have been very clear and conscientious on everything… [T]hey do navigate a somewhat tricky path in, for me, an unknown market… it was only in hindsight that I discovered some of the more difficult areas of their negotiations on my behalf… I bought a tea plantation so in effect I bought a business [with] a property component to the transaction and therefore I trusted his advice... I did set up an investment company to acquire the trading company on Ivan’s recommendation.
Several local law firms were spoken to for this article, all very positive about Mr. Robinson, Lanka Real Estate and the marketplace for foreign investors. An exemplary law firm said:
We have pleasure in stating that our dealings with Lanka Real Estate have been most professional. They are extremely thorough in the background research done regarding the lands marketed by them. Further, they are knowledgeable on the documentation required for concluding any transaction. They bear a good reputation in the market.
Measures And Law
While investing in land from port cities to beaches to rubber plantations is possible, there are important considerations for potential investors. Real estate in Sri Lanka is measured by the perch, which is about 272 square feet. Land ownership is limited to 50 acres per individual owner. Property laws are a Roman-Dutch maze. Mr. Robinson recounts that he once ran into a space of 750 square feet with 96 owners. Thus, a good lawyer is part of any good real estate practice in Sri Lanka.
There is also the matter of Sri Lanka’s laws concerning real estate ownership by foreigners. The laws have been changed in the past year. The prior system had an official 100% tax that many foreigners avoided paying through legal loopholes. The new system allows foreigners to either take a 99-year lease on property (similar to Hawaii in the USA) and pay a 15% tax, or purchase property through a corporate structure and pay only 0.5% to 1% in share transfer taxes. There are a number of different corporate structures that can be used to purchase property if choosing the latter option, and the decision on which one to adopt depends on the purpose of the proposed investment.
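To make the difference between the two routes concrete, here is a back-of-the-envelope comparison in Python. The 15% and 0.5%–1% rates are as described above; the US$500,000 purchase price is a made-up figure, and the sketch ignores legal and structuring costs.

```python
# Hypothetical purchase price; a real transaction would also carry
# legal fees and structuring costs that are ignored here.
price = 500_000

lease_tax = price * 0.15        # 99-year leasehold route: 15% tax
share_tax_low = price * 0.005   # corporate route: 0.5% share transfer tax
share_tax_high = price * 0.01   # corporate route: 1% share transfer tax

print(f"Leasehold tax:       ${lease_tax:,.0f}")                              # $75,000
print(f"Corporate route tax: ${share_tax_low:,.0f}-${share_tax_high:,.0f}")   # $2,500-$5,000
```

The size of that gap is why the choice of structure, and hence good legal advice, dominates the conversation.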
Around the island of Sri Lanka, whether at a resort, at home or out and about, sunsets are often... [+] spectacular (Photo: Sebastian Posingis)
Happy Endings
Mr. Robinson himself loves the land and people of Sri Lanka, and believes strongly in its future. “The economic potential is enormous… tea, palm sugar, everything grows here… there’s rice to the east and central part of the country.” He notes a survey by an agricultural fund 3 years ago that found a ton of produce was being lost by going bad in transit. As peacetime continues and infrastructure improves – with many ongoing roadway, airport and seaport projects – he sees many better days ahead.
As with many expatriates around the world, Ivan Robinson wound up in Sri Lanka a bit by chance. When he was 18 in 1983, he met a Sri Lankan who became a lifelong friend. Years later, in 1996, the friend urged Mr. Robinson to come visit him in Sri Lanka. From 1996 to 2002, Mr. Robinson visited Sri Lanka every year and gradually fell in love with the country. Leaving behind 23 years of life on the French Riviera, Mr. Robinson happily relocated in 2002. Quizzed on whether food, women, beaches, natural beauty, or real estate is better in France or Sri Lanka, he answers Sri Lanka every time. When he says, “I’m married here,” it captures both his relationship with his Sri Lankan wife and his relationship to the land of Sri Lanka.
Readers interested in purchasing real estate in Sri Lanka are advised to do their due diligence, meet with the Sri Lankan Board of Investment, and meet with multiple realtors in Sri Lanka, as well as their lawyers, before choosing a realtor.
|
bdd5edf7fdfc900fcdbbd7da8eb9add8 | https://www.forbes.com/sites/jonspringer/2015/03/20/water-access-improvements-will-improve-global-productivity/ | Water Access Improvements Will Improve Global Productivity | Water Access Improvements Will Improve Global Productivity
The Water For Women report released today, written by a private-public partnership for the United Nations World Water Day, highlights how much global productivity could rise if access to water improves for the poorer half of the world’s population. Imagine the productivity of Christine Lagarde, Melinda Gates, Indra Nooyi and other great women if part of their daily work was walking 3.7 miles to get water for their families (the average in Asia and Africa), hand-washing their family’s clothes and dishes as part of their daily responsibility, and being ill frequently because the water used for cleaning wasn’t always as clean as hoped. These are hours per day of productivity we take for granted in developed economies.
The Water For Women report begins with this clarifying point:
It’s estimated that globally [women and girls] spend 200 million hours every single day simply collecting water for themselves and their families – time that could be spent in education, working and earning, with their family, or contributing to the community.
There is another side to the amount of time women spend on water issues for their families: most men see this as something women must do, but not as meaningful work. It is part of survival but not income earning.
Hanneke Willenborg is Unilever Vice-President Global Dishwash and leads Sunlight, a Unilever brand. As a co-author of the report, she highlights that the most precious resource in the world is time. Unilever has partnered with Oxfam in Africa and NextDrop in India to build networks of water centers and to deliver technical solutions that save women precious time by providing better access to water.
In the developed world women gained the right to vote about a century ago and respect for both household chores as work and working outside the home about a half a century ago. It shouldn’t be surprising that many developing economies need time to catch up to these views on women’s rights and productivity.
Aljira Santos, age 60, of Timor-Leste (Source: Water For Women report; Water Aid; Tom Greenwood)
Jenny Lamb of Oxfam, one of the co-authors of the Water For Women report, says, “It’s about making women’s invisibility visible.”
What the Water For Women report rightly posits is how women in the developing world can get the building blocks in place to attain broad rights for their gender and not only for exceptional women from their countries. The report highlights data with impact:
- The amount of time women in India spend collecting water is equal to US $160 million in lost working income.
- The number of hours women spend collecting water in sub-Saharan Africa is equal to all the hours all people work for income in France.
- School enrollment for girls improves by 15% when clean water and working toilets become locally accessible without a long walk.
- Data suggests US $1 spent on water and sanitation creates a US $4 economic return.
This last point is the broader rethink needed on the productivity and profitability of investment in water and sanitation. When we create more productive people, we ultimately create people who can both do more and consume more. Corporations like Unilever are smart to get involved at the ground level of projects that ultimately will enhance consumer spending on their own products. The world will become more equitable, more peaceful and more profitable with sustained investments into water and sanitation.
Global spending on military solutions to keep peace is always high while investment in water and sanitation infrastructure that can help create a more productive and peaceful world is often too low or ignored. The worst example of this may be Pakistan which now has the world’s fastest growing nuclear arsenal while more than half the population – 93.9 million people – do not have decent access to good sanitation.
A useful metric of a country’s successful development going forward may become data on the number of plumbers per capita.
There are other technologies being brought into play as well. Anu Sridharan, CEO of NextDrop and a co-author of the report, is focused not only on delivering water to women, but on letting them know when water is and is not available. NextDrop sends text message alerts to subscribers when they can come to one of its water centers for water, and also when there is no water to be had at the center. The latter saves women from time spent on a useless journey.
Images and stories of how women can gain or lose opportunities - for education, work, family and community time - through water access (Source: Water For Women report; 1st photo Unilever Vox Pops/ Ifeanyi Iloduba; 2nd picture Water Aid/ Anna Kari; 3rd photo Water Aid/ Behailu Shiferaw; 4th photo NextDrop/ Aseem Khan)
Ms. Sridharan says:
The idea is simple. Some people have information about water, and others need it. What NextDrop does is serve as a medium, simply crowdsourcing this data and distributing it on the most ubiquitous device – your cell phone… A human powered smart grid.
Access to clean water and sanitation changes lives. There are many solution ideas at work and being piloted. It is important for people to understand that the ripple effect of benefits reverberates not only to the direct beneficiaries but is a measurable benefit to the world. As Jane Wilbur of WaterAid, a co-author of the report, points out, a lack of access to clean water and sanitation means “wasted time and opportunity… disease and indignity.” Or, alternatively, a US $4 economic return for each US $1 invested in solving the problem. A four-to-one payoff is a good return on investment.
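Since “a $4 return on $1” is often quoted loosely as a percentage, here is a quick sketch separating the benefit-cost ratio from the net return. The figures come from the report; the variable names are mine.

```python
invested = 1.0          # US $ spent on water and sanitation
economic_return = 4.0   # US $ of economic benefit, per the report

benefit_cost_ratio = economic_return / invested               # 4.0, i.e. 4:1
net_roi_pct = (economic_return - invested) / invested * 100   # 300.0% net

print(f"{benefit_cost_ratio:.0f}:1 benefit-cost ratio = {net_roi_pct:.0f}% net ROI")
```

Either way you state it, few asset classes promise anything comparable.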
|
e5a144660674b098c9cc4099ffa8c8f6 | https://www.forbes.com/sites/jonstein/2013/01/29/2013-the-year-your-grandpa-becomes-more-tech-savvy-than-you/ | It's Stupid and Insulting to Pitch Baby Boomers As Tech Novices | It's Stupid and Insulting to Pitch Baby Boomers As Tech Novices
“It's stupid and insulting to pitch baby boomers as tech novices,” technology journalist Larry Magid declared after receiving an email from a PR rep touting a touch screen computer for parents or grandparents who want to “get on board with modern technology”.
Magid points out that he – along with Bill Gates, Steve Wozniak and the late Steve Jobs – is a baby boomer, the generation that grew up with early technology.
“Many of us used CP/M, DOS or even Unix long before Macs and PCs had graphical user interfaces. We were the ones who had to know how to use escape codes to get our printers to work and sometimes wound up building our own PCs.”
Despite popular opinion, baby boomers are not Luddites, but active users and shapers of technology. According to Forrester Research's annual benchmark tech study, baby boomers dominate the market in terms of money spent on tech:
"It's actually a myth that baby boomers aren't into technology. They represent 25% of the population, but they consume 40% [in total dollars spent] of it,"
said Patricia McDonough, senior VP-analysis at Nielsen Co.
A recent report from Pew's Internet and American Life Project revealed that older generations, particularly baby boomers, are catching up to younger generations in their use of technology.
While they weren’t necessarily the earliest adopters of the Internet, the first users to have mobile connectivity, or the first ones to sign up for social networking, baby boomers’ growth rate in adoption and use of information and communications technology is higher than – and in some cases surpasses – that of younger generations. Other studies have found that older individuals tend to be more thoughtful about their purchases, buying for a specific use as opposed to being motivated by being the first in the know.
This is my experience at Betterment, the investing startup I founded in 2008. On a recent trip to Florida to speak at an AAII event, I saw this trend of tech-savvy boomers firsthand. Many attendees were already using our product to invest, and all were comfortable managing their money from their iPads. They challenged my thinking about designing products for their needs, and all professed fierce loyalty to brands they believe “get it right.”
Betterment customers above the age of 50 are twice as likely to hold an account in the “Best” pricing plan (a 0.15 percent management fee for balances above $100K). Customers in the Best plan maintain balances 40 times those of other accounts; they are more likely to have an Individual Retirement Account with us, have more goals they're saving for, and on average have higher net worth and income. They tend to be urban-based, with careers in technology, healthcare, or education.
As general trends go, there is nothing surprising here. Urban-based professionals with decades of experience tend to have more assets, more retirement planning, and higher incomes. It makes sense!
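For readers who like to see the pricing rule as arithmetic, here is a hedged sketch: only the 0.15 percent rate above $100K appears in the text above; the lower-balance rate in this example is a made-up placeholder, not Betterment's actual schedule.

```python
# Illustrative fee arithmetic. Only the 0.15% rate for balances above
# $100K comes from the text above; the 0.35% tier below is a hypothetical
# placeholder, not Betterment's actual pricing schedule.

def annual_management_fee(balance):
    """Yearly advisory fee in dollars for a given account balance."""
    rate = 0.0015 if balance > 100_000 else 0.0035  # second tier is assumed
    return balance * rate

print(annual_management_fee(250_000))  # 375.0 -> 0.15% of $250K per year
```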
No, the surprise here is not in the usage of our product but in outsiders' assumptions about it: non-users who describe us as being for “beginner investors,” or - my favorite - a reporter on the investing beat who liked our product so much she was going to encourage her children to open accounts. Out of touch much?
There’s that misguided assumption again, that only the young would want to invest using technology. Millennials make great customers, don’t get me wrong. These individuals were our first adopters and continue to form a large segment of our customer base.
Yet, it’s the more seasoned investors that truly understand the value of our platform – our efficient and low cost management, the behavioral guardrails that keep them investing in a down market, and our sophisticated system that reinvests their dividends and auto-rebalances their accounts in a tax efficient way.
So when it comes to investing in future customers, I’m long on the 50+ demographic.
|
5915f42500e6f5e8af6ee64542c7f9d9 | https://www.forbes.com/sites/jonstein/2013/02/05/innovation-is-not-dead-just-different/ | Innovation Is Not Dead, Just Different | Innovation Is Not Dead, Just Different
If the cover of The Economist is any indication of public sentiment – and it often is – many are speculating that innovation is dead. Two articles – “Has the ideas machine broken down?” and “The great innovation debate” – strike a note of doom and gloom. The cover line “Will we ever invent anything useful again?” suggests the hope we placed in technology was misguided and that, in fact, we have not produced anything as useful as the toilet in decades.
The Economist Cover: The great innovation debate
Peter Thiel, whom many would call an innovation expert (founder of PayPal, one of the earliest investors in Facebook, venture capitalist, and entrepreneur), dismissively says of this generation’s inventors: “We wanted flying cars, instead we got 140 characters.”
He may have a point about some industries. The advances in transportation speed achieved in the 1970s have not been matched since.
And yet… is our vision of progress limited to the hopes of early science fiction films? Is a flying car the answer to America’s economic problems?
The New Innovation Goal Is Efficiency
I think in general, there’s a shift toward innovation in services and improved efficiency. Efficiency comes in many forms: speed, health, cost reduction, error reduction. We should look to make better what has come before. Seek betterment, and economic growth will follow.
A picture of the Maverick flying car in development by I-TEC. (Photo credit: Wikipedia)
The Economist cover story concedes that while growth is not at past levels, we are still in the early days of this new technological phase; technology's true impact may not yet be evident. After all, it took decades before global trade benefited from steel and diesel, and developments from the late 19th century drove growth well into the 1970s.
It takes time to fully realize the potential of various innovations.
Bad Use Of Good Technology
Take financial services technology, for example. The innovation in trading has been enormous. Financial institutions have gotten so good at trading that they can catch a buying trend milliseconds before it takes hold and profit off the inflated price. Individuals can now partake in stock picking and active trading alongside the pros. Costs are lower across all investing platforms – index fund fees are down, deeper liquidity has narrowed bid-ask spreads, and commission and trading costs have dropped significantly. Investing in the stock market is more accessible to more people than ever before.
But as individuals, we are no better off. High-frequency trading has introduced more volatility into the market; individuals make poor choices because they are ill-informed or under stress; returns suffer because people trade too much or try to time the market.
This bad use of good technology compelled us to found Betterment, our investing service focused on helping people reach their goals as efficiently as possible. We invest every customer in a broadly diversified portfolio; our technology provides advice tailored to each individual's goals; and we automate smart financial behavior. Technology diversifies every dollar that comes in, rebalances every account in a tax-efficient way, and reinvests dividends - and it keeps doing so no matter what the markets do.
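To make “rebalances every account” concrete, here is a minimal sketch of threshold-based rebalancing - the general technique, not Betterment's actual engine. The two-asset target and the 5 percent drift trigger are assumptions chosen for the example:

```python
# Minimal sketch of threshold rebalancing -- the generic technique,
# not Betterment's engine. The 70/30 target and 5% drift trigger
# are assumptions chosen for illustration.

TARGET = {"stocks": 0.70, "bonds": 0.30}  # desired allocation
DRIFT_TRIGGER = 0.05                       # rebalance once any asset drifts this far

def rebalance(holdings):
    """Return dollar trades (buy > 0, sell < 0) that restore the target mix."""
    total = sum(holdings.values())
    drift = {asset: holdings[asset] / total - TARGET[asset] for asset in TARGET}
    if max(abs(d) for d in drift.values()) < DRIFT_TRIGGER:
        return {}                          # close enough: trade nothing, avoid churn
    return {asset: TARGET[asset] * total - holdings[asset] for asset in TARGET}

# After a stock rally the account has drifted to 80/20:
print(rebalance({"stocks": 80_000, "bonds": 20_000}))
# {'stocks': -10000.0, 'bonds': 10000.0} -- sell stocks, buy bonds
```

Acting automatically, and only when drift crosses the trigger, is what keeps the behavior disciplined and the trading minimal.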
It’s no flying car, but it’ll help you save for one when it comes on the market.
|
e48abd6b72f1045b0484acd9b18c2fc3 | https://www.forbes.com/sites/jonstein/2013/04/07/why-i-choose-software-over-a-human-for-investment-advice/ | Why I Choose Software Over A Human For Investment Advice | Why I Choose Software Over A Human For Investment Advice
We have conversations with software all the time. We ask our navigation systems for driving routes, we ask Hipmunk for recommendations on the next vacation, we ask Amazon to find us the best bargain, and we ask Siri anything we like.
Software that gives us advice is nothing surprising. So why shouldn't we expect to use advice software for our investments? I sometimes hear the opinion that a human advisor would manage my investments better, and I can see why some people might prefer that interaction. But I prefer to get my advice from software. (Full disclosure: I am the founder of the online investment advice service Betterment.)
Software doesn’t sleep, go on vacation, or retire. It doesn’t get scared by a volatile period or chase the latest trend. And it doesn’t have conflicts of interest. The explicitness of technology makes it more transparent and fair (for software to give conflicted advice, that conflict would have to be coded explicitly, which would be intentional, traceable, and probably illegal).
Of course, software is written by humans, who can still make typos or, worse, errors of judgment. But good software has typically been refined, tested, and proven over time. It is resilient, well documented, and easily supervised.
Software can provide advice in real time, when you are most likely to be reviewing your finances, rather than only during business hours or whenever suits the advisor. Software doesn't seek a promotion or a raise; it simply continues on its coded path regardless of external events. Software can combine the advice of several minds into one. And software is more likely to produce investment recommendations appropriate for you personally, rather than whatever happens to be familiar from an advisor's past experience.
Smart, reliable, user-friendly technology that is customized to the consumer is the new normal. I am often asked whether ALL our financial conversations will someday be with technology: “Will investment advisors go the way of travel agents, door-to-door salesmen, or paper boys?”
I don’t know. It's possible that computer-generated investment advice is only a personal preference - that some people will always prefer human advisors. And it's clear that technology doesn't cover every conceivable personal situation, not yet anyway. Tax advice and estate planning are two examples of financial advice where an advisor or planner can add real value.
But if we want to be able to offer everyone an efficient way to access great investment advice, then a computer-based model is really the only way this can be achieved. It’s efficient, scalable, and affordable. After all, an algorithm has no need to generate income to support itself and its dependents.
While some people may always want the human touch, I prefer advice and management without having to deal with a human advisor. I like the reliability and the accessibility, and to me the best advice comes from well-built software. (I think most good advisors would agree; they use software to make their recommendations, too.)
|