4394d1be01b02ad83a8a05d8c1a80ee8
https://www.forbes.com/sites/justingerdes/2012/03/14/san-francisco-bay-area-clean-energy-roadmap-would-slash-emissions-push-zero-net-energy-buildings/
San Francisco Bay Area Clean Energy Roadmap Would Slash Emissions, Push Zero Net Energy Buildings
San Francisco Bay Area Clean Energy Roadmap Would Slash Emissions, Push Zero Net Energy Buildings

California is rightfully lauded for its world-leading energy policy. Yet I often hear from business and political leaders that the state could do much more. A new report published by Pacific Environment presents a vision for the San Francisco Bay Area in which available energy technologies and policy tools are fully implemented. The roadmap finds that local clean energy, including 4,000 megawatts of solar PV, and a focus on zero net energy buildings could slash greenhouse gas emissions from the electricity sector by more than 60% by 2020.

Bay Area Smart Energy 2020 (BASE 2020) was written by Bill Powers, a San Diego-based energy consultant. Powers is the author of a similar report, San Diego Smart Energy 2020, released in 2007, which argued that aggressive deployment of local renewable energy and combined heat and power could make superfluous the controversial Sunrise Powerlink, now under construction in eastern San Diego County.

At the March 12 report launch, Powers called BASE 2020 a “distributed energy strategic plan” for the nine-county San Francisco Bay Area. “The primary focus is on energy efficiency and rooftop solar in the urban core,” he said, “with some combined heat and power, and a fair amount of emphasis on cutting down cooling loads in the summertime to minimize the drive to build more stuff to meet our energy needs.”

The BASE 2020 plan is based, as Powers readily acknowledged, on existing California policy. The difference is that Powers envisions a much more aggressive adoption of clean energy and energy efficiency than do state policymakers and the major utilities. “The framework,” he said, “is the California Long Term Energy Efficiency Strategic Plan [PDF].” “I am not in the habit of complimenting the California Public Utilities Commission [CPUC] on the quality of their documents,” he quipped, “but this is an excellent document.”

A new Pacific Environment report provides a pathway to slash San Francisco Bay Area electricity emissions by 60% through local clean energy and aggressive energy efficiency. Credit: Metropolitan Transportation Commission

“In concept and theory, it is what we will do,” he said, “but I can guarantee you that unless everybody in the room is making a lot of noise about it all the time, it’s not what we will do.” That plan assumes, as does Powers, that zero net energy construction will become the norm in new and, in time, existing buildings. BASE 2020 calls for at least 25% of Bay Area homes and commercial buildings to be zero net energy by 2020. To reach that goal, Powers suggests four strategies, from a summary at the Pacific Environment website:

1. Solar Photovoltaics: Nearly 4,000 MW of solar energy is installed on rooftops, over parking lots and in Bay Area brownfields.
2. Energy Efficiency: Energy usage is reduced by 25 to 30 percent in Bay Area buildings and in agricultural operations.
3. Air Conditioning: Incentives will encourage upgrades of air conditioning, leading to a 50 percent reduction in energy usage.
4. Energy Storage: BASE 2020 calls for 200 MW of energy storage in the Bay Area to be located within buildings or as community energy storage projects.

These strategies would be complemented by new clean energy projects and three financing and policy tools, the latter, Powers writes in the executive summary, being the “primary vehicles to achieve the reduction in GHG emissions”:
1. Combined Heat and Power: BASE 2020 proposes 840 MW of new combined heat and power.
2. Geothermal: BASE 2020 recommends upgrading geothermal operations at The Geysers in Sonoma County, which would add 300 megawatts of capacity.
3. Wind: The BASE 2020 plan also includes 300 MW of new wind at the Solano wind complex. … BASE 2020 also calls for a 400 MW battery at the Solano wind complex to smooth out the intermittent wind power.

Financing: There are several options for financing these projects that would need to be adopted in the Bay Area. These include “clean energy payments,” where the utility pays the building owner for excess power generated [more commonly known as a feed-in tariff]; the Property Assessed Clean Energy (PACE) Program, where projects are paid for as part of a property tax assessment; and Community Choice Aggregation [CCA], which allows communities to market and sell energy to their residents independently of PG&E.

BASE 2020 does not rely on technological leaps or impossible renewable energy deployment goals. Powers calls, for instance, for 200 MW of energy storage to be integrated with residential and commercial PV systems. As I reported at this blog last month, Arizona’s largest utility just launched a pilot to test that very concept. Powers envisions adding 4 GW of distributed solar to the Bay Area grid by 2020; Germany has installed nearly twice as much solar (7.5 GW) in each of the last two years alone.

At the same time, it’s not unreasonable to say that parts of the plan are audaciously ambitious. Take just one of the report’s core concepts: zero net energy buildings. The California Long Term Energy Efficiency Strategic Plan states that new homes and commercial buildings must be zero net energy by 2020 and 2030, respectively. BASE 2020 calls for all new Bay Area buildings to be zero net energy by 2015, and for the conversion of 25% of existing buildings to meet the same threshold by 2020. Last month, I wrote about the obstacles slowing efficiency gains in buildings already standing – where even the City of San Francisco is struggling to sell energy efficiency to its affluent, climate-conscious homeowners.

The plan also assumes the revival of property assessed clean energy (PACE) programs for homeowners. That outcome appears increasingly likely, with a recent court victory and the opening of a new federal rulemaking, but is not assured. BASE 2020 also advocates for the adoption of German-style feed-in tariffs to encourage the adoption of solar. “It’s been very difficult to make traction at the PUC to get a feed-in tariff that works,” Powers said. “If, for institutional reasons, the PUC just can’t bring themselves to establish a tariff that works to put rooftop solar on homes,” he said, the State of California could administer a feed-in tariff program itself. Powers suggests that the Department of General Services could buy renewable energy at set rates and require the state’s investor-owned utilities, PG&E in the Bay Area, to purchase tranches of clean electricity.

After Bill Powers presented the highlights, Pacific Environment convened a panel of energy experts to reflect on the report recommendations. Renewable Funding President Cisco DeVries, credited with coming up with the idea for PACE as chief of staff to Berkeley Mayor Tom Bates in 2007, was guardedly optimistic. “We got our start fighting things. Saying no,” he said. “Now, we’re fighting for what we think is the yes. 
That is a remarkable transformation.” “The challenge here is that we are de-funding the things that need to be funded to make this happen. I don’t just mean the subsidies and incentives. We can see the end of federal subsidies. State subsidies are winding down for solar. We have an aging grid. Our job now is to be on offense because over the next few years circumstances will intervene.” “On its own trajectory, it’s probably not going to get there,” DeVries said.

Yes, parts of BASE 2020 are audaciously ambitious. But we need audaciously ambitious plans, if only to shatter the conventional wisdom. Writer and futurist Alex Steffen tweeted yesterday that “if ‘everyone knows’ a solution's ‘politically impossible’ no one thinks it's credible, whatever its actual merits or realism.” I think he’s right. Roadmaps like BASE 2020 offer political leaders a vision of the clean energy future that could be, if they choose to make it so.
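For a quick sense of scale, here is a back-of-the-envelope tally of the new hardware BASE 2020 calls for, a minimal sketch using only the figures cited above (the grouping into generation versus storage is mine, not the report's):

```python
# Back-of-the-envelope tally of BASE 2020's proposed additions (MW),
# using the figures cited in the article.
new_generation_mw = {
    "rooftop/parking/brownfield solar PV": 4000,
    "combined heat and power": 840,
    "geothermal upgrades at The Geysers": 300,
    "wind at the Solano complex": 300,
}
new_storage_mw = {
    "in-building and community storage": 200,
    "battery at the Solano wind complex": 400,
}

print(f"New generation: {sum(new_generation_mw.values()):,} MW")  # 5,440 MW
print(f"New storage:    {sum(new_storage_mw.values()):,} MW")     # 600 MW
```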
97f10617463ac2f63be2518e630d1ce5
https://www.forbes.com/sites/justingerdes/2012/04/27/greater-miami-to-launch-550-million-energy-retrofit-fund-billions-in-unfunded-projects-wait-nationwide/
Greater Miami To Launch $550 Million Energy Retrofit Fund; Billions In Unfunded Projects Wait Nationwide
Greater Miami To Launch $550 Million Energy Retrofit Fund; Billions In Unfunded Projects Wait Nationwide

When I last dedicated a post to property assessed clean energy (PACE) programs, in January, I described how, with the residential PACE market stalled, project developers and financiers had collaborated to establish a commercial market. On April 12, a vote by the Miami City Commission authorizing the city to join the South Florida Green Corridor District brought one of the commercial PACE collaborations much closer to fruition. The PACE Commercial Consortium, spearheaded by the Carbon War Room and backed by Lockheed Martin, Barclays Capital, and Ygrene Energy Fund, announced, in September 2011, its intent to fund $550 million worth of energy retrofits in Miami-Dade County, Florida, and $100 million more in Sacramento, California.

PACE financing enables property owners to take out a loan, usually via city- or state-organized bonds, to pay for energy efficiency upgrades or onsite renewable energy. Loans are repaid, typically over 20 years, through an annual supplemental property tax assessment.

In a recent interview, I spoke with Ygrene Energy Fund Chairman Dennis Hunter and President Dan Schaefer about the Green Corridor District launch, and the state of the industry. The Green Corridor District will be the “largest district in the state of Florida to provide energy efficiency financing,” Schaefer told me. Miami was the sixth city to join the district. Owing to a recent change to Florida law, the district won’t be able to fund projects until after July 1. Schaefer said the delay will be worth the wait. “Part of that law change,” he said, “makes it much more advantageous for cities to form districts such as the Green Corridor District in the state of Florida.” Member cities are now waiting for the go-ahead from the seventh and final member, the City of Coral Gables. “All of the [Green Corridor] cities have to be on board before they can start the district,” explained Schaefer. “The seventh city – the district model is seven cities under the law – will be Coral Gables.”

In July, seven cities in Miami-Dade County will launch the Green Corridor District. Over the next five years, the district is projected to fund $550 million worth of energy retrofits. Energy service companies say billions of dollars of unfunded projects wait nationwide. Credit: Justin Gerdes

Schaefer said the vote in Coral Gables is scheduled for early May. “If the vote is affirmative, the district will form, and then Ygrene and its partners can fund projects.” At the launch, Miami will authorize only commercial PACE projects; the other Green Corridor cities are likely to support residential as well as commercial PACE retrofits.

Building the project pipeline

Schaefer said Ygrene’s Florida subsidiary, Ygrene Energy Fund Florida, administrator of the Green Corridor program, is making preparations now to ensure a successful launch in July. “We’re in the final phase of our launch campaign. We are going to go into the cities to train, bring on board, the certified contractors, in order to start building the pipeline.” “We have pretty active engagement going with the Latin Builders Association,” he went on. “We already have over 100 contractors signed up. Our initial marketing outreach was to the construction community, so we have quite a large pool of contractors onboard. 
They’re going to go through the training programs so they can learn to use our systems, and learn how to position the financing as part of their sales program.” “We’re really excited about how quickly this will launch,” Schaefer said. Based on conversations with Green Corridor energy service company (ESCO) partners Lockheed Martin and Trane, he said, it’s clear that caulk-gun-ready projects are waiting to be funded. “Lockheed Martin, the first project they are looking at is actually quite large; it’s about $10 million. Trane has something like $20 to $30 million of projects that are not currently fundable, but will be fundable under the Ygrene program.”

Billions of dollars of unfunded projects waiting

Dennis Hunter said Ygrene Energy Fund is working with cities and states across the country to authorize PACE programs, or to rewrite existing law to establish more advantageous conditions for PACE programs. Atlanta, Georgia, will vote on a PACE program in May, Hunter said, and lawmakers in Louisiana, Connecticut, Massachusetts, Minnesota, Wisconsin, and Michigan are working on PACE enabling legislation. Hunter was especially optimistic about the potential for projects in Michigan, where Ygrene has met with representatives from the major automakers. “Car companies there are paying over $1 billion a year in energy costs. They feel they can save half of that energy cost, but the return on investment isn’t high enough to merit their capital expenditure. They want a higher return. PACE programs work really well for them because it’s off-balance sheet,” he said.

“Because of our affiliation with the Carbon War Room and PACE Commercial Consortium, we’ve had quite a bit of access to and high-level conversations with a number of the large ESCOs,” said Dan Schaefer. “They’ve indicated to us there’s an aggregate pipeline – between companies like Lockheed Martin, Chevron Energy Solutions, Johnson Controls, Trane, Honeywell, and Siemens – of billions of dollars of unfunded projects.” “The demand is so out there,” agreed Dennis Hunter. “The cities and counties are having such a difficult time making the political decision to go ahead. We don’t really know what is holding them back. It’s just an enormous pipeline of demand for money out there in the market. It’s so hard to get for these middle size and small companies.”

Hunter cited an ECONorthwest study (PDF), commissioned by the advocacy group PACENow, which assessed the economic impact of PACE programs. “This money operates at a two-and-a-half times multiplier. For every billion dollars extended, it creates about 15,000 jobs. It would be a very big contributor to the jobs market in the U.S.,” he said. Ygrene projects 8,250 jobs will be generated by energy retrofits in the Green Corridor District over the next five years.
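Hunter's multiplier is easy to check against Ygrene's own jobs projection for the Green Corridor District. A minimal sketch using only the figures quoted above:

```python
# ECONorthwest figure cited by Hunter: ~15,000 jobs per $1 billion
# of PACE financing extended.
JOBS_PER_BILLION = 15_000

def projected_jobs(financing_dollars: float) -> float:
    """Jobs implied by the ECONorthwest multiplier."""
    return financing_dollars / 1e9 * JOBS_PER_BILLION

# Green Corridor District: $550 million over five years.
print(projected_jobs(550e6))  # 8250.0 -- matching Ygrene's 8,250-job projection
```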
fa030d3e05d6b9007f461a629629f875
https://www.forbes.com/sites/justingerdes/2012/06/18/denmark-pushes-through-first-ever-eu-energy-efficiency-law/
Denmark Pushes Through First-Ever EU Energy Efficiency Law
Denmark Pushes Through First-Ever EU Energy Efficiency Law

With the clock running out on its EU presidency, Denmark achieved on June 13 one of the chief aims of its six months in control of the European agenda. Negotiators from the Danish presidency, the European Commission, and the European Parliament agreed to the EU’s first-ever energy efficiency law, a package of measures estimated to reduce the bloc’s energy consumption by 17% by 2020. The intent of the Energy Efficiency Directive was to codify a non-binding target, agreed to by European leaders in 2007, to reduce EU energy consumption by 20% by 2020. Existing policies were expected to deliver only 9% energy savings by 2020. Lobbying by EU member states weakened the European Commission and European Parliament’s draft proposals, and the negotiated agreement will likely fail to achieve the 20% target.

Agreement was in doubt

The Danish presidency’s lead negotiator for the Energy Efficiency Directive is Martin Lidegaard, Denmark’s Minister for Climate, Energy, and Building. After the deal was announced, Lidegaard sounded like a boxer happy to celebrate a split-decision victory after trading punches for 12 rounds. “It’s only 17% because that was possible to get. We fought like lions. We started at 13%, and now we have 17%, and that is actually something we are proud of,” he told EurActiv.com. Lidegaard presaged the difficult months of negotiations that lay ahead when I interviewed him in Copenhagen on March 29. The focus of our conversation was the recently announced Denmark Energy Agreement, which I wrote about at this blog a few days later. But I also asked about the draft EU energy efficiency law, which had already ignited vigorous debate among the EU’s 27 member states. “We had some very tough negotiations about the Danish energy agreement, but they are nothing compared to the discussions we have at the European level,” Lidegaard told me.

On June 13, EU negotiators reached agreement on the Energy Efficiency Directive, a package of measures to reduce the bloc's energy consumption by 17% by 2020. The new law was a priority of the Danish EU presidency. Credit: Justin Gerdes

“There seems to be agreement that we should try to reach our common target of 20% energy efficiency. But when it comes to how we should deliver, there is very big disagreement, unfortunately.” “If I should try to be a little realistic,” he went on, “as things are standing now it’s going to be very difficult to achieve the 20% target. I have to urge my colleagues to think more about all of the gains with energy efficiency, instead of all the barriers. I hope we will be able to land a fairly good directive, but I have to say that is difficult. Right now, it’s hard to get the mandate.”

That was late March; by May, the negotiations were close to collapse, with energy companies and member states, notably the United Kingdom, Germany, France, the Netherlands, and Spain, accused of blocking a key provision of the Energy Efficiency Directive. On June 3, the Guardian’s Fiona Harvey reported that the UK, with the backing of the nation’s six largest energy firms, was lobbying to weaken several mandatory requirements in the energy efficiency law.

What the Energy Efficiency Directive delivers

Not to be lost amid the hand-wringing over the agreement reached by European policymakers on June 13 is that the deal promises to make legally binding what had been voluntary. 
Once the Energy Efficiency Directive becomes law – the European Parliament votes on the deal in September, a vote expected to be a formality – EU member states must collectively reduce the bloc’s energy consumption by 17% by the end of the decade.

The key provision of the energy efficiency law I alluded to above is known as Article 6. In the European Commission’s original proposal, the provision would have required utilities to deliver energy savings equal to 1.5% of annual sales – similar to what is required of California’s largest utilities. The language agreed to on June 13 includes so-called “flexibility measures,” which are capped at 25% of the annual energy savings target. After accounting for these measures, utilities will be required to deliver annual energy savings of 1.125% – that is, 75% of the originally proposed 1.5%. The flexibility measures include allowing member states to count early and future action toward their targets and an exemption for energy used by industries obligated to cut emissions under the EU Emissions Trading Scheme. Here again, David Cameron’s government was singled out for its role in weakening the law. EurActiv.com reported that at the last minute UK negotiators insisted on a provision that exempts countries that have already adopted energy savings schemes for utilities (Denmark, France, Italy, and the UK) from being bound by Article 6.

Also weakened was a provision that would have required 3% of public buildings to receive energy retrofits each year. The language agreed to in the final deal applies only to “central government-owned and occupied buildings.” According to EurActiv.com, in Germany, where most public buildings are owned not by the federal government but by the regions, the rule will apply to just 37 buildings. And Reuters reported that the UK added defense and military buildings to a list of exempted buildings – the opposite of the practice of its ally across the Atlantic, where the U.S. Department of Defense is among the nation’s most committed carbon cutters.

The overall effect of member state lobbying was to weaken what had been a much more ambitious energy efficiency law. Nonetheless, a few compromise measures strengthened the final Energy Efficiency Directive. A mandate will require the European Commission to review the effectiveness of the directive in 2016. A measure added in the negotiations’ final days – reportedly traded for the weakened public buildings retrofit mandate – obliges EU member states to prepare roadmaps that chart a course to boost the energy efficiency of existing commercial, residential, and public buildings by 2050.

The Danish story

During our March interview, Martin Lidegaard expressed puzzlement over why some member states harbored such fierce objections to mandatory energy efficiency requirements – experience with such measures in Denmark taught him that his fellow Europeans had nothing to fear. “The Danish story is that we are going to introduce now into the European market something we have been doing in the Danish market for five years. When we introduced a mandatory scheme, both energy companies and industry were dead against it, just as they are at the European level,” Lidegaard told me. “The obligation is on the energy companies,” he went on, “not on industry. The energy companies have to deliver a certain amount of savings, and then they go out and compete on how cheaply they can get those savings from industry.” “That’s exactly the same system which the [European] Commission has proposed for the rest of Europe,” said Lidegaard. 
“The really extraordinary thing is that after four years both the energy companies and industry came to the [Danish] government and said, ‘Please double the mandate.’ Why? Because they can gain so much money from the savings. It’s a very market conformed way to do it.” “We have extremely good experience with this. And we tried to tell this to our good colleagues in Europe. Of course, our experience is very limited. I understand that,” he said. “You have different strategies in different countries. A lot of European countries have done a lot on energy efficiency, and they don’t see this system as the perfect system for their reality. I do respect that; the problem being that what we need to decide is something additional to what we are already doing.” “We need flexibility, but we don’t need flexibility on the target.”

In the end, Lidegaard and his allies were forced to accept flexibility both in how the energy efficiency target was to be reached and in the target itself. The EU will now be required to nearly double its energy savings by 2020. My guess is that by the end of the decade the member states that blocked agreement on a stronger Energy Efficiency Directive will come to learn, just as erstwhile opponents did in Denmark, that saving energy is good business.
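The headline numbers in the deal reduce to two lines of arithmetic, sketched here with the figures reported above:

```python
# Article 6: utilities' annual savings obligation after "flexibility measures".
proposed_annual_savings = 0.015   # 1.5% of annual sales, as originally proposed
flexibility_cap = 0.25            # measures may offset up to 25% of the target
effective_obligation = proposed_annual_savings * (1 - flexibility_cap)
print(f"{effective_obligation:.4%}")  # 1.1250% -- the figure in the final deal

# Bloc-wide savings by 2020: existing policies ~9%, the new directive 17%.
print(17 / 9)  # ~1.89 -- why the EU must "nearly double" its energy savings
```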
a90326b6924eb0a3159b5314f9acf250
https://www.forbes.com/sites/justingerdes/2012/06/30/global-trial-shows-led-street-lighting-delivers-up-to-85-energy-savings/
Global Trial Shows LED Street Lighting Delivers Up To 85% Energy Savings
Global Trial Shows LED Street Lighting Delivers Up To 85% Energy Savings

Results from a global trial of light-emitting diode (LED) street lights confirm that the fixtures can deliver electricity savings of up to 85% over incumbent technologies. The two-and-a-half-year pilot, called LightSavers, tested 533 LED lamps in 15 trials in 12 cities, including New York, London, Hong Kong, Toronto, and Sydney. Findings from the trials are presented in a report co-released by The Climate Group, electronics giant Philips, and HSBC earlier this month on the sidelines of the Rio+20 summit. The Climate Group launched LightSavers in 2009, supported by the HSBC Climate Partnership, with the goal to accelerate the market adoption of outdoor LED lighting and smart-lighting controls. Key findings from the report (PDF), Lighting the Clean Revolution: The Rise of LED Street Lighting and What it Means for Cities, include:

- LEDs achieve the expected 50 to 70% energy savings, and reach up to 80% savings when coupled with smart controls. [Energy savings in the trials vary from 18% to 85%, with 20 out of 27 products achieving savings of 50% or more, and ten showing savings of 70% or more.]
- Surveys in Kolkata, London, Sydney, and Toronto indicated that between 68% and 90% of respondents endorsed city-wide rollout of LEDs. Benefits highlighted included improved safety and visibility.
- The lifespan of the LED lighting trialed ranges from 50,000 to 100,000 hours, indicating a high return on investment.
- The ‘catastrophic’ failure rate of LED products over 6,000 hours is around 1%, compared with, for example, up to 10% for ceramic metal halide fixtures over a similar time period.
- The Climate Group and Philips are calling for an international low carbon lighting standard to be created and implemented, ensuring that citizens worldwide have access to energy-efficient outdoor lighting.

“We conclude that LEDs are ready to be brought to scale in outdoor applications. The independent and verifiable results from the LightSavers trials and accompanying public surveys give compelling evidence that many commercially-available, outdoor LED products offer high quality light, durability, and significant electricity savings in the range of 50 to 70%,” wrote Climate Group CEO Mark Kenber in the report’s foreword.

Results from a global trial of LED street lights confirm that the fixtures can deliver electricity savings of up to 85% over incumbent technologies. Credit: City of Raleigh, North Carolina

He added: “High capital cost and a dearth of effective financing approaches continue to be barriers to market maturity. But these will diminish as investment flows into companies making quality products; as LED and smart control device prices continue to fall; and as innovations spread in project financing and procurement in cities like Birmingham, Guangzhou and Los Angeles.”

In California, to cite another example, support for LED street light project financing has come from the California Energy Commission (CEC) and the U.S. Department of Energy. In January, I reported at this blog that 10 California cities, several of them quite small, had used funding provided by the American Recovery and Reinvestment Act (ARRA) to undertake LED street lighting retrofit projects. Since I published that post, the CEC has announced that about a dozen more California cities have launched LED street lighting retrofit projects courtesy of the same ARRA-funded Energy Efficiency Conservation Block Grant (EECBG) program. 
So confident are the report partners in the potential of LED lighting that they want LEDs to become the global lighting standard. “All new public lighting – both street lighting and in public buildings – should be LED by 2015, with the aim of all public lighting being LED by 2020,” said the Climate Group’s Kenber in a statement.

The authors conclude: “LED outdoor luminaires have reached maturity in terms of their performance. City lighting managers from across the world have independently verified that LEDs can live up to their promise of exceptional performance, energy efficiency, and public approval, with indicators pointing towards stabilization in light output in many products after an initial period of volatility.”
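The percentage savings reported in the trials come from a simple comparison of old and new fixture loads. A minimal sketch; the wattages below are illustrative round numbers, not figures from the report:

```python
def lighting_savings(old_watts: float, led_watts: float,
                     dimming_factor: float = 1.0) -> float:
    """Fractional energy savings from swapping a fixture for an LED.

    dimming_factor < 1.0 models smart controls that dim the LED for part
    of the night (e.g. 0.75 = 25% further cut in average draw).
    """
    return 1 - (led_watts * dimming_factor) / old_watts

# Illustrative example: a 150 W high-pressure sodium lamp replaced by a 60 W LED.
print(f"{lighting_savings(150, 60):.0%}")        # 60% -- inside the 50-70% band
print(f"{lighting_savings(150, 60, 0.75):.0%}")  # 70% -- with smart controls
```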
49d2920d4df7a372371f57ca6618796b
https://www.forbes.com/sites/justingerdes/2012/09/30/bucking-a-trend-california-residential-clean-energy-pace-program-thrives/
Bucking A Trend, California Residential Clean Energy PACE Program Thrives
Bucking A Trend, California Residential Clean Energy PACE Program Thrives

Resistance from federal housing officials nearly scuttled the residential market for property assessed clean energy (PACE) financing, as cities and counties across the country shuttered programs. But a few jurisdictions bucked the trend. One outlier is the HERO (Home Energy Renovation Opportunity) program, which appears to be thriving in western Riverside County, California.

I have written frequently at this blog about PACE financing. My posts have focused, for instance, on the launch of commercial PACE programs in California, Connecticut, and South Florida. Such reporting is valuable, I hope, in keeping readers up to date on the nascent but fast-maturing PACE market. The HERO program, which launched in December 2011, can claim the distinction of being the rare PACE program that has successfully funded projects. Renovate America, which runs the HERO program for the Western Riverside Council of Governments (WRCOG), announced earlier this month the approval of more than $50 million for residential energy retrofits, with half of that funding approved since July. According to PACENow, an advocacy group, 2,000 homeowners have applied for HERO PACE financing; of these, 1,250 have met the program’s underwriting criteria. Some 300 residential projects worth $5 million have been completed. Two-thirds of the projects have funded energy efficiency measures; the most popular improvements: upgraded HVAC units (30%), windows and doors (24%), and insulation (6%). The balance of projects funded is largely rooftop photovoltaic systems.

The HERO program for the commercial sector has been slower to complete deals. PACENow reported that a $700,000 project was slated to close in August, with projects valued at $20 million more expected to close in the coming year. WRCOG Executive Director Rick Bishop told the Riverside Press-Enterprise last month that private lenders have committed $225 million to finance commercial HERO projects.

The design of PACE programs varies widely, but the basic premise is the same. A residential or commercial property owner is able to tap low-interest financing to fund energy efficiency and renewable energy upgrades without the burden of upfront costs. An audit is often undertaken to identify energy- and water-saving opportunities.

The HERO program, which launched in December 2011, can claim the distinction of being the rare PACE program that has successfully funded projects. Credit: WRCOG

Property owners wishing to pursue PACE financing then enter into an assessment contract, tied to the property, which releases the project funding; in the case of the HERO program, the homeowner enters into an agreement with Renovate America, which also funds the program. The contract stipulates that the property owner agrees to repay the cost of the improvements through an annual property tax assessment lasting up to 20 years. If a building is sold or transferred, the PACE lien remains tied to the property.

The HERO program relies on contractors to sell PACE financing to property owners. More than 400 contractors are now registered. WRCOG expects the HERO program will create up to 4,000 jobs in the region. Reporting by the Riverside Press-Enterprise’s Debra Gruszecki revealed that contractors are seeing a boost in business and adding jobs. “It’s had an extreme impact on business,” Douglas McMillan, general manager of Riverside-based California Showcase Construction, told Gruszecki. 
“I’d say 75 percent of our work is HERO at this point; business has been so good, we’ve hired more people — seven, so far.” Another small business owner, Mike Mohr of Mohr Power Solar, said: “HERO has definitely put more people to work. Our business is up 10 to 15 percent.” Mohr said the extra business has preserved jobs at his company that would have been lost.

The HERO program is able to close residential PACE deals, despite objections from the Federal Housing Finance Agency (FHFA), because it requires property owners to meet rigorous Department of Energy lending guidelines. The property owner, for instance, must be current with property taxes and mortgage payments and have at least 10% equity in their home.
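A minimal sketch of the underwriting screen described above. The function and its field names are hypothetical; the three tests are the ones named in the article (current property taxes, current mortgage, at least 10% home equity):

```python
def meets_hero_underwriting(current_on_taxes: bool,
                            current_on_mortgage: bool,
                            home_value: float,
                            mortgage_balance: float) -> bool:
    """Hypothetical screen mirroring the criteria described in the article."""
    equity_share = (home_value - mortgage_balance) / home_value
    return current_on_taxes and current_on_mortgage and equity_share >= 0.10

# Example: a $300,000 home with a $255,000 balance -> 15% equity, passes.
print(meets_hero_underwriting(True, True, 300_000, 255_000))  # True
```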
885d7660dedb0bd9947027d5e9b41024
https://www.forbes.com/sites/justingerdes/2013/01/29/small-town-big-energy-savings-retrofitting-block-by-block-in-murray-city-ohio/
Small Town, Big Energy Savings: Retrofitting Block By Block In Murray City, Ohio
Small Town, Big Energy Savings: Retrofitting Block By Block In Murray City, Ohio

Murray City, Ohio, might seem an unlikely place to find a trend-setting project with the potential to change an industry. But beginning in October 2010, and for the 14 months that followed, Murray City was home to a block-by-block energy retrofit project responsible for weatherizing three-quarters of the town’s homes.

I’ve written before at this blog about one of the little-known achievements of President Obama’s first term, the weatherization of 1 million homes under the stimulus. The American Recovery and Reinvestment Act injected $5 billion into the U.S. Department of Energy’s Weatherization Assistance Program, a five-fold increase over the fiscal year 2008 funding level. The intent was to revive the battered construction industry and reduce utility bills for struggling low-income homeowners.

The project in Murray City, a onetime coal mining boomtown now home to fewer than 500 residents, took its inspiration from the nation’s energy efficiency evangelist-in-chief: Energy Secretary Steven Chu. In an op-ed published in October 2009, Chu announced a new initiative, “Retrofit Ramp Up,” whose aim was to reduce the cost of energy upgrades by focusing on whole-neighborhood building energy retrofits. “If we can energy audit and retrofit a reasonable fraction of the homes in any given residential block, the cost will be greatly reduced,” wrote Chu. “We want to make home energy efficiency upgrades irresistible and a social norm for homeowners.”

“Secretary Chu talked about weatherizing the country one block at a time, one town at a time,” Ron Rees, Executive Director, Corporation for Ohio Appalachian Development (COAD), told me. “That was a lot of what was behind the effort in Murray City.” COAD, based in Athens, Ohio, coordinated the Murray City energy retrofit project with assistance from Hocking-Athens-Perry Community Action. COAD is a private non-profit consortium of 17 community action agencies that serve 30 counties in southeastern Ohio. Rees said COAD and its partners wondered what would happen if they concentrated materials and energy retrofit teams in one community for an extended period and focused on upgrading homes block by block rather than one at a time. Murray City was selected for a pilot project because of its manageable size, aging housing stock, and homeowners' economic need.

COAD’s energy services team oversaw a three-step implementation process. COAD staff performed the initial home inspections, including use of blower door tests and infrared scanners to identify air leaks and insulation gaps, to determine the upgrades needed in each home. Next, COAD arranged for community action agencies or private contractors to complete energy improvements.

An unprecedented block-by-block retrofit project succeeded in weatherizing nearly 75% of the homes in Murray City, Ohio. Credit: COAD

Available upgrades included: air leakage and duct system sealing; attic, sidewall, floor, and foundation insulation; hot water tank insulation; low-flow shower heads; energy-efficient appliances, including refrigerators; heating system tune-ups and repairs; high-efficiency furnaces; smart thermostats; and compact fluorescent lights (CFLs). Last, COAD staff returned to homes to perform a final quality assurance inspection. 
Qualifying low-income homeowners received upgrades at no cost, with improvements paid for with Weatherization Assistance Program funding and rebates from the local utilities, American Electric Power and Columbia Gas. Rees noted that only two states hit their federal stimulus weatherization goals in the time allotted: Ohio and Maine. Ohio hit 120% of its goal; COAD exceeded its goal by 200%. “We actually pulled in money from other parts of the state where they weren’t able to implement the work they needed to for weatherization,” he said.

In Murray City, the whole-neighborhood approach to retrofitting yielded remarkable results. Nicole Peoples, Special Projects Coordinator, COAD, said that 74% of the town’s more than 200 homes were weatherized during the 14-month project. (The others were either vacant, had already been weatherized, or the owners declined the program.) For its efforts, COAD was recognized in October with the 2012 Andromeda award from the Alliance to Save Energy.

Tom Calhoun, Housing Program Manager, COAD, said he was not sure if an impact study will be done for the Murray City project, but “from our past experience, we know that we can reduce a customer’s utility use anywhere from 25% to 50%. We usually tout the figure of about 35%.” “Even with low gas prices right now, that amounts to about $400 a year in energy savings,” he said.

In my next post, I dig deeper, presenting seven lessons learned for those interested in replicating the Murray City model in their town.
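Calhoun's two figures together imply what a typical pre-retrofit utility bill looks like. A quick sketch; the implied bill is derived from his numbers, not stated in the article:

```python
savings_fraction = 0.35       # COAD's typical savings figure
annual_savings_dollars = 400  # Calhoun's estimate at current gas prices

implied_annual_bill = annual_savings_dollars / savings_fraction
print(f"${implied_annual_bill:,.0f}")  # ~$1,143 pre-retrofit utility spend
```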
0ee3a5f0d5d668b5c3cd3d715726bec3
https://www.forbes.com/sites/justingerdes/2013/02/26/estonia-launches-nationwide-electric-vehicle-fast-charging-network/
Estonia Launches Nationwide Electric Vehicle Fast-Charging Network
Estonia Launches Nationwide Electric Vehicle Fast-Charging Network

Scarcely noticed in the wake of the much-discussed spat between Tesla CEO Elon Musk and New York Times reporter John Broder was the opening last week in Estonia of the world’s first nationwide network of fast chargers for electric vehicles (EVs). The network of 165 DC (direct current) quick-charging stations, produced and installed by Swiss engineering giant ABB, is strategically dispersed across the country. Along highways, the stations are no more than 60 kilometers (37 miles) apart, installed at gas stations, cafes, shops, and other high-visibility spots. In towns, stations are installed at shopping centers, gas stations, post offices, banks, and parking lots. Every city of more than 5,000 inhabitants hosts at least one station; the capital, Tallinn, population 423,000, hosts 27 stations. The quick-charging stations can deliver a 90% charge to the battery in less than 30 minutes, according to KredEx, the national foundation that operates the EV network.

In an interview that aired yesterday on PRI’s The World, Jarmo Tuisk, head of Estonia’s EV program at KredEx, said that the program rests on three cornerstones:

1) Install the necessary infrastructure: “Our idea was to create this whole ecosystem at once because if you just deal with one or another aspect, you cannot pull things off,” said Tuisk. The 165 quick-charging stations were installed “to give a safety net to the early adopters so nobody is left on the road.” The EVs available in Estonia can drive about 140 kilometers (87 miles) on a single charge, depending on the model and driving conditions, comfortably within the range of the highway quick-charging stations.

2) Launch a large-scale demonstration project: 500 of the 619 EVs registered in Estonia belong to a demonstration fleet deployed with government agencies across the country. The public EV fleet provides researchers with data on the real-world performance of the cars and charging infrastructure, and the conspicuous presence of the cars in a country with a population of just 1.2 million should, officials hope, jump-start the private EV market.

3) Provide purchase incentives: Individuals and companies in Estonia are eligible to receive a grant of up to 18,000 euros ($23,522) toward the purchase price of a new EV such as the Nissan Leaf or Mitsubishi i-MiEV. Tuisk said the grant covers roughly 50% of the initial cost of the car, including taxes. New EV owners can also apply for a 1,000-euro ($1,307) grant to cover the cost to install a home charging station.

Estonia recently opened the world's first nationwide fast-charging network for electric vehicles. Credit: ELMO (Electromobility in Estonia)

EV owners pay between 2.5 and 5 euros ($3.27 and $6.53) per charge or 30 euros ($39.20) monthly for unlimited charging. Payment is made either with an authorization card or via a mobile phone app. Jarmo Tuisk told The World’s Marco Werman that the Estonian EV charging network operates entirely on renewable electricity, 90% from wind. Tuisk calculated that the electricity generated by Estonia’s wind farms during just five minutes each day is enough to charge the nation’s entire fleet of EVs.

According to Reuters, money for construction of the quick-charging station network came from a 2011 deal under which the government of Estonia sold 10 million surplus CO2 emission permits to the Mitsubishi Corporation. Terms of the deal also included the government fleet of 500 EVs. 
With public EV charging infrastructure deployment still in its early days, Estonia’s decision to prioritize quick-charging stations will be noted by jurisdictions around the world that have already launched charging networks or plan to do so. Prioritization of DC quick chargers for public EV charging was one of the lessons learned I highlighted last March at this blog in a profile of Portland State University’s “Electric Avenue” project. There, one of the trends observed by project director George Beard is what he called the “Big Gulp theory of charging”: EV owners will plug in at Electric Avenue for 5 or 10 minutes, long enough to grab a coffee at a nearby café and to add 30 to 50 miles of range to the battery, enough to make it home, where most EV charging is done.
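The two published tariffs imply a simple break-even point for drivers deciding between pay-per-charge and the unlimited plan. A minimal sketch using the rates quoted above:

```python
# Published rates: 2.5-5 euros per charge, or 30 euros/month unlimited.
MONTHLY_UNLIMITED_EUR = 30

for per_charge_eur in (2.5, 5.0):
    breakeven = MONTHLY_UNLIMITED_EUR / per_charge_eur
    print(f"At {per_charge_eur} EUR/charge, unlimited pays off beyond "
          f"{breakeven:.0f} charges a month")
# -> 12 charges at 2.5 EUR/charge, 6 charges at 5.0 EUR/charge
```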
7d4a7b7ae7decec2dd8fb9c0454c61e2
https://www.forbes.com/sites/justingerdes/2013/04/08/how-much-do-health-impacts-from-fossil-fuel-electricity-cost-the-u-s-economy/
How Much Do Health Impacts From Fossil Fuel Electricity Cost The U.S. Economy?
How Much Do Health Impacts From Fossil Fuel Electricity Cost The U.S. Economy?

How much would electricity cost in the United States if the retail price reflected the health impacts of burning fossil fuels? A paper recently published by researchers at the Environmental Protection Agency finds that accounting for such costs would add an average of 14 to 35 cents per kilowatt-hour to the retail cost of electricity. Nationwide, these hidden health costs add up to as much as $886.5 billion annually, or 6% of GDP.

The peer-reviewed study, titled “Economic Value of U.S. Fossil Fuel Electricity Health Impacts,” was published online last December in Environment International by Sarah Rizk* and Ben Machol of the Clean Energy and Climate Change Office, U.S. EPA Region 9, in San Francisco. (In an interview, Rizk and Machol noted that views expressed in the paper are theirs alone. Rizk recently left the agency to attend business school.)

“There are a lot of reports out there that quantify the total health costs and the total health impact values from fossil fuel energy in the U.S.,” said Rizk, “but there are fewer of them that put it into a dollar per kilowatt-hour metric, which is what you see on your utility bill. We wanted to present it in a way that was digestible to the average consumer of electricity.” To do so, Rizk and Machol gathered data based on state electricity profiles, fuel type, and national averages for the benefits per ton of emissions. The economic value of the health impacts was based on premature mortality, workdays lost, and other direct costs to the healthcare system resulting from emissions of PM2.5, NOx, and SO2. The health impacts valuations presented in the study come from national benefit per ton figures developed from a Community Multi-scale Air Quality (CMAQ) model, which is regularly used in EPA Clean Air Act rulemaking. “We knew the methodology the EPA traditionally uses in rulemaking, and it hadn’t been applied in the way we did here,” said Machol. “Where we took a step deeper,” he explained, “is that the analyses that we used were based on studies that use photochemical modeling, which allows you to get a deeper picture of what those health impacts will be.”

Rizk and Machol found that the dollar value of improved human health from avoided emissions from fossil fuel-fired power plants ranges from a low of a half penny to 1.3 cents per kilowatt-hour in California to a high of 41 cents to $1.01 per kilowatt-hour in Maryland. (When accounting for imported fossil fuel electricity, California’s figures increase to 3 cents to 7 cents per kilowatt-hour – illustrating the importance of the City of Los Angeles’ recent decision to divest from coal-fired electricity.) Rizk and Machol found a similarly wide range for the valuations for health impacts by fuel type: 19 to 45 cents per kilowatt-hour for coal, 8 to 19 cents per kilowatt-hour for oil, and 1 to 2 cents per kilowatt-hour for natural gas. “For coal and oil,” Rizk and Machol write, “these costs are larger than the typical retail price of electricity, demonstrating the magnitude of the externality.” (The average retail rate for electricity for all sectors in the United States, as of January 2013, was 9.66 cents per kilowatt-hour.)

Add the health impacts from fossil fuels to the average retail cost of electricity, said Rizk and Machol, and “on average, U.S. 
consumers of electricity should be willing to pay $0.24–$0.45/kWh for alternatives such as energy efficiency investments or emission-free renewable sources that avoid fossil fuel combustion.” They suggest that pricing recognition of these hidden costs could take the form of a so-called “health adder” policy “to more fully account for adverse climate and health impacts associated with fossil fuel usage.”

A paper recently published by researchers at the EPA finds that accounting for the health impacts of fossil fuel combustion would add an average of 14 to 35 cents per kilowatt-hour to the retail cost of electricity. Credit: U.S. Department of Energy, Office of Science

Why $886.5 billion is a likely underestimate

Rizk and Machol make clear that future analyses will likely find their estimate of the economic value of health impacts from fossil fuel electricity, $361.7 to $886.5 billion annually, to be an underestimate. Their study, they note, “does not attempt to include all externalities,” nor do they “attempt to complete a full life cycle assessment of all externalities associated with fossil fuel electricity or its alternatives.” They omit impacts resulting from extraction and transportation of fossil fuels and impacts on climate change and human welfare. Their findings also do not include other pollutants resulting from fossil fuel combustion: O3 precursors, NO2, greenhouse gases, residual or hazardous waste products, and water-borne pollutants.

Rizk and Machol nevertheless expressed confidence that the national estimate of economic impacts, despite the limitations, is sound. “What we have the most confidence in,” Rizk said, “is our national estimate because they’re using that national benefit per ton and emissions are taking place at the national level.” She repeated the estimate for economic impacts: between 14 and 35 cents per kilowatt-hour. “The mid-range of that is more than double what people pay for electricity today. We found that quite striking.” “Our real hope in putting out this data is getting people to realize how significant the health costs are, and that they can be much more significant than some of the numbers people are putting to carbon dioxide cost valuation methods, and they are potentially higher than the retail cost depending on what your geographic scope is,” Rizk said.

A bright spot amid the gloom is that health costs should fall as coal-fired power plants are taken offline. “As older units are retired, and as facility owners in eastern states make strides to address EPA’s Cross-State Air Pollution Rule [on March 29, the U.S. Solicitor General petitioned the Supreme Court to review the August 2012 D.C. Circuit Court ruling that vacated the CSAPR], health impacts should decrease, often significantly,” write Rizk and Machol.

Growing body of research on the cost of health impacts

Rizk and Machol’s study joins a growing body of research dedicated to quantifying the health impacts connected to fossil fuel combustion. On March 7, the Health and Environment Alliance, a European NGO, released a report which found that emissions from Europe’s coal-fired power plants cost the continent’s citizens up to €42.8 billion ($54.9 billion) in health costs annually. The authors say the study provides the first-ever calculation of the health costs associated with air pollution from coal-fired power plants in Europe. The ledger includes costs associated with premature deaths, medical visits, hospitalizations, medication, and reduced activity, including working days lost. 
And on March 27, the International Monetary Fund (IMF) released a report [PDF] calling for the end of $1.9 trillion in annual global energy subsidies. The Washington Post’s Brad Plumer provides a helpful overview of the report here. The tally includes $480 billion in direct subsidies to consumers and $1.4 trillion in what the IMF calls the “mispricing” of fossil fuels. Why mispriced? Because governments do not force polluters to pay the full climate and public health costs associated with fossil fuel combustion.

*Disclosure: Sarah Rizk recently completed an assignment with the EPA Region 9 Environmental Review Office, the same office where my brother works.
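The paper's bottom-line willingness-to-pay range follows directly from adding the health adder to the average retail rate. A sketch using the figures quoted above:

```python
retail_cents_per_kwh = 9.66    # U.S. average retail rate, January 2013
health_adder_cents = (14, 35)  # Rizk and Machol's average health-cost range

low, high = (retail_cents_per_kwh + adder for adder in health_adder_cents)
print(f"{low:.1f} to {high:.1f} cents/kWh")
# ~23.7 to 44.7 cents/kWh, i.e. the paper's $0.24-$0.45/kWh range
```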
a8e5aff43e21e167c5f882d4dacca805
https://www.forbes.com/sites/justingerdes/2013/07/31/sacramento-unveils-nations-largest-clean-energy-pace-retrofit/
Sacramento Unveils Nation's Largest Clean Energy PACE Retrofit
Sacramento Unveils Nation's Largest Clean Energy PACE Retrofit

The owner of the Metro Center Corporate Park will install energy-saving equipment worth $3.16 million without any upfront costs under a deal announced yesterday in Sacramento. The retrofit will be the nation’s largest property assessed clean energy (PACE) project closed to date, according to the project partners. Johnson Controls will design and implement the upgrades planned for the four-building campus. On the list is replacement of aging rooftop HVAC units and installation of a new building management system that will control mechanical equipment and interior and exterior lighting. The upgrades are expected to lower utility costs for the owner, Metzler Real Estate, by $140,000 annually. The project is funded through Clean Energy Sacramento, a commercial and residential PACE program operated by Ygrene Energy Fund. I covered the launch of Clean Energy Sacramento at this blog in January.

The Metro Center Corporate Park deal is a welcome sign that the PACE market is moving beyond program launches to construction boots on the ground. Scarcely a week passes without news that another local government has launched, or state government has authorized, a PACE program. Last week, for instance, the City of Milwaukee announced the launch of a $100-million PACE program in partnership with California-based Clean Fund and Johnson Controls. Thirty U.S. states have passed PACE enabling legislation. I've reported at this blog on the launch of commercial PACE programs in California, Connecticut, and South Florida, among others. I have also reported here on what was, before the Metro Center Corporate Park deal, likely the largest commercial PACE project closed in the United States, a $1.6-million energy efficiency overhaul of Pier 1 in San Francisco.

Under PACE financing, a residential or commercial property owner agrees to repay the cost of improvements such as a more efficient air-conditioner or rooftop solar panels through an annual property tax assessment lasting up to 20 years. If a building is sold or transferred, the PACE lien remains tied to the property. The main selling point of PACE financing and its cousin, on-bill financing, is that property owners avoid the upfront cost to install new equipment, a challenge that remains one of the biggest barriers to energy efficiency project development.

Stacey Lawson, Ygrene Energy Fund’s CEO, told me in January that individual investors and regional banks have pledged $100 million over five years to fund Clean Energy Sacramento projects. “This is one of the unique aspects of our model,” she said. “We have figured out how to do the interim funding with players who really want to invest in the local community and then have a mechanism by which we can securitize in the capital markets so we can recycle that capital over and over.”

Sacramento Mayor Kevin Johnson announces the nation's largest clean energy PACE project. Credit: Clean Energy Sacramento

According to Ygrene Energy Fund, Clean Energy Sacramento has closed $4.2 million in projects over the past three months and has received over $10 million in pre-approved applications.
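A quick back-of-the-envelope check on the Metro Center numbers cited above; this is simple payback only, ignoring financing costs, energy price escalation, and any incentives:

```python
project_cost = 3_160_000         # Metro Center retrofit, dollars
annual_utility_savings = 140_000 # expected reduction in utility costs

simple_payback_years = project_cost / annual_utility_savings
print(f"{simple_payback_years:.1f} years")  # ~22.6 years
# Roughly the length of a 20-year PACE assessment, which is why avoiding
# the upfront cost is the selling point here.
```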
06a60c4ea5cee7e23578403c2c70da34
https://www.forbes.com/sites/justinoconnell/2020/01/29/ivan-on-techs-journey-from-0-to-200000-subscribers-in-2-years/?sh=c7389e470ae1
Ivan On Tech’s Journey From 0 To 200,000 Subscribers In 2 Years
Ivan On Tech’s Journey From 0 To 200,000 Subscribers In 2 Years

Ivan Liljeqvist of Ivan on Tech has always loved technology. When he was nine years old, his mathematician mom gifted him a book on HTML. He wanted to learn video game development, and he thought the HTML book would help. “My mom gave me this book, and on the cover of the book it said, ‘Here's how you can make websites,’ as if this book will teach you how to make websites,” said Ivan. “I was a bit disappointed when I finished it, because I could only do some simple things in HTML.” From there, on the advice of some people he asked, Ivan went on to learn Javascript so he could make games like RuneScape, an old favorite of his.

While growing up, Ivan’s days were spent at school and swimming. He sometimes trained twice per day, including the holidays. He programmed at night. He credits swimming with teaching him discipline and the competitive mindset. “In swimming, they put you with people that are basically as good as you are,” he said, recalling the time when he would travel all over Sweden for competitions. “It's like a few milliseconds between you and the competition.” He says everyone ought to pick up a sport when they are a teenager. “You learn how to handle adversity, you learn teamwork, social skills, how to train as a team, and how to compete,” he said. “You’re not hanging out in the hood doing crazy and useless things as a teenager, because there’s no time for that. You’re not spending time with friends that are doing things that are not useful.” His parents made it impossible for him to get into trouble, he recognizes. “As a teenager, the biggest danger is friends pulling you down and giving you bad habits, bad relationships. Who you hang out with decides your life.”

Ivan Liljeqvist of Ivan on Tech. Credit: Ivan on Tech

After swimming, Ivan would go home and program. He posted several video games to the app store, but none of them were successful, other than for learning. Ivan spent time on C++, which carried over to his later studies of Bitcoin, whose reference implementation is written in C++.

Ivan, who now has more than 213,000 subscribers on his Ivan on Tech YouTube channel, discovered Bitcoin after a friend tipped him off to the digital currency. Bitcoin was in a bull market, having gone from $30 to $1,000. “Everyone was freaking out about Bitcoin, like in 2017, when we went to $20,000,” he described. “Whenever Bitcoin goes to new all time highs, a lot of new people join the market. In 2017, we saw thousands of people discover crypto.” Ivan was one of those people, just in 2013. “It was right at the end of the 2013 cycle,” he recalls. “And then we collapsed, and entered the new bear market. I was like many people: you FOMO in at the top and then you'll learn how the market truly works.”

Despite Bitcoin’s price collapse from $1,000 to $200 during that time, the technology of blockchain was fascinating to him. Ivan studied blockchain from 2014-2017. Ethereum’s 2015 release helped him see potential use cases beyond digital gold or electronic cash. Ethereum’s Solidity programming language, which is similar to JavaScript, provided a new way to build applications on a blockchain network. Ivan learned Solidity and how to build dApps. 
While he was working as a programmer at Ericsson in 2017, Ivan started his YouTube channel, mainly as a way to share his knowledge about programming and technology in general. A few thousand subscribers would look good on his resume, he thought. “I had the mindset that I didn’t need a huge following, maybe just a few thousand people, because then I could show [potential employers] that people listen to what I have to say, so hire me. Also, my English was so bad at that time, I thought at least I could learn English by doing YouTube.” That was the original plan, at least.

So he got to work making daily YouTube videos. His philosophy was simple. “It doesn’t matter if it’s a good video, bad video, I’m going to do it every day,” he said. “So if it’s a bad video one day, tomorrow is a new day and quickly there will be a good video, a better one.” He began making videos about general technology and coding. One day he made a video about Ethereum, however. It performed better than the other videos. People needed crypto content, he saw, and since he loved crypto, it was a good fit. “I didn’t expect crypto to have that much of an audience on YouTube,” said Ivan. “That day, when I created the Ethereum video, I didn’t know what else to create. So I thought, let me do a video on Ethereum just to get a video out today. Good thing I did that, because it completely exploded.” He had found his niche. While the videos before the Ethereum video had been getting a few hundred views, afterwards they were getting thousands. He kept doing crypto video after crypto video, and the rest is history.

He credits his success to luck, skill, dedication and work ethic. His mindset of accumulating just a few thousand subscribers, not worrying about quality, and instead focusing on quantity, helped him to not get discouraged. “I wasn't too perfectionistic,” he said. “Many people get stuck because they want the perfect video, the perfect sound, the perfect camera, but I just thought, ‘Hey, let me get a good microphone and then we'll do one video per day no matter what happens.’” People don’t care about shiny things, says Ivan. They don’t care about your video editing. They just want value. “Everyone is bad at YouTube in the beginning,” he had figured at the start. “So whatever I do, even if I spend time on equipment, etc., it's still going to be bad. If you do one video per day for three months, you have 100 videos. If you have not improved in those 100 videos, something has gotta be wrong with you.” Through quantity comes quality. “And through quantity, I will also have more chances to get in front of people so people can see that I'm active. They’ll see me over and over again in their suggestions.”

Today, Ivan knows how to make a show that brings value to his audience. “I learned how to create an interesting visual, how to tell the topic in an interesting way, how to engage an audience.” His equipment has since improved, too, including a nice camera and a studio. “But, you don't have to start there,” said Ivan. “Absolutely not.” He didn’t have a studio until mid-2018. Until then, his setup was basic: a Blue podcasting microphone and the camera on his computer.

Although the Ethereum video went viral and he started taking his YT channel more seriously, he still didn’t see it as something to do full-time, because he didn’t know how to monetize it. It wasn’t until late 2017 that Ivan felt crypto was an industry in which he could build a business. 
By then, he had received invitations to speak, helping him to understand that blockchain was a truly growing industry. “I thought crypto was just a few guys sitting at home and chatting on BitcoinTalk and that’s it,” he recalls. Alongside the conference circuit, he began speaking to large companies, hosting workshops, and giving keynotes. “We’ve been educating banks and government agencies how the world is changing in terms of finance.” It took a while before he understood that this was actually something he could do full time. That’s when he started the Academy, which offers developer courses for Ethereum, EOS, and the Bitcoin Lightning Network. “That is where I felt I could really make a difference,” he said, with students signing up for classes featuring homework, assignments, private tutoring, and monthly Q&As. Students can ask instructors and the greater community questions if they’re stuck on a project. “The people that we educate actually go out and they build things in the industry and they push the industry forward,” said Ivan. “And you got to have education in this space, because without it, you don't have educated people that can change the industry and change the world.” Deep educational videos like these don’t perform well on YouTube, he says, because the algorithm will not suggest them––most people will not watch them. “Maybe the 5% of your audience that truly wants to learn how it works, because they truly want to get involved,” said Ivan. “And therefore it is better that they enter the Academy.” If people pay for the courses, Ivan reasons, they are more likely to dedicate themselves to the learning than if they were watching free YT videos. Just weeks before I spoke with Ivan, he had dealt with the biggest challenge he had ever faced. The YouTube purge of 2019 saw crypto content creator videos and channels taken off YouTube. While The Purge itself is over, things will never be the same again. “All the channels are back, everything is back to normal, but that is only on the surface,” said Ivan. “In reality, nothing is back to normal, because now everyone understands that you will have to decentralize. It's not a question about whether we should or not. It’s a question about how quickly we can do it and how, as well as what kind of platforms we should, or should not, migrate to.” The Purge, which prompted Ivan to encourage his YT subscribers to sign up for the Ivan on Tech mailing list, was also a blessing in disguise, as it forced all the influencers to get together and strategize to get their channels back. “We had a group on Telegram, where we organized ourselves, and we could get as much noise out on Twitter as possible, so that YouTube finally responded, and finally said that it was a mistake.” Ivan isn’t so sure YT would have said it had made a mistake if the creators hadn’t made so much noise. “[We had] a lot of support from other YouTubers and the community,” he said. “It was in a sense very good that we got to know each other and work with each other and have this crisis together, because people bond the most in a crisis, when you have a common problem to solve.” For now, Ivan will happily stay on YouTube, while simultaneously encouraging people to sign up for the email list as a hedge. “That is the only thing we can do really because there are no alternatives to YouTube,” he said, noting how new content hosting websites don’t help content creators get in front of new people, which is part of the advantage of the incumbents like YT. 
“There is no alternative to the current centralized ecosystems. People say I should use decentralized platforms, but the only thing they offer is hosting.” Everyone's consuming on YouTube and Spotify, says Ivan, which is where creators need to be in the unending pursuit of eyeballs. “For example, now we started with just putting the audio on all podcast platforms,” he explained. “We’ve got to play the game, and the game right now is that the people are still using centralized solutions, and they will for quite some time. We're not going to transition into decentralized platforms anytime soon. And for crypto to grow, we need to get into more people's minds, and we need to be exposed to more people.” Content creators therefore need to be using the platforms where people are. “But at the same time, in case something happens, try to get the email, have your own website, have a robust communication channel with the audience from which you can’t get banned. Don’t get too comfortable.” While The Purge presented a big challenge, Ivan says telling a story and communicating the news in crypto is the small challenge he faces daily for his show Good Morning Crypto—but one he enjoys. His advice for people thinking of starting their own show? “Do it every day. That's really it. Do it every day. You will figure out the rest.”
b48e5011f33bb9d9a18c037b7519a4b0
https://www.forbes.com/sites/justinoconnell/2020/02/28/how-your-crypto-startup-can-dominate-on-tiktok/
How Your Startup Can Dominate On TikTok
How Your Startup Can Dominate On TikTok “If Instagram Stories and Vine had a baby and then infused it with music, that’s TikTok.” — Ryan Fiore, Vice President of Marketing at MANSCAPED, Inc. The Wolf of Bitcoins has 37,700 followers on Instagram and 54,800 followers on TikTok. His highest-performing TikTok post has reached 2.3 million views. He started the accounts for fun. They swiftly grew into crypto powerhouses on social media. He first found success on Instagram in 2017 and 2018, when that platform had different algorithms; as users there began seeing diminished reach, an opening was perhaps created for an alternative to gain users. TikTok has now passed 1.5 billion downloads. “My girlfriend set up an account for our dog, Giuseppe, and she was showing me TikTok,” said Wolf. “I hadn't really heard much about it.” It seemed cool. “I started studying how the algorithm works, which really sparked my interest,” said Wolf. “So, I downloaded the app and started looking around to see if there was anything crypto or Bitcoin-related on TikTok.” There wasn't really anything there, and so he saw an opportunity. The TikTok algorithm uses machine learning to evaluate the quality of the videos uploaded. The specifics aren’t public, but experts have reverse-engineered the platform. TikTok shows a new video to a small number of TikTok users in between popular videos. The algorithm measures how much of the video is watched, along with how many likes, comments, shares, and downloads it receives. A benchmark ratio is 1 like for every 10 views before the video is shown to more people. If a video receives 20% more likes one day, TikTok then shows it to more people, which is why users say their views seem to come in waves. Good equipment helps, too. “If you have an iPhone 6, you need a newer iPhone, because that actually makes a difference to the algorithm,” said Wolf. Videos should be shot in HD or 4K. James Brooks, founder of London-based Team Brooks, offered further insights into the newest social media craze. “TikTok craves raw, authentic content, so I'd recommend shooting fast-paced content directly in the app using your phone,” said Brooks. “There are a lot of trends and general in-jokes that you can take part in which will really help you to establish yourself within the culture of TikTok.” The key to being in the know about these trends is to consume content on the platform, says Brooks, who has 58,300 TikTok followers. “I advise people to spend 15-20 minutes per day consuming content to help them come up with ideas. If you can put a unique spin on a trending sound, you're already onto a winner.” He also advises people to keep an eye on which ‘sounds’ and hashtags are trending. “This is a great way to get involved with current trends and to make your TikTok videos more discoverable.” Both Brooks and Wolf suggest keeping your TikTok videos short. Brooks says between 8 and 12 seconds. Wolf suggests between 15 and 30 seconds. “Attention spans are low and watch time is an extremely important metric when it comes to the TikTok algorithm,” said Brooks. 
“If you can add something unexpected or funny to your video that will make people watch it more than once, it'll likely do very well.” His most viral TikTok has received 3.2 million views so far. Wolf adds: “Not saying a minute long video can’t go viral, but the chances of you reaching a larger audience are higher if your videos are short.” The experts agree: short, sweet, and informative videos perform best on TikTok. After posting an educational video about crypto, Wolf might post some footage of one of his mining facilities. TikTok content has to be relevant. “It has to be real, creative, and original,” he said. “And it has to be quite frequent, as well. Look for trends and go with that. Lastly, a lot of people on there are just reposting videos from other websites or things like that. That doesn't really work if you want to go viral on TikTok. The algorithm is interesting. When someone makes a post on TikTok, it gets shown to about 500 people.” The reaction of those 500 people is critical—likes, comments, shares, rewatches. “[TikTok] works on a point system,” he said. “Rewatches get the most points, then after that it is shares. So people that share your content, every time someone shares it, you get more points. And then, after that, comments and likes. So, likes actually get the least amount of points. And the more points you get, the next time it pushes you out, the bigger the audience it pushes you to.” For Wolf, good content is the type of content people share with their friends, or that leads them to head on over to the creator’s profile and look at their other videos. “You really need to be creative, you need to be interesting, you've got to be original,” he reiterates. “And if you can do that, and if you feed the algorithm good content, you will dominate on TikTok.” When he started using the platform, Wolf wanted to see how the younger generation—users are between the ages of 16 and 25—would react to Bitcoin. “Let’s see if there’s any interest in crypto from them,” he thought. When creating the content, he keeps the younger audience in mind. Wolf often posts content about mining and GPUs. “Especially GPU mining, as a lot of the younger generations, they play games, so they understand what GPUs are,” said Wolf. “They might not understand exactly what cryptocurrency is, but the content I was putting out was sparking a lot of interest.” Consider his viral video of a room full of GPU miners mining Ethereum. The caption placed on the video reads: “This is the reason GPU prices went up.” How did it all happen? “I chose the right video and the right caption on there to spark interest and controversy, too.” Wolf posts a lot of educational videos on TikTok, as well. He recently did a video explaining why you shouldn't keep cryptocurrency on exchanges, why you should take control of your private keys, and what you should use. He uses a Ledger in the 25-second video. “Basically, just showing new users what to do and what not to do,” explained Wolf. “I'll post a few videos on there, educating people on cryptocurrency and certain things in the industry, which I find gets a lot of good engagement. There's definitely interest there from the younger generations. Not all of them quite understand fully what this is. But they're definitely keen to learn and they're definitely interested in the short videos I’m posting about it.” The Wolf of Bitcoins (Source: Wolf of Bitcoins) TikTok stands out from Instagram and Facebook today, because people’s parents aren’t on it...yet. 
“When people's parents started Instagram accounts, the younger generation [was pushed] off,” said Wolf. “Same thing happened with Facebook. Everyone's Mom and Dad now has a Facebook account, and you don't really see the youngsters on Facebook anymore.” In addition, many businesses are currently moving away from Facebook and Instagram, because posts don’t reach as many people as they used to. “TikTok allows [for] a larger audience,” said Wolf. “As long as your content is good, it pushes it to a lot of people. And, on top of that, on Facebook and Instagram, it's easy to fake it. You've got these accounts with loads of followers, and it's not real. There's a lot of people faking it on Instagram and Facebook.” It's harder to do that on TikTok, says Wolf. “In order to get a large audience and to gain traction on your posts, you really do need to be creative and do need to put out content that is relevant and content that is interesting. It's a lot easier to fake it on Instagram or Facebook,” said Wolf. “I see TikTok as a more authentic platform.” He adds: “Over time, you're going to see the age range grow, as people in their thirties and forties and higher are already starting to use TikTok.” Brooks has seen accounts earn 10 million views on their first video. “TikTok is not something that is about to pop, it's already happened,” said Brooks. “The question is simply, who is going to take advantage now whilst the potential reach is so huge, and who is going to sit on the sidelines and wait until it's even more mainstream, but even harder to gain traction?”
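Wolf's point-system description suggests a simple way to think about the staged rollout. The sketch below is purely illustrative: the weights, threshold, and wave sizes are assumptions drawn from his stated ordering (rewatches, then shares, then comments, then likes), not TikTok's actual, unpublished algorithm.

```python
# Illustrative model only: the weights, threshold, and wave sizes are
# assumptions based on Wolf's description, not TikTok's published algorithm.
WEIGHTS = {"rewatches": 4, "shares": 3, "comments": 2, "likes": 1}

def engagement_score(stats: dict) -> int:
    """Weighted engagement points earned during one distribution wave."""
    return sum(WEIGHTS[key] * stats.get(key, 0) for key in WEIGHTS)

def next_audience(audience: int, stats: dict, threshold: float = 0.5) -> int:
    """Promote the video to a bigger wave if the seed audience engaged enough."""
    if engagement_score(stats) / audience >= threshold:
        return audience * 10  # push to the next, larger audience
    return 0  # distribution stops

# A video seeded to roughly 500 viewers, as Wolf describes:
stats = {"rewatches": 40, "shares": 25, "comments": 30, "likes": 60}
print(next_audience(500, stats))  # -> 5000 under these assumed numbers
```

Under this toy model, a short video that gets rewatched and shared earns far more points per viewer than one that merely collects likes, which is consistent with both experts' advice to keep videos short enough to be watched more than once.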
ed73b107ed0ecc7da8ac887376346ad1
https://www.forbes.com/sites/justinshubow/2015/08/04/a-first-look-at-the-wwi-memorial-competition-the-best-entries-are-all-classical/
A First Look At The WWI Memorial Competition: The Best Entries Are All Classical
A First Look At The WWI Memorial Competition: The Best Entries Are All Classical Are we still capable of building dignified, beautiful memorials? The just-released entries in the initial stage of the National World War One Memorial competition prove that the answer is “yes.” The memorial is the project of the congressionally created World War I Centennial Commission, which is completely privately funded. (You can submit a donation here.) Operating on a tight budget, it seeks a modest design—a good thing in this age of memorial bloat. The memorial will be built in Washington, D.C. in what is now the dilapidated Pershing Park. The 1.8-acre parcel is situated where Pennsylvania Avenue meets 14th St. NW, a key site on the axis from the Capitol to the White House. One challenge for entrants is that they must figure out what to do with the existing large statue of General Pershing. To its great credit, the commission decided to hold a blindly reviewed design competition open to anyone 18 years and older. This is just like the competition for the Vietnam Veterans Memorial, which was won by a then-unknown Yale undergraduate, Maya Lin. The WWI Commission’s wise decision has been rewarded with 350 entries. In choosing to host an open competition, the commission said it wished to avoid the errors of the closed Eisenhower Memorial competition, which took place in 2008-09. It permitted only licensed architects to enter, and gave only those with substantial experience a realistic shot at winning. As a result, it received only 44 entries. It also made the mistake of being a portfolio-based competition: it was a competition of designers, not designs. This undemocratic process resulted in the grandiose postmodern design by Frank Gehry, which is widely hated. (In their recent budget, U.S. House appropriators zeroed out all funds for the $150-million proposal and called for a complete “reset.”) More important than the number of entries the WWI competition received is the outstanding quality of some of the submissions. The best entries have something crucial in common: they are all classical. (Distinguished architect and urbanist John Massengale agrees.) If you don’t believe me, browse all of the entries yourself; they are publicly available on the competition website. Since the site is difficult to use, I created a single 273-megabyte PDF of all of the entries, which you can download here. The first advantage the classical designs have is that they fit the context of the surrounding traditional precinct: the Willard Hotel to the north, Federal Triangle to the south, and the Treasury Building and gardens to the west. (Freedom Plaza to the east is almost literally a blank slate.) They also comport with the classical plan for the city that Pierre L’Enfant designed at the request of the Founding Fathers. And the entries harmonize with the best of our commemorative tradition: the Lincoln Memorial, the Jefferson Memorial, and the Washington Monument. The second advantage the designs have is that, by being classical, they are informed by the principles, techniques, and standards that have developed from antiquity through the Renaissance to today—a tradition that Thomas Jefferson noted has received the “approbation of thousands of years.” One of the great achievements of civilization, classical art and architecture is a living, evolving practice that, like the tradition of classical music, is based upon imitation, emulation, and invention. Furthermore, the competition entries incorporate symbolism and iconography legible to the ordinary person. 
Neither arcane nor esoteric, they do not need a sign or park ranger to be understood. The entries are also clear in their meaning; they are statements, not question marks. Made of enduring noble materials, the proposals evince permanence and solidity, not ephemerality and fragmentation. They are works of repose, not anxiety, of transcendence, not the void. The designs are timeless: they work today, and they will work in 100 years. By contrast, Modernist memorials too often date as badly as bellbottoms and leisure suits. Having concluded that preamble, allow me to highlight some of the best entries in the competition. Note that these designs are simply first drafts to be developed later in further rounds. There is only so much one can say on a 30”-by-40” board. The main feature of Remembrance and Honor is a stone tower pierced by a long slit and topped with an empty sarcophagus ringed with columns culminating in a “crown of honor.” The tower faces the rising sun on Armistice Day. Crisscrossed by bars resembling I-beams, the stunning crown appears to be simultaneously one of thorns and rising eternal flames. Life rises over death. It’s an unorthodox, inventive design; I’ve never seen anything like it. Brothers in Arms centers on a flat-topped rotunda with an open-air circle in the roof. The focal point is a moving sculpture of two kneeling, embracing soldiers. They are survivors. One looks upward with determination as he comforts the other, whose head is bent downward in suffering. These men fight for each other as much as the general cause. The pair appears to float on a shallow reflecting pool. Instead of wearing the recognizable Brodie helmet, the uniformed men are bareheaded, which serves to emphasize the man as much as the soldier. Although the architecture of the design is perhaps inspired by the mausoleum of Augustus, Brothers in Arms doesn’t have the feel of a somber tomb. The interior will provide a dramatic, meditative experience of light and sound: the interplay of sunbeam and shadow, silence and echoing water. Park of Remembrance provides a powerful procession for visitors, who begin by descending into a space, suggestive of trenches, surrounded by a garden of blood-red roses and (seasonal) poppies, the flower that became emblematic of the war. From there the procession leads upward to a raised circular platform mediated by two allegorical sculptures: Grief and Gratitude. The former depicts an idealized muscular statue of a cloth-draped man with his right hand clutched over his chest. In his left hand, poppies dangle from his fingers above an overturned Brodie helmet. Gratitude is a similar man, except his hand covers his heart, and his other fingers grasp an olive branch. At the center of the platform is an altar, a widely understood symbol of sacrifice for a higher good. The altar is towered over by a cenotaph clearly inspired by Sir Edwin Lutyens’ renowned 1919 Cenotaph in London. Created to commemorate British losses in the Great War, Lutyens’ stolid design was so successful with the public that it came to be a model for WWI memorials around the world. Its success continues to this day; the memorial is used as a site of remembrance for the fallen of World War II and subsequent wars. The entrance to American World War I Memorial is a gate with three Roman arches beneath an unadorned pediment. The overall concept is a simple, inviting, quiet park surrounded by a pergola (a shaded walkway) that tells the story of American involvement in the war. 
The memorial, which also includes statues of doughboys, a golden eagle, and a modern rendition of winged victory, provides a large open space perfectly suited for public events. Modernists are sure to respond to these classical designs by claiming they are “not of our time.” They made the same argument when John Russell Pope proposed, in the 1930s, his Jefferson Memorial modeled on the Roman Pantheon. The architectural establishment, by then captured by the ideology of Modernism, slammed the design, which they said was entirely inappropriate for the “modern” world. The Columbia University School of Architecture attacked it for being “a lamentable misfit in time and place.” (It was Franklin Delano Roosevelt who ensured the design was built over these objections.) To state the obvious, the Jefferson Memorial is now one of the most beloved pieces of civic art and architecture in the country. Indeed, it is one of the main symbols of the United States. The WWI Commission seeks public comment on entries by August 12, which you can provide at the bottom left here. The commission will announce three to five Stage II finalists sometime thereafter. In order to enter a comment, you’ll need to know the exact file name for the entry. For reference, here they are for the aforementioned designs: 0132-Remembrance_and_Honor 0243-WW_1_Brothers_in_Arms_Park_and_Monument 0296-WWI_Memorial_Proposal (Park of Remembrance) 0019-World_War_1_Memorial_Washington_DC (American World War I Memorial) The memorial is now in the jury’s hands. Let us hope they will demonstrate the foresight and open-mindedness of President Roosevelt, and pave the way for an unforgettable design for an all too forgotten war.
6faa02553e694908e266eb08650f71a8
https://www.forbes.com/sites/justinwarren/2015/07/06/the-container-revolution-much-ado-about-nothing/
The Container Revolution: Much Ado About Nothing?
The Container Revolution: Much Ado About Nothing? The Eternal Sunshine of the Spotless Data Centre Containers are, broadly speaking, a codified way of ensuring that your IT people never get too attached to their servers. The idea is to use lots of identical containers built off the one master template (the image) so that if any one of them dies (or is killed) you don’t miss it much, and it’s easy to replace with another identical container built from the master image. They're smaller and lighter than virtual machines, but their most important feature is this: they don’t remember anything. You can alter a container’s state at runtime if you want to – changing configuration or setting up some local data of some kind – but when the container is destroyed all the data is destroyed with it. This is an important point: it’s when, not if, because the defining characteristic of a container is that it’s relatively short-lived, and is destroyed utterly on a regular basis, to be replaced by a shiny new version, untainted by any memories of its former self. That’s actually great compared to other methods of doing software development. If you’ve ever endured the lengthy enterprise processes of the Software Development Life Cycle (Dev/Test/QA, then Change Advisory Board, then scheduling, then GoLive and then scrambling to fix all the issues) you’ll appreciate the pain I refer to. This lack of state enforces discipline on the people responsible for writing your business software. If everything is built fresh each time, there’s no advantage to quickly changing the configuration settings “just this once” in order to get the build running, or to fix a production problem. That fix will be quickly lost when the container is rebuilt. You can avoid having your Test environment set up differently to QA or Production, and avoid the cries of “Well, it worked in Test” as the reason Internet banking is down today. Instead, everyone is forced to update the master configuration (dare we call it a CMDB?) with the Way Things Should Be, and then that image is deployed into all of the environments. Any container that starts to develop personality – a few quirks, some endearing foibles, some hint of a personality that might appear in a heart-warming children’s movie – is quickly dispatched in workmanlike fashion, replaced with a grey clone of the original master image from the cold, heartless archive. Neat. Clean. Organised. This enforced discipline makes the entire process of software development and deployment much more streamlined. More mechanised. Building systems becomes much more like a production line than the hand-crafted servers of old. The thing is, that’s not where all the value to your organisation lives. Much of it lives in the data your organisation has. All the things that don’t get thrown away from day to day, and in fact cannot be thrown away. Sales orders. Bank balances. Ownership records. The whole capitalist system relies on us being able to remember where the money is, and who owes what to whom. That can’t be stored in a stateless container. Docker, to use the most famous example of containers, currently has a solution to this problem that it calls storage-only containers, but there’s a problem. These containers aren’t really containers, they’re just a sticker on a file-system with the word Container painted on it in big friendly letters. That file-system lives on the Docker host, a server – physical or virtual – like those we’re used to. 
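To make the statelessness concrete, here is a minimal sketch using the docker-py SDK (pip install docker); the image, volume name, and file contents are arbitrary examples. A file written inside a container disappears with the container, while a file written to a named volume survives to be read by a brand-new one.

```python
# Minimal demonstration of container statelessness vs. named volumes,
# using the docker-py SDK. Image and volume names are arbitrary examples.
import docker

client = docker.from_env()

# Write to the container's own filesystem, then destroy the container:
# the file vanishes along with everything else inside it.
client.containers.run("alpine", "sh -c 'echo order-42 > /tmp/state'", remove=True)

# Write to a named volume instead: the container is destroyed as before,
# but the volume, and the data on it, outlives the container.
client.volumes.create(name="demo-data")
client.containers.run(
    "alpine", "sh -c 'echo order-42 > /data/state'",
    volumes={"demo-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# A brand-new container sees the data its predecessor left behind.
output = client.containers.run(
    "alpine", "cat /data/state",
    volumes={"demo-data": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(output.decode())  # -> order-42
```

The named volume, of course, is exactly the kind of stateful, host-bound storage the rest of this article is about.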
Keeping that data live and available to the containers requires some sort of non-container solution. A solution that uses... well, what exactly? Shipping containers sit stacked on a pier at Kwai Tsing Container Port in Hong Kong, China. (Photographer: Jerome Favre/Bloomberg) Could it be that all our existing knowledge and tools designed around keeping data safe and available can’t be thrown out overnight? After all, we developed backups, and replication, and RAID, and clusters, and Disaster Recovery, and Business Continuity Plans (and a host of other solutions besides) for very good reasons. Those reasons haven’t suddenly gone away because some cloud company did something nerds think is cool. Right now, there don’t appear to be container-specific solutions, but expect some to start cropping up. There’s good money to be made selling shovels in a gold rush, so expect to see some startups offering containerised versions of things we’ve seen before in hypervisor and physical server land. For my money, while containers do provide something that is genuinely useful, they’re far from the full story, and the tools you’re used to using so far will be around for quite a while yet.
aab5c9bc01a3a76c8df2f18f2761205a
https://www.forbes.com/sites/justinwarren/2015/11/04/nutanix-hedge-their-bets-with-lenovo-deal/
Nutanix Hedge Their Bets With Lenovo Deal
Nutanix Hedge Their Bets With Lenovo Deal Nutanix, the hard-charging hyper-converged infrastructure company, has announced a strategic partnership with PC and server maker Lenovo. The partnership involves Lenovo and Nutanix jointly developing a new family of Lenovo appliances that will use Nutanix software, which will be sold by a dedicated global sales team, Lenovo channel partners worldwide, and Lenovo's enterprise sales team. The two companies have also signaled intentions to make "substantial investments in platform engineering and development, as well as aggressive go-to-market initiatives" according to the press release I received. In a statement by Lenovo chairman and CEO, Yang Yuanqing, the company also appears to be making a not-so-subtle dig at server makers Dell and HP, saying "Lenovo can bring a new perspective to the global enterprise space. We do not have to protect old ways of thinking or entrenched ideas. Instead, we can build our business on innovation, and partner freely with the most innovative, leading companies in this space to create new solutions." Bold words indeed. This is a smart move for both companies. Lenovo will now have a strongly credible hyper-converged offering with tight ties into its hardware. Existing partnerships with Maxta, StorMagic, and SimpliVity will come under strain, as Nutanix is clearly the preferred partner from this point on. This plugs a hole in Lenovo's portfolio. Though StorMagic is still well positioned for the ROBO market, and Maxta may work as a software offering bundled into a hardware deal, SimpliVity may struggle to get the attention of the Lenovo sales teams. Nutanix is the really big winner here. They get access to another global route to market, similar to the relationship with Dell. Lenovo gives them an alternative should they start to receive less favourable treatment from Dell once Dell starts to absorb EMC offerings, particularly VSAN. It's also a major route to emerging markets, particularly China, which is great for a company that needs lots of growth in its run to an IPO. Dheeraj Pandey and his team are demonstrating good strategic nous and an ability to partner well.
d22ed904dacf85dfc944abd6a769c8e7
https://www.forbes.com/sites/justinwarren/2015/11/29/is-cloud-the-new-outsourcing/
Is Cloud The New Outsourcing?
Is Cloud The New Outsourcing? Is moving to the cloud the new IT outsourcing? And are we making all the same mistakes? Jim Fowler, CIO of GE Capital, said in 2014 that "We've gone too far from an outsourcing perspective." GM announced in 2012 that they would reverse their outsourcing decision and change from 90% outsourced to 90% insourced. A new wave of insourcing was coming, we were told. And yet today we see companies announcing a wholesale movement of IT systems into the cloud. Australian real-estate listing site Domain is doing it. The Guardian recently announced they are moving everything to AWS after failing to stand up their own private cloud solution based on OpenStack. Are we repeating the mistakes of the outsourcing movement, just swapping in the word cloud? Maybe. It all hinges on the decision process behind outsourcing, or moving to cloud. At the core of a decision to outsource is an understanding that there’s little value to the organisation in performing a function itself. The classic example is office cleaning; virtually no organisation has its own full-time office cleaning staff. Instead the work is performed by people from another firm who contract for the work. This is classic division-of-labour, comparative advantage stuff. The decision to outsource IT follows the same logic: why do this ourselves when someone else can do the same job, only cheaper, and possibly better? Joey D’Antoni, Principal Consultant at Denny Cherry and Associates Consulting, told me that “in my experience, the organisations that are best suited to outsourcing have rigid, defined processes. That’s very uncommon, especially in places that are trying to cut costs.” An organisation that signs up for a five-year contract based on poorly understood processes ends up discovering the limits of its understanding, and at high cost. Everything not covered by the scope of the outsourcing contract is now a change request, billed at a significantly higher rate. Purported cost savings evaporate as getting the same service as before ends up costing more than was anticipated, because the detailed understanding of what was already happening wasn’t there. Julian Wood, a Consulting Solutions Architect, says that “people often outsource the stuff they don’t actually understand the value of. The value they then get isn’t what they wanted.” And this is the second sad result of a poorly managed outsourcing. IT departments who aren’t good at articulating their value will sometimes hide costs elsewhere. A switch upgrade is tacked onto a project because the approvals for the switch upgrade itself were denied. Extra storage is purchased, and sits idle, because project capacity forecasting is woeful, but projects hate being delayed by provisioning times. Business-as-usual work is buried by fudging WBS codes, so that arbitrary utilisation rates are maintained but unfunded work still gets done. IT is its own worst enemy here, keeping the lights on but getting none of the credit (or cash) for doing so. Thus not only do the costs increase when the work is done by someone else, but all the hidden value suddenly evaporates, and the organisation pays more for less. A classic failed outsourcing. But look at the hidden assumption here: that outsourcing this function doesn’t provide any competitive value to the organisation. For office cleaning, that’s probably true, but for information processing in a world we’re frequently told hinges on better decisions based on access to information? 
If information is really that important to your business, why are you giving those processes to someone else? If every business is a digital business, as Accenture has said, then why stop doing something you should be grabbing on to with both hands? Will we see a swathe of companies deciding, as GE and GM have, to bring things back in-house after years of giving their competitive advantage to someone else?
988a13a9935ed835bde2100cf2d89691
https://www.forbes.com/sites/justinwarren/2016/01/20/jfrog-raises-50-million-to-provide-the-app-store-for-the-internet-of-things/
JFrog Raises $50 Million To Provide The App Store For The Internet Of Things
JFrog Raises $50 Million To Provide The App Store For The Internet Of Things DevOps software maker JFrog has closed a series C round of $50 million, which it claims is one of the largest investments in a DevOps-focused company. The round sees new investors Scale Venture Partners, Sapphire Ventures, Battery Ventures, Vintage Investment Partners and Qumra Capital tip in cash to fund the growth of the company. With previous rounds raising approximately $12 million, this brings JFrog's total capital raised to $62 million. Co-founder and CEO Shlomi Ben Haim told me that JFrog plans to double their employee count. While Ben Haim wouldn't disclose the valuation for this round, he did say that it was nearly four times the previous round. "We want to use the momentum to double our team next year, and maybe even look at acquisitions to enrich and empower our portfolio," said Ben Haim. Shlomi Ben Haim, JFrog Co-Founder and CEO (Source: JFrog) JFrog was founded in 2008, and boasts an impressive list of over 1500 paying customers, including well known companies like Amazon, Google, MasterCard, Netflix and Tesla. These companies use JFrog's Artifactory software as a package repository, like PyPI, CPAN, or an RPM or .deb repository. When developers build their software, the binary package is added to Artifactory for other systems to access. Unlike language- or environment-specific tools, like Docker Registry or Java's Maven, Artifactory supports all of these languages with the one tool. JFrog also makes Bintray, which provides a distribution service for published code. Other developers, or machines themselves, can grab published software from Bintray. The full pipeline integration from code development, through artifact storage and versioning, to publishing the code will make it much easier for companies doing IoT software to get their code out to the devices they maintain. JFrog's products integrate with development tools used by modern developers, like Jenkins, Hudson and Git, to help automate the process of getting code built, tested, and deployed to other systems that need the packages, and doing it as quickly as possible. Just as consumers can easily install updated apps automatically on their smartphones, JFrog sees a future where devices automatically update themselves with new software as developers commit changes to their build systems. "Having Bintray as the IoT store is something that we are very excited about," said Ben Haim. "We want to take automation beyond engineering, beyond the tools you use in-house, we want to take it all the way to consumers."
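To make the pipeline step concrete, here is a hypothetical sketch of the kind of publishing a CI job performs against Artifactory's REST deploy API (an HTTP PUT to the target repository path). The server URL, repository path, filename, and credentials are all placeholders, not a real JFrog installation.

```python
# Hypothetical CI publishing step: deploy a built artifact to Artifactory
# via its REST API (HTTP PUT to the target repository path).
# The URL, repo path, filename, and credentials are placeholders.
import requests

ARTIFACTORY = "https://artifactory.example.com/artifactory"
TARGET = "libs-release-local/com/example/myapp/1.0.3/myapp-1.0.3.tar.gz"

with open("myapp-1.0.3.tar.gz", "rb") as artifact:
    response = requests.put(
        f"{ARTIFACTORY}/{TARGET}",
        data=artifact,
        auth=("ci-bot", "API_KEY"),  # placeholder credentials
    )
response.raise_for_status()
print("Published:", response.json()["downloadUri"])
```

Once the artifact lands in the repository, downstream systems, or the devices themselves, can fetch it by version, which is the automation story JFrog is selling.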
a004583e98a20d193b3767bcaa57ba17
https://www.forbes.com/sites/justinwarren/2017/10/19/commvault-says-game-on-with-new-hyperscale-appliance/
Commvault Says Game On With New Hyperscale Appliance
Commvault Says Game On With New Hyperscale Appliance Commvault has partnered with Red Hat to build a scale-out hyper-converged appliance running Commvault's software, sold as a subscription. Called HyperScale, it's a clear validation of the Rubrik and Cohesity approach to backups and secondary data. Both Rubrik and Cohesity have enjoyed strong growth with their approach to secondary data management, and Rubrik is boasting of a valuation of greater than a billion dollars. That kind of market action demands attention. The thing is, Commvault does have a good story to tell here, one that can work for its existing customers and help it steal share from its major competitors like Dell EMC and HPE. It's not a mere copy, but rather Commvault's way of providing what customers clearly like about the Rubrik/Cohesity offering. "Scale-up architectures are outdated," said Commvault's Senior Director of Worldwide Solutions, Don Foster. "Customers need something that can handle the size and scale of secondary data in the modern enterprise, with all the data services Commvault can provide." Don Foster, Sr. Director of Worldwide Solutions, Commvault (Source: Commvault) Commvault uses Red Hat's Gluster file-system to do the scale-out storage, and uses a 4+2 erasure coding approach for data resilience. Commvault is also selling the product on a pure subscription basis, priced based on usable target storage. If you purchase the Commvault appliance hardware, you get it refreshed for free every three years, but Commvault are also providing reference architectures for hardware vendors commonly found in the enterprise: Dell EMC, HPE, Lenovo, Super Micro Computer, Cisco, Fujitsu and Huawei. This is smart, given Commvault's enterprise customer base. They often have whole-of-enterprise agreements with vendors for server and storage hardware, and this lets them use their existing relationships to acquire hardware they know and like (and have the tools and processes established to manage). This is a low-change option, which makes adoption easier. Just layer the Commvault software on top. Commvault also supports end systems that startups choose not to, like Unix. Maybe the new folks will decide the effort is worth it, maybe they won't, but for an enterprise with substantial investments to protect, it's easier to choose an extension to a known and trusted brand than to deal with sprawling point solutions. Heterogeneity and brownfields are the norm, and dealing with that complexity is one reason these deals are so large. There's also the not insubstantial investment in tools and processes that already exist. If you can get at least some of the benefits of the new approach without a wholesale re-tooling of your entire organisation, that's an attractive and lower risk option. The added value doesn't need to be as high as for a completely new approach. Change is not risk-free, and how many companies do you know that handle large-scale organisational change well? You can argue that this is a reaction by Commvault, but really, it is doing just what large enterprises should do. They don't have to invent all the new things themselves, because that leads to Not Invented Here syndrome. Equally, they can't ignore all new developments in the marketplace, because that leads to stagnation and death. If it's what their customers want, then they need to be providing it. There's no shame in being a fast follower, and there are plenty of examples of it being an extremely profitable approach. 
The clear winner here is customers, and I say that without any sarcasm (a shock, I know). This is an example of customers winning when a new, fresh approach shakes up a staid and complacent market. There is clear value in the Rubrik/Cohesity approach, and the secondary data market had been more-or-less stagnant since Data Domain was acquired. Since then, hyper-converged technology has matured, storage densities have increased markedly, and flash has become vastly more affordable. Combine that with investments in engineering to make very complex systems easy to use, and you have a lovely confluence of technologies that make a new approach commercially viable. What we see now is wonderful, because the timing is rather great. Rubrik and Cohesity have enough momentum to make them serious players, and Commvault has entered this market at quite a good time, I think. It's been proven as a market that's worth being in, but it's growing fast enough that the competition isn't too intense. That gives everyone time to make a bunch of money while their offerings improve, meaning we as customers end up with better products, rather than simply ones that are optimised to take as big a piece of a fixed pie as possible. There are incentives for all of the players to grow the pie as well as their own relative share. Now look for Dell EMC, HPE, Veritas, and all the other heritage enterprise backup and recovery vendors to pile in with similar offerings. The fear of missing out on what is now very clearly a real market will ensure that. Expect to see some minimum viable reworkings of other vendors' existing products as a knee-jerk reaction from those who didn't pick up on this market early enough. We should also see one or two quite well engineered offerings from those organisations who have reasonable long-term marketing and strategy capabilities. We'll see some entrants who come in too late, and are forced to exit after lackluster sales, and possibly one truly great piece of engineering that will have sadly missed the wave, which will get acquired by one of the middle strugglers as an attempt to reinvigorate its mediocre offering. The wonderful tech will then sink deep into the belly of the acquiring beast, where it will die from lack of attention, to the chagrin of technology lovers everywhere. The startups have had time to establish a foothold, but now we're seeing competition really start. As Foster told me: "Game on."
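A closing footnote on the resilience arithmetic mentioned earlier: a 4+2 erasure coding scheme splits data into four fragments plus two parity fragments, so any two of the six can be lost without losing data, at 1.5x raw capacity per unit of usable data. A quick back-of-the-envelope comparison with plain three-way replication:

```python
# Back-of-the-envelope storage overhead: 4+2 erasure coding vs. replication.
def raw_per_usable(data_fragments: int, parity_fragments: int) -> float:
    """Raw capacity consumed for each unit of usable data."""
    return (data_fragments + parity_fragments) / data_fragments

print(raw_per_usable(4, 2))  # 1.5 -- survives loss of any 2 of 6 fragments
print(raw_per_usable(1, 2))  # 3.0 -- three full copies, also survives 2 losses
```

Same two-failure tolerance, half the raw capacity, which is a large part of why scale-out secondary storage products favour erasure coding over simple replication.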
20fd6bdbe97fb6571eb9ac0bf8cf0bbd
https://www.forbes.com/sites/justinwarren/2019/10/14/commvault-launches-new-saas-offering-called-metallic/?sh=f663d41381b7
Commvault Launches New SaaS Offering Called Metallic
Commvault Launches New SaaS Offering Called Metallic Commvault has announced a new Software-as-a-Service offering called Metallic in its latest move in the hot market of enterprise data protection. Aimed at customers with between 500 and 2500 staff, Metallic provides a pure consumption-model alternative to Commvault's existing product line. It's also not tied to any specific cloud, providing a more pure-play Software-as-a-Service offering along the lines of Salesforce.com. Commvault CEO Sanjay Mirchandani (Photo: Danny Sanchez, Commvault) "We'll support AWS and Azure out of the box, with more [clouds] coming," said Sanjay Mirchandani, CEO of Commvault. "And, more importantly, if customers want to use their own cloud because they have a cloud contract, go for it." "And if they don't want to, we'll give them a completely transparent capability," he added. "We're not tied to any one cloud service." Providing customers with additional choice is at the core of Commvault's approach. This should broaden Commvault's appeal to customers who would otherwise have questioned Commvault's relevance to their situation. "We offer choice where it matters in the SaaS product, like for the storage, but it's completely guided in ten clicks," said Robert Kaloustian, Senior Vice President and General Manager of Metallic. Robert Kaloustian, Senior Vice President and General Manager of Metallic (Source: Commvault) "I was a CIO for many years, and one of the things that irked me was when technology providers told me how to use technology, as opposed to telling me what the capabilities were and letting me mold it into my environment," Mirchandani said. With Metallic, Commvault is looking to appeal to customers who have been abandoning traditionally complex and difficult-to-use on-site backup products for newer appliance and SaaS products that promise a better user experience. It may also go some way to address the perception that its traditional product is too difficult to use when compared to some of the options available from new startups. The competitive landscape has changed dramatically in the past few years. Rubrik and Cohesity have been winning plenty of new business with their simple Google-like interface and easy-to-install appliance form-factor, and Cohesity in particular has been aggressively expanding into active-secondary data. Carbonite, Arcserve, and Acronis are all doing well in the mid-tier, and Veeam just passed a billion dollars in revenue—not mere valuation—despite a not-as-successful-as-hoped expansion up-market into enterprise. SaaS-only startup Druva and new entrant Clumio are providing yet more options for those who want data protection to be easier to buy and use. In some ways Metallic could be seen as validating the market opportunity of SaaS-only backup, which will no doubt gratify Commvault's competitors (and their investors) in this space, but it will also provide additional competition, making market share harder and more expensive to come by. With huge raises now the norm, it's going to be that much harder for startup investors to get the return on their investment they may have been hoping for. This seems to be the beginning of a major refresh for Commvault after some years of struggling to find its way in the world. Mirchandani has been with the company for less than a year as CEO, and already has two major moves under his belt. 
Just over a month ago, Commvault announced its acquisition of unified storage vendor Hedvig, in what I saw as a clear counter to the ambitions of Cohesity, and to a lesser extent Rubrik. By adding a SaaS offering, Commvault is partly trying to prevent customers leaving for a competitor because they want something that Commvault couldn't provide them. It will also be hoping it can convince customers that this new offer will suit them today, and that the rich heritage and experience of Commvault means it will be suitable for whatever the customer's future needs will be. "We're extremely disciplined around this product now and will continue to be, but this does offer us opportunities in the future because we plug into so many things," said Kaloustian. Much will depend on the actual customer experience with the new product as it rolls out into the market. Early stumbles could derail the refresh before it has a chance to really get going, but the early signs are of a revitalised Commvault eager to bring the fight to its various competitors. We've yet to see similar moves from the other large incumbent backup providers. As always, it is customers that will ultimately benefit from these moves and counter-moves. Healthy competition spurs everyone in the market to improve their products and provide better value to customers. In that spirit, I welcome a resurgent Commvault and hope to see more like this in the near future. Hear my full interview with Sanjay Mirchandani and Robert Kaloustian.
d6ef21bdcf43913e1d44b769b69ce5bb
https://www.forbes.com/sites/justinwarren/2019/11/26/github-seeks-security-dominance-with-developers/
GitHub Seeks Security Dominance With Developers
GitHub Seeks Security Dominance With Developers GitHub has decided to make a play for being a one-stop shop for all things code security with a series of announcements made at its annual GitHub Universe conference. GitHub has mapped what it believes is a generally useful workflow for how various people involved in security—developers, security researchers, supply-chain partners, vulnerability database providers, etc.—work together to write and maintain secure code. It has then built features and tools, and in some cases acquired companies, to match these user needs with platform functions all centred on GitHub. Particular attention has been paid to the open source ecosystem that underpins so much of modern software. Rather than taking a point-by-point approach, GitHub is taking a more holistic view, where substantial individual features or tools nonetheless fit into a broader story about how security is managed. "It's an ecosystem level issue that spans from open source developer to open source maintainer to security researcher to developer in a company who's using open source to the security team at a company," said GitHub CEO Nat Friedman. "There's very few players in the ecosystem that can pull all that together and solve that problem." Nat Friedman, CEO, GitHub (Photo: supplied) To bring all these different players together on its products, GitHub has announced a raft of security-focused offerings. Security Lab is GitHub's collaborative centrepiece. It aims to provide a meeting ground for security researchers, software maintainers, and partner companies, and has a focus on open source software. Since so much open source software is hosted on GitHub these days, it's a logical thing for GitHub to do. CodeQL, obtained from its acquisition of Semmle in September 2019, is being provided free of charge to open source developers and academic researchers. The goal is to build up a library of CodeQL queries that can detect security flaws in an automated fashion, and GitHub has created financial incentives under a bug bounty program with two main payout classes: individual bugs and broader, cross-ecosystem bug types. GitHub has added a way to work on bug fixes in private (so as not to tip off bad actors too early) with its Security Advisories, which include a streamlined workflow for applying for a Common Vulnerabilities and Exposures (CVE) number. Security alerts for known vulnerabilities are also now generally available to repos that want them, including automated pull requests to fix bugs (where that's possible) using Dependabot, another acquisition GitHub made earlier this year. The process of triaging security issues and merging pull requests has also been made easier with a new mobile app for both iOS and Android that is designed to provide a similar experience to that of GitHub Desktop or in a browser, but optimised for the mobile form-factor. While few people will write code on their phones or tablets, many people involved in software development can manage tasks, comment on code, and operate the main branch-and-merge process without needing a heavyweight client. GitHub has spent considerable effort making the experience feel similar no matter which access method you use. There are a lot of different components here, some bigger than others, but taken together they're designed to work as a cohesive whole. 
Small changes, such as how notifications are presented and how CVE requests are handled, support the bigger functions such as CodeQL and vulnerability disclosure. Each component works in harmony with the others so that the whole becomes more than the sum of its parts. "We're acutely aware that securing software needs researchers working with maintainers and then working with developers as well as the end users, all together," said Grey Baker, Director of Product Management, Security at GitHub. "We're trying to create at least the tooling to make that possible." Grey Baker, Director of Product Management, Security, GitHub (Photo: supplied) The overall consistency of experience reduces the friction of working in a particular way, to the point that it becomes almost unconscious. The tools tend to disappear while you concentrate on the work and the objectives, and this is clearly GitHub's intent. While GitHub likes to stress that it wants to provide developers with choice, it clearly wants to make the integrated experience so compelling that developers don't choose to work anywhere but on GitHub. Extending this experience to the information security domain brings in a wealth of new customers whose participation is sorely needed if we are to have more secure software. It would be unwise to paint all this as some kind of altruistic plan on GitHub's part: ensuring a healthy and flourishing ecosystem serves GitHub's ends well. If GitHub becomes the de facto standard way to manage security maintenance of open source code, those processes will also become the de facto standard way to manage the security of proprietary code, just as feature branches and pull requests have become the main known-good way to write code. GitHub, so far, is setting the standard by which other methods will be measured. This is ultimately good for customers, provided that GitHub's competitors are able to provide a similarly compelling vision that isn't completely incompatible and doesn't force developers to disappear behind the walls of one garden or another. In speaking with other security vendors and researchers since the conference, all were generally supportive of GitHub's moves here. While they were keen to explain that other offerings (particularly their own) did more, or were more powerful in various ways, no one thought GitHub's actions here were making things worse. In the sometimes fractious world of information security, this is an encouraging sign that perhaps collaboratively fixing system security is an idea whose time has come.
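For the curious, the advisory data feeding these alerts is queryable programmatically. A minimal sketch, assuming a personal access token in the GITHUB_TOKEN environment variable and using the securityAdvisories query exposed by GitHub's public GraphQL API; treat the exact fields as an assumption to verify against the current schema.

```python
# Minimal sketch: list recently published security advisories via
# GitHub's GraphQL API. Assumes a personal access token in GITHUB_TOKEN.
import os
import requests

query = """
{
  securityAdvisories(first: 3, orderBy: {field: PUBLISHED_AT, direction: DESC}) {
    nodes { ghsaId summary severity publishedAt }
  }
}
"""

response = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},
    headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
)
response.raise_for_status()
for advisory in response.json()["data"]["securityAdvisories"]["nodes"]:
    print(advisory["ghsaId"], advisory["severity"], advisory["summary"])
```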
8a82c8999982a977a404b465ee590150
https://www.forbes.com/sites/justinwarren/2021/02/03/placementcom-rapidly-responds-to-change-with-renders-cloud-infrastructure/
Placement.com Rapidly Responds To Change With Render’s Cloud Infrastructure
Placement.com Rapidly Responds To Change With Render’s Cloud Infrastructure Job search company Placement recently shifted its focus as a result of Covid-19, and cloud infrastructure provider Render made the shift easy. Placement is a recruitment company focused on the people who are looking for jobs, not on the employers or recruiters that are the focus of most recruitment companies. "Our whole goal is to be 100% candidate aligned," said Sean Linehan, co-founder and CEO of Placement. Sean Linehan, co-founder and CEO of Placement (Photo: supplied) "When we started the company, we were originally helping people relocate for better job opportunities," said Linehan. But as the Covid-19 pandemic started to take hold, the changing market conditions prompted a rethink of Placement's priorities. "When Covid hit, two things happened that were very interesting," said Linehan. "Firstly, people were moving a lot more than they were prior to Covid. But secondly, those moves were really disconnected from a job change. The question about where you should physically be wasn't bundled up together with the job search question. It wound up being a question on its own." "We took stock of what we were good at, and what we cared about, and we decided to go basically all in on the job search side of things." Changing the focus of a high-growth startup in the middle of a pandemic meant acting quickly. Big changes to application functionality often have impacts on the underlying infrastructure, and more than a few founders have gotten a surprise when their new plans cause headaches for their DevOps or SRE teams. Modern cloud environments full of microservices are complex beasts, and major changes to the business direction can have substantial impacts. "Throughout the business model transformation there was no concern about what had to happen, technically, on the infrastructure side. I didn't have to engage with a DevOps team to make sure that we could handle the new load. I knew that Render could make it easy for me to just scale that up," Linehan said. Cloud services are a regular part of most tech startups' toolkit, but the complexity of managing cloud systems is often overlooked. While cloud makes spinning up and shutting down services much easier than provisioning the equivalent physical infrastructure, operating cloud services well still takes skill and experience. "I have tonnes of experience with the managerial and attention overhead that comes from using the big cloud providers directly," said Linehan. "In a past life, I built internal tools for all these types of DevOps and infrastructure tasks but, you know, it was always a distraction from the core product and business." That distraction indicates an opportunity to build higher-level abstractions to take care of the details. Just as we replaced writing software in assembler with higher-level languages, so too has cloud provided an abstraction over physical infrastructure, at least in part. Services like Render are a logical evolution of computing abstractions that allow us to concentrate on what matters to humans, not what matters to the computers. "We're walking our way up to the next level of tech stack abstraction and saying, 'Look, what if most companies don't have to worry about those details? 
What if that was actually abstracted away from you as well and you can just worry about the things that matter to you?'" said Linehan. "We do use AWS directly for some things where we need, like, super-insane control. But in most cases, we don't need super-insane control. We just want rational, standard, good defaults. We don't want somebody who needs to be an EC2 consultant to be able to tell us what we should be running," he said. The broad ecosystem of AWS consultants and integrators is a credit to the success of AWS, but it also indicates a gap in capability that is being filled in with human workarounds. "With Render, I got all the nice things that I wanted and I didn't have to waste any time building them," Linehan said.
e7128d73fb937b432aeaaab646028ead
https://www.forbes.com/sites/jvchamary/2014/08/13/ebola/
Ebola Outbreaks Visualized In 5 Charts
Ebola Outbreaks Visualized In 5 Charts The Ebola virus is one of the most virulent microbes known to man, killing up to 90% of those infected. According to the World Health Organization, there have been 25 disease outbreaks in recorded history. The current epidemic in West Africa is the largest to date and has now killed over 1000 people. NOTE: The charts below use data up to 9 August. Although the differences in magnitude between the ongoing 2014 Ebola epidemic in West Africa and previous outbreaks have since grown, the relative differences remain the same. Read: 4000 Deaths And Counting: The Ebola Epidemic In 4 Charts Is the 2014 epidemic the deadliest outbreak ever? WHO's chronology of previous Ebola virus disease outbreaks is a long table and most graphics pack too much information onto a map or graph, so I've created some simple charts to share. To compare outbreaks, I took WHO's table and added numbers for the ongoing epidemic (until 9 August 2014), did some calculations, then pushed the figures through an app to visualize the data. The infographics below have few labels and are designed to speak for themselves, but I've also highlighted some key stats and background facts. 1. Virulence Per Outbreak EBOLA VIRUS DISEASE: VIRULENCE PER OUTBREAK. Rectangles are outbreaks, labelled with year. Size is... [+] number of disease cases. Color is virulence, the case fatality rate from 0% yellow to red 100%. (Data: WHO / CC BY: JV Chamary / Source: http://onforb.es/Y3YjoG) Ebola has a mean virulence of 61%, killing a total of 2603 out of 4235 people since records began. Virulence is the 'case fatality rate', the percentage of cases that lead to death. On this chart, color represents virulence from yellow (0% fatality) to red (100%). The 2014 epidemic (outbreak 25) has killed more people than any other outbreak in history. But it's not the most fatal: although it has the highest death toll and accounts for 44% of recorded cases, it's 'only' killed 55% of those infected (1013/1848) and is orange on this chart. With a 90% case fatality rate (128/143), the 2003 epidemic in Congo (outbreak 14) is the most virulent to date, and is red on this chart. Whether something is 'deadly' is usually defined by the likelihood of dying from it, not the total number it kills. By this definition, the 2003 Ebola epidemic is the deadliest outbreak. The least virulent outbreak occurred in Uganda in 2007. It had 25% fatality (37/149) and is yellow on this chart. (The most and least virulent outbreaks exclude three cases of 100% fatality that each killed one person and an outbreak where the only infected person survived, the red and yellow lines in the top-left corner.) Ebola was first identified in 1976, a year that saw two simultaneous outbreaks. One occurred in Sudan and killed 53% (151/284). The other is the 3rd largest and has the 2nd highest virulence at 88% (280/318), and is red on this chart. The disease is named after the site of this outbreak, the Ebola River in the Democratic Republic of Congo. 2. Virulence Per Country EBOLA VIRUS DISEASE: VIRULENCE PER COUNTRY. Circles are countries. Size is number of disease cases.... [+] Color is virulence, the case fatality rate from 0% yellow to red 100%. 'Guinea' represents West Africa. (Data: WHO / CC BY: JV Chamary / Source: http://onforb.es/Y3YjoG) The 2014 epidemic is labelled 'Guinea' because that's the country where the outbreak started. Spreading across West Africa, it has so far hit Guinea, Liberia, Nigeria and Sierra Leone.
It's the only time Ebola has invaded those four countries, so the figures for 'Guinea' equal those for 2014, a total of 1848 cases with 55% virulence. Congo (the Republic of Congo) has suffered the highest virulence at 85%, and is red on this chart. The country where Ebola virus disease was discovered, the Democratic Republic of Congo, has the 2nd highest number of cases and 2nd highest virulence. It's labelled 'DR Congo' and was known as Zaire between 1971 and 1997. (Highest virulence excludes an isolated case from 1996 that killed a nurse in South Africa.) 3. Virulence Per Species Four species of Ebolavirus cause disease in humans: Bundibugyo virus (BDBV), Sudan virus (SUDV), Taï Forest virus (TAFV) and Ebola virus (EBOV). EBOV is the most virulent species, killing 2111 people out of 3236 cases, which means two-thirds of infections lead to death. It was previously known as 'Zaire virus' after the country where the disease was identified (now DR Congo) and is the species after which the others are named, what biologists call a 'type species'. SUDV is the next most deadly virus, with 54% fatality (426/792) and has only killed people in Sudan and Uganda. BDBV leads to death in a third of infections (66/206) and has appeared in DR Congo and Uganda. TAFV has not killed anyone so far and was behind a single case in Ivory Coast. EBOLA VIRUS DISEASE: VIRULENCE PER SPECIES. Circles are virus species. Size is number of disease... [+] cases. Color is virulence, the case fatality rate from 0% yellow to red 100%. Species: BDBV, Bundibugyo virus; EBOV, Ebola virus; SUDV, Sudan virus; TAFV, Taï Forest virus. (Data: WHO / CC BY: JV Chamary / Source: http://onforb.es/Y3YjoG) 4. Outbreak Deaths By Species EBOLA VIRUS DISEASE: OUTBREAK DEATHS BY SPECIES. Stream size is number of deaths per outbreak.... [+] Species: BDBV, Bundibugyo virus; EBOV, Ebola virus; SUDV, Sudan virus; TAFV, Taï Forest virus. (Data: WHO / CC BY: JV Chamary / Source: http://onforb.es/Y3YjoG) Ebola virus (EBOV) has caused the most disease, 15 out of 25 outbreaks across history. It's behind the 2014 epidemic (outbreak 25) and the first outbreak from 1976. Sudan virus (SUDV) has caused seven outbreaks and has also been active since Ebola was discovered. Bundibugyo virus (BDBV) first appeared in 2007 and has caused two outbreaks. Taï Forest virus (TAFV) was detected in one individual from Ivory Coast in 1994. The size of the streams in this chart reflects the number of deaths per outbreak, but weighting the streams by number of cases or virulence produces exactly the same pattern. 5. Annual Deaths By Country EBOLA VIRUS DISEASE: ANNUAL DEATHS BY COUNTRY. Stream size is number of deaths per year. 'Guinea'... [+] represents West Africa. (Data: WHO / CC BY: JV Chamary / Source: http://onforb.es/Y3YjoG) The Democratic Republic of Congo (DR Congo) has suffered from Ebola ever since the disease was first identified there in 1976, whereas Congo only recorded outbreaks in the early 2000s. One person died in South Africa in 1996 and one survived in Ivory Coast in 1994. Ebola started killing people in Uganda in 2000, but it's been a decade since someone died in Gabon or Sudan. The countries affected by the 2014 epidemic, labelled 'Guinea' (currently Guinea, Liberia, Nigeria and Sierra Leone) have never before been struck by Ebola. Deaths over time reflect a country's vulnerability to disease.
The spread of any disease, the epidemiology of Ebola included, is shaped by human behaviour: socioeconomic pressures can drive people into regular contact with animals carrying the agents of disease. Famines might cause people to eat more bushmeat containing viruses, for example, while urbanisation forces the habitats of humans and Ebola-carrying animals to overlap. So why has the current outbreak claimed so many lives? The 2014 epidemic is actually below the average virulence for all other outbreaks in recorded history (55% versus 67% case fatality rate), which suggests the high death toll isn't being driven by the emergence of a new, deadlier strain of Ebola virus (though it might be a more infectious strain). Parasites adapt and evolve to infect their hosts, but I suspect this isn't the reason why the Ebola virus has killed so many people in 2014. The most likely explanation for the large number of Ebola victims is that West Africa was already at high risk of spreading the disease, and all it took was one unfortunate incident to set it off. Exactly why some countries have been more vulnerable to the worst ever Ebola outbreak remains to be seen.
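Because virulence here is just deaths divided by cases, the chart figures are easy to double-check. Below is a minimal Python sketch that recomputes the case fatality rates quoted above from the WHO numbers cited in the article (data up to 9 August 2014); the outbreak labels are my own shorthand.

```python
# Recompute the case fatality rates (virulence) quoted in the charts.
# (cases, deaths) pairs are the WHO figures cited in the article.
outbreaks = {
    "1976 Zaire": (318, 280),
    "1976 Sudan": (284, 151),
    "2003 Congo": (143, 128),
    "2007 Uganda": (149, 37),
    "2014 West Africa": (1848, 1013),
}

for name, (cases, deaths) in outbreaks.items():
    cfr = 100 * deaths / cases  # virulence = case fatality rate
    print(f"{name}: {cfr:.0f}% ({deaths}/{cases})")

# Mean virulence across all 25 recorded outbreaks combined
total_cases, total_deaths = 4235, 2603
print(f"Overall: {100 * total_deaths / total_cases:.0f}%")  # 61%
```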
67169886991e71e328b84ae10667e750
https://www.forbes.com/sites/jvchamary/2015/11/27/hungry-bacteria/
Mind-Control Bacteria Stop You Eating Too Much
Mind-Control Bacteria Stop You Eating Too Much Rat brain cells (green) activated by E. coli proteins (Image: J Breton, N Lucas and D Schapman) What determines whether you feel hungry or full? Appetite is mainly dictated by hormones made in the gut that circulate via the blood before being detected by the brain, whose neural circuits control when you feel satiated. Those hormones are released by the digestive system's cells, but researchers have now found that this process is also influenced by our gut bacteria – suggesting that microbes help control how much food we eat. This finding stems from coincidence: a team led by Sergueï Fetissov of Rouen University, France, noticed that E. coli would produce certain proteins after they'd been growing for 20 minutes, which matches the amount of time it takes for someone to start feeling full after a meal. When the French researchers injected small doses of the bacterial proteins into mice or rats, the animals would eat less (regardless of whether they were hungry or well-fed). Those proteins would also cause rodent cells to release 'peptide YY' – an appetite-suppressing hormone – and 'glucagon-like peptide-1', which stimulates the body to release insulin. Published in the journal Cell Metabolism, the study also showed that an E. coli protein called 'ClpB' would activate brain cells that reduce appetite. The E. coli come from the community of so-called 'good bacteria' that live in a mutually beneficial relationship inside their human hosts. And according to one explanation, mind-control benefits mutualistic bacteria because it helps keep the gut ecosystem relatively stable by preventing a constant influx of food.
1d6d8013bac051929e3a2dee501f5472
https://www.forbes.com/sites/jvchamary/2015/11/28/tardigrade-genome/
Indestructible 'Water Bears' Have Really Weird Genomes
Indestructible 'Water Bears' Have Really Weird Genomes Hypsibius dujardini (original image CC BY-NC-SA 2.0: Bob Goldstein and Vicky Madden /... [+] https://flic.kr/p/5kZxbg) Tardigrades look like chubby, mutated bears with four pairs of legs. Nicknamed 'water bears', the microscopic animals (less than 0.5mm long) are among the world's toughest creatures: dried-up tardigrades can remain in suspended animation for years, then come back to life after 20 minutes in water, and they're resistant to harsh temperatures and pressures – even outer space. Biologists now believe the ability to survive extreme environments is related to the animal's weird genome. A team led by Bob Goldstein of the University of North Carolina at Chapel Hill read the DNA sequence of the tardigrade Hypsibius dujardini and discovered that it has an unusually large amount of foreign genetic material. In most animals, less than 1% of a genome comes from foreign DNA. In H. dujardini, about 1-in-6 (about 6000) genes originate from other organisms, mainly bacteria. Normally, almost all DNA will be inherited from an individual's parents, through 'vertical' transfer from one generation to the next. But genetic material from one organism can sometimes be assimilated into the genome of another. This process – 'horizontal gene transfer' – can help spread antibiotic resistance among bacteria, for example. But while swapping genes is common in microbes, it's relatively rare in multicellular organisms, probably because foreign DNA can only be transferred from parent to offspring if it enters reproductive cells (sperm or egg). Why do water bears have so much foreign DNA? One potential explanation is that it's down to how they survive desiccation. Goldstein and colleagues believe that under dry conditions, a tardigrade's genome gets broken into pieces and, because cell membranes temporarily become more leaky during rehydration, large molecules (like DNA) are then allowed inside a cell. The foreign DNA is mixed with the native genome, adding to (or replacing) genes. It remains to be seen whether horizontal gene transfer really is responsible for why tardigrades are nearly indestructible.
bbd3a3f17196c0ac797ad2d924253a3b
https://www.forbes.com/sites/jvchamary/2020/06/28/airborne-coronavirus/?sh=6ab2488836ff
Coronavirus Is Airborne, But That Doesn’t Mean You’re Always At Risk
Coronavirus Is Airborne, But That Doesn’t Mean You’re Always At Risk Runner wearing face mask. Getty Viruses need liquid to survive and spread. Once outside the body, they must remain in a watery solution such as snot or saliva so they don't dry-out and eventually disintegrate. Every time you cough, sneeze, talk or even breathe, your fluids are expelled as droplets, which stay in the air for a certain amount of time and can settle on surfaces before they evaporate. Experts disagree over whether or not the SARS-CoV-2 coronavirus is airborne. The argument is central to health policy because if droplets can only travel two metres through the air before falling to the ground, for instance, it would help support the six-foot rule for physical distancing. As the size of a droplet influences how far it might fly, scientists also argue over the distinction between an 'aerosol' — a fine mist of liquid particles in air — and a 'droplet'. Some researchers think expiratory particles only count as aerosols if they're tiny, so anything wider than 0.005mm (five microns) is a droplet, while others believe 0.01mm is a better cut-off. The figure is arbitrary and the distinction is irrelevant to the average person, but it's relevant to public health authorities: if a virus is carried via aerosols, you could conclude that it's also 'airborne'. Authorities like the World Health Organization and US Centers for Disease Control often send mixed messages because they want to be scientifically accurate while also giving unambiguous advice. That isn't always possible though, as science can involve fuzzy concepts and the general public just wants a clear answer to a question like 'Is Coronavirus airborne, Yes or No?' Meanwhile, news outlets sit on the fence with headlines like "Coronavirus Isn't Airborne - But It's Definitely Borne By Air." As a trained biologist and science communicator, I'm going to give you a clear statement: Yes, Coronavirus is airborne. That comes with an important caveat, which is that whether a virus remains in the air long enough to worry about will depend on the environment — your surroundings and the people it contains. So how can you tell if an airborne virus poses a considerable risk to your health? One way is through some back-of-the-envelope calculations. Given that Coronavirus is airborne, we can estimate your risk of catching it (and possibly developing disease) for various scenarios. Although several factors play a part in how long a virus can hang around in the air, the two key ones are: Boundary of the surroundings (B) Activity of people in the area (A) Both factors can be broken down further (surface area within your surroundings is influenced by furniture, for instance), but our aim is to keep things really simple, which is why we'll score boundary (B) and activity (A) on a scale from 0 to 3. An open space effectively has zero boundaries (B=0), a large park or secluded beach is slightly enclosed (B=1), a supermarket or office building is more confined (B=2) and a crowded train carriage or hotel elevator is a closed box (B=3).
The frequency of people passing through that area over time — its activity — will be proportional to how many individuals may be carrying COVID-19 (including potential asymptomatic cases), which in turn determines the amount of virus in the air and on contaminated surfaces. We'll score activity on the same scale, with no people (A=0), a quiet area (A=1), normal activity (A=2) or a busy area (A=3). R = A x B We can now calculate the relative risk (R) for visiting an area using the above formula. Multiplying scores for the boundary of your surroundings (B) by the activity within it (A), we get a total for its risk, which has a maximum value of 9 (a score of 10 would be reserved for extremely risky situations, such as those faced by health workers who are exposed to viruses on a daily basis). I've compiled a table with a few scenarios below. The relative risks of catching an airborne virus. CC BY 4.0 JV Chamary Performing a risk calculation is a practical approach to making objective decisions on behavior, like whether you should wear a face mask outside. Unlike governments, which often shirk responsibility and expect citizens to use subjective 'common sense' (whatever that means), the risk score doesn't lie and is less open to interpretation. If you think a scenario with R greater than 6 is too high, you might decide not to take the risk. When I go running through green spaces with my dog, for instance, it's not really necessary to wear a mask because R = 0 (1 x 0) most of the time — we only need to be careful when approaching another jogger to avoid crossing paths. On the other hand, I'll put on a mask before entering my local convenience store because it has high footfall and the aisles are narrow (3 x 2 = 6). While public health authorities like the WHO and CDC give ambiguous and contradictory guidelines, scientists continue to test the extent to which Coronavirus is airborne: studies of hospital rooms containing patients with COVID-19 have found traces of virus in air samples, albeit at very low levels. Such research usually involves detecting genetic material, which is unreliable because the virus may be missing the outer shield and spike proteins that enable it to invade human cells. A shipwreck doesn't count as a seaworthy vessel, and viral genes aren't infectious on their own. Coronavirus can't infect you without bodily fluids. Regardless of whether it's carried in aerosols or droplets, either way it will spend time in the air.
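Since the formula is fully spelled out above, the scoring can be wrapped in a few lines of code. Here's a minimal Python sketch of the R = A x B calculation using the article's own example scenarios; the category labels are my own shorthand for the B and A scales.

```python
# Relative risk of exposure to an airborne virus, R = A x B (0-9 scale).
BOUNDARY = {"open space": 0, "park or beach": 1, "shop or office": 2, "train or elevator": 3}
ACTIVITY = {"no people": 0, "quiet": 1, "normal": 2, "busy": 3}

def risk(boundary: str, activity: str) -> int:
    """Multiply the boundary score (B) by the activity score (A)."""
    return BOUNDARY[boundary] * ACTIVITY[activity]

print(risk("park or beach", "no people"))  # 0 -> running through a green space
print(risk("shop or office", "busy"))      # 6 -> busy store with narrow aisles
```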
6151a0b20a015c759ba21aab05a7f293
https://www.forbes.com/sites/jvchamary/2020/07/30/coronavirus-face-touching/?sh=75cd664a375f
You’ll Be Surprised How Often You Actually Touch Your Face
You’ll Be Surprised How Often You Actually Touch Your Face Woman rubbing her eyes at work while wearing a mask. getty Coronavirus has made people more aware of personal hygiene, but there are still many unhygienic things you probably don't know you're doing. One is just how often you touch your face — an activity that happens far more frequently than you might think. Normally you won't consciously notice when you rub your eyes, pick your nose or bite your nails. It's only after someone points it out that you'll realize it's happening. Those bad habits have implications for public health because good hand hygiene is vital to stopping the spread of infectious diseases like Covid-19. Being aware of 'contact transmission' is particularly important during the pandemic because some people are infectious yet asymptomatic and transfer the SARS-CoV-2 virus from their skin to widely-used surfaces such as door handles. And if you touch a contaminated surface and then touch parts of the body where germs enter, you might infect yourself — a process called 'self-inoculation'. Until a few years ago, there was little data on environmental contamination from everyday activities and potential self-inoculation for microbes behind respiratory infections, only the occasional study on the rhinoviruses that cause the common cold (an illness also caused by some relatively harmless coronaviruses). One well-known study on the frequency of face touching was carried out in 2015 by researchers at the University of New South Wales in Sydney, who analyzed video recordings of medical students during lectures. Despite having completed an infection control course that covered proper hygiene and precautions against transmission, the research showed that students frequently touched their faces, highlighting the fact that even highly-educated, future doctors are only human. The Australian study counted a total of 2346 touches over 4 hours among 26 students. Its results suggest that, on average, people touch their face around 23 times per hour. The researchers also kept a tally of whether students touched the 'mucosal area' (eyes, nose, mouth) and other places around the face (such as ears, chin, cheeks, forehead). Touching the mucosal area is especially relevant to respiratory viruses like SARS-CoV-2 because that region is the main entry point for an invading virus particle. In the Australian observations, 44% of touches involved the mucosal area while 56% were to the other facial regions. Such studies can be criticized for only tracking a small number of people in a subset of the population (26 medical students). What about the general public? According to a 2020 systematic review of the scientific literature by epidemiologists at the University of Auckland in New Zealand, face touching actually occurs at least twice as frequently as that often-cited figure of 23 times an hour. The reviewers started with almost 9000 research papers and ruled-out the vast majority based on measures like bias and quality of experiments, which left 10 observational studies that featured systematic analysis of face-touching among a variety of people in various environments, including healthcare workers, office employees and visitors to a petting zoo.
Of the final 10 studies, six took place in the US, one in Australia (the study described above) and the rest in the UK and Japan, with only minor differences between countries. For example, British people touched the mouth and eyes more often than the Japanese, who would more frequently touch their eyes and nose. Average proportion of face touches within the T-zone. Rahman et al (2020) Annals of Global Health Instead of dividing a face into the 'mucosal area' versus other regions, the reviewers compared touches to the facial 'T-zone' — eyes, nose, mouth, chin — against other parts of the head — which include the neck, hair and artificial extensions (like earrings or glasses). The reviewers used the T-zone because resting your chin on an open hand or fist (famously illustrated by Rodin's Thinker) is a common behavior in primates. It also makes sense because the chin is close to the mouth and nose: men will often move their hand up the face when stroking their beard, for instance, whereas moving hands all the way across your head from an ear to the mucosal area is relatively rare. The 2020 review found that, on average, people touch the T-zone almost 69 times per hour, and touch other parts of the head 50 times. This result has an obvious application in informing the public about the dangers of avoidable face touching to help stop the spread of Covid-19. As the reviewers concluded, "awareness of T-zone touch could significantly reduce the infection rate." Prompting people to break habits is hard, however, especially when they aren't consciously aware of them. It requires clear public health guidelines, and maybe some psychological tricks, to encourage people to change their ways. Meanwhile, some physical interventions can offer an added bonus. While masks are primarily designed to prevent viral transmission via airborne droplets, they provide a secondary benefit by blocking direct contact with your hands, offering another way to maintain good hygiene.
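As a quick sanity check, the headline rates above follow directly from the raw counts. Here's the arithmetic as a back-of-the-envelope Python sketch, using only figures quoted in this article:

```python
# The 2015 Australian study: 2346 face touches in 4 hours among 26 students.
touches, hours, students = 2346, 4, 26
per_person_per_hour = touches / (hours * students)
print(round(per_person_per_hour, 1))  # ~22.6, i.e. the oft-cited ~23 touches/hour

# 44% of those touches involved the mucosal area (eyes, nose, mouth)
print(round(0.44 * per_person_per_hour, 1))  # ~10 mucosal touches per hour
```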
09c07379fef83248e61b3a70b64e153c
https://www.forbes.com/sites/jvchamary/2020/09/30/if-you-play-sports-dont-worry-about-coronavirus-on-your-equipment/?sh=7f7ed65225c6
Play Sports? Don’t Worry About Coronavirus On Your Equipment
Play Sports? Don’t Worry About Coronavirus On Your Equipment Baseball bat, ball and N95 face mask. getty Coronavirus has made us all more aware of the potentially contaminated surfaces on objects we once didn't worry about touching and sharing. That includes the balls used in sports for exercise and entertainment. Because a sports ball is usually passed from one player to the next, it could potentially act as a vector that spreads the SARS-CoV-2 coronavirus. A ball's surface can become contaminated by virus particles in respiratory droplets expelled by infected people who show no symptoms (asymptomatic carriers) via breathing or sweating during physical activity, for instance. Science has some good news about sports balls: they seem to be relatively easy to disinfect, according to a study from a mainly-British multidisciplinary group led by cancer researcher Justin Stebbing at Imperial College London and fund manager Peter Davies, non-executive chairman of Oxford Sciences Innovation. The new study involved testing sports balls after they had been 'infected' with SARS-CoV-2. Their contaminated surfaces were disinfected using common cleaning wipes and then tested for the presence of virus particles. Several experiments were carried out using various concentrations of virus and disinfectants, such as cloths containing 70% isopropyl alcohol and 'wet-wipes' similar to moist toilet tissue. The balls were those used in cricket, tennis, golf and football (soccer), covered by materials like leather, felt or plastic. A cricket ball is similar to a baseball and is possibly the most-handled ball in sport. One experiment used a solution of SARS-CoV-2 diluted by 50%. The solution was smeared onto the balls, which were rolled around a grass field for 5 minutes to mimic having been played with, then wiped thoroughly with an alcohol-based cloth for 2 minutes, rinsed with water and left to dry at room temperature for 2 hours — a procedure replicating how a ball might be disinfected after a game — while the dilution might represent mixing infectious respiratory droplets with sweat. No virus was detected following that experiment. In another experiment, undiluted virus was pipetted directly onto the surface of a cricket ball, mimicking the effect of a cough, spitting or sneezing into a player's hands. Virus was detected on the ball's surface 1 hour later, but when the solution was diluted by 50%, the virus could only be detected 5 minutes later. That result illustrates the importance of disinfecting sports equipment after use. Surprisingly, one experiment showed that the cloth used to clean the surface of a ball doesn't matter much: regardless of whether a wet-wipe or dry paper tissue was used, no virus was detected from a 50% diluted solution. That suggests that in real-world settings, where droplets have dried-up and any virus particles are wiped away, the ball is unlikely to be 'infectious'. Little is known about environmental contamination by SARS-CoV-2, and we still don't know how long it lasts on surfaces. According to a review led by Günter Kampf of the Institute of Hygiene and Environmental Medicine in Germany, coronaviruses persist on inanimate surfaces for anywhere from 2 hours to over a month — depending on conditions such as temperature and humidity.
Although the new study focused on sports balls, its results can be compared to previous research. An early study under laboratory conditions found that SARS-CoV-2 remains stable on metals and plastic for three days. Such materials are used to make objects with a large surface area, like furniture, so testing the materials used in small balls provides an interesting contrast. Overall, the results suggest that the concentration of virus affects whether it is detected later, and that you might prevent a ball from becoming a vector for transmission with relatively little cleaning. As the study's authors conclude, "Sports objects can only harbour inactivated SARS-CoV-2 under specific, directly transferred conditions, but wiping [...] removes all detectable viral traces. This has helpful implications to sporting events." The implications will be reassuring for those of us who play with our balls on a regular basis. That group includes not only amateur sportsmen and people who use them in physical activity (like an exercise ball at a gym) but also professional athletes who are constantly handling balls that have been touched by others. At a time when any news about Covid-19 is almost always depressing, being entertained by sports and science stories is probably good for mental health.
26b9d67f497e214f9f156d6c8860ebcc
https://www.forbes.com/sites/jvchamary/2021/02/25/coronavirus-star-wars-mrna-vaccine/
This Star Wars Analogy Explains How RNA Vaccines Work
This Star Wars Analogy Explains How RNA Vaccines Work Coronavirus Death Star and LEGO X-wing starfighter. CC BY 4.0 JV CHAMARY / Adapted from Public Domain images by Alissa Eckert & Dan Higgins (https://phil.cdc.gov/Details.aspx?pid=23311) and Pascal (https://flic.kr/p/Kum26i) Webcomic XKCD recently used Star Wars to illustrate how mRNA vaccines work. The analogy revolves around how the Rebel fleet destroys the Death Star after seeing its blueprints, which is similar to how your immune system can defeat a Coronavirus infection after cells read the messenger RNA (mRNA) in a vaccine. Using mRNA is counterintuitive. Conventional vaccines expose you to a specific 'antigen' — a molecule that triggers immune cells to generate antibodies — in advance, so your body can later target a virus' antigens if you're infected. But the new vaccines developed by Pfizer/BioNTech and Moderna work differently. Instead of delivering the antigen itself, they contain the genetic instructions — encoded in an mRNA molecule — so your body makes the antigen. Specifically, it produces the spike protein that lets SARS-CoV-2 coronavirus invade cells. That approach is actually a bit weird, as highlighted by XKCD's comic strip. In the first panel, a stick figure injecting the vaccine says "Your body reads the mRNA, makes the proteins, and then has an immune reaction to them." The person receiving the jab asks, "Why would my body attack something it made itself?" The Rebel fleet attacks a fake Death Star. https://xkcd.com/2425/ The comic answers that question through a hypothetical sequence that could have occurred before the events of Star Wars: A New Hope, which ends with the Rebel Alliance destroying the Empire's moon-sized battle station, AKA the 'Death Star'. In the strip, Princess Leia gives the Death Star blueprints to the Rebels. But following miscommunication down the chain of command, those plans end-up in the hands of a construction crew, which interprets them as an order to actually build the 'big metal orb thing'. Starfighters destroy the fake Death Star. https://xkcd.com/2425/ After seeing the giant sphere in orbit, the Rebels think it's a genuine Imperial battle station so they send a fleet of spaceships to attack the fake Death Star. They eventually blow it up after finding a weak spot, a 'thermal exhaust port'. A real Death Star appears a few months later and, mirroring the movie plot, the Rebel ships immediately destroy the Imperial battle station after targeting its thermal exhaust port. The last few panels therefore reflect how a vaccinated person's defences can target a virus antigen that they've encountered before. The Rebel fleet attacks a real Death Star. https://xkcd.com/2425/ The Star Wars analogy is more sophisticated than it seems. After reading several incorrect interpretations on Explain XKCD and Reddit, I realized that the strip requires some further explanation on what the analogy's main elements represent. The Rebel Fleet The Rebel Alliance is the human body and the construction crews are the protein-making machinery within cells. The Rebel fleet is the immune system and consists of spaceships that shoot 'proton torpedoes' at the Death Star, such as X-wing and Y-wing starfighters that could represent the B-cells and T-cells of the adaptive immune system.
In XKCD's comic, Leia shouts "Build more torpedoes!" and "Keep building ships!", which is analogous to how the adaptive system produces a variety of immune cells with the aim of creating, for example, one B-cell with an antibody that matches a specific region of the Coronavirus spike protein. The matching antibody will recognize and bind to spike proteins. That allows other parts of the immune system to recognize, neutralize and destroy anything covered in those proteins, such as virus particles or infected cells — like a proton torpedo hitting the exhaust port and causing the Death Star to explode. After winning a battle, some cells remain to remember the weak point on the virus' spike protein, meaning that veteran pilots are 'memory cells' of the immune system, which is what provides you with protection from infection, or immunity. The Death Star The Galactic Empire or Imperial fleet is Coronavirus (or indeed any germs) and a Death Star is any object that carries spike proteins. That last part is probably a bit confusing. It's because in the XKCD comic, the Fake Death Star is a human cell making spike proteins and the Real Death Star is a virus particle covered in spike proteins. So although the two look the same, they represent different objects: either a cell or a virus (the objects are different sizes too, but that's not particularly important). At one point the analogy fails because the Rebel construction crew's foreman mentions that they "finished building" the fake Death Star and "don't even have the laser thing wired up", which suggests they built an entire virus particle — both inside and out. That contradicts the fact that mRNA vaccines only carry the gene for making the spikes, not a whole Coronavirus. Those genetic instructions are more like blueprints for one section in the battle station's outer hull. The error could have been avoided by depicting the fake Death Star as a Rebel factory in space. That factory would manufacture the outer sections — which carry Imperial markings — until they end-up being surrounded and enclosed in a metal orb, causing the factory to be mistaken for an Imperial battle station. Princess Leia The Death Star plans are mRNA and Princess Leia is the vaccine that delivers those blueprints for the spike protein (Death Star) to the human body (Rebel Alliance). Leia is perfect for the analogy because she's initially a member of the Galactic senate who becomes a General in the Rebel Alliance, just as the gene for making the spike protein was originally derived from a virus but is repurposed for use in a vaccine. She carries information from a bad germ that becomes good for your health. XKCD makes an error in a speech bubble, saying "the vaccine is just blueprints." That's not true because almost all vaccines contain not only active agents — in this case, mRNA — but also special ingredients called 'adjuvants'. Adjuvants are substances (such as aluminium salts) that aren't commonly found in the body and help draw your immune system's attention to the active agent. That detail can extend the analogy. In Star Wars, if the Death Star plans were handed to Rebel leaders by some random low-ranking freedom-fighter, they might not have been taken seriously. But as a Princess and Senator, Leia's status adds gravitas and authenticity to what she delivers — much like the adjuvants in a vaccine. XKCD's comic strip is a good starting point for understanding how mRNA vaccines work, and only needs a few tweaks to turn it into an even better Star Wars analogy. 
368c4fc084d2a71d19b689057b91ac45
https://www.forbes.com/sites/jvchamary/2021/02/26/coronavirus-google-ngram/
What Did Google Know About Coronavirus Before The Pandemic?
What Did Google Know About Coronavirus Before The Pandemic? Syringes in front of Google logos. NurPhoto via Getty Images Coronavirus became etched into popular culture after the SARS-CoV-2 virus spread across the globe in 2020, but people have been aware of coronaviruses for decades. Was 'coronavirus' well-known before the Covid-19 pandemic? One way to estimate a word's popularity is to ask Google. More specifically, you can use Ngram Viewer, a tool that displays a graph of how frequently phrases appeared in printed material between 1800 and 2019. The tool searches the 'corpus' or body of literature Google has scanned and archived, which includes 8 million books. The graphs below show results for 'coronavirus(es)' and two associated terms. For comparison, I also searched for 'COVID' — a disease that didn't exist prior to 2019 — and 'SARS', the acronym for Severe Acute Respiratory Syndrome, which can refer to both the disease and virus that causes it (now called SARS-CoV-1). There are two important dates to highlight on the timeline. SARS Outbreak: 2003 The words 'coronavirus' and 'coronaviruses' started becoming much more prevalent in Google's database in 2002, which corresponds with a sudden rise in the appearance of 'SARS'. Just as expected, 'COVID' doesn't appear at all. Google Ngram showing frequency of words over time for Coronavirus-associated search terms. CC BY 4.0 JV Chamary 'SARS' reached a peak in 2003, coinciding with the 2002-2004 outbreak. Following a steady drop, there was a slight increase in interest around 2013, which I would guess is related to marking the outbreak's 10th anniversary. Coronavirus Discovered: 1965 If you include 'SARS' in a search, 'coronavirus' and 'coronaviruses' are barely a blip. But as the graph that excludes SARS shows, the words in the database closely match the history of coronavirus discovery. In 1960, David Tyrrell isolated an unknown germ that causes the common cold, a strain named B814, which he described in the British Medical Journal in 1965. Tyrrell sent a sample to June Almeida, who in 1967 identified virus particles under a microscope, discovering the first coronavirus to infect humans. Tyrrell, Almeida and six other virologists wrote to Nature in 1968 about the viruses with a "fringe of projections" (now known as a crown of spike proteins) that resemble the Sun's corona, and suggested they be called "coronaviruses." Google Ngram showing frequency of words over time for Coronavirus-associated search terms. CC BY 4.0 JV Chamary Google Ngram has a few flaws, and researchers should be wary of its reliability. So long as you don't over-interpret the results, however, the tool provides a rough measure of cultural change through the frequency of word use. Based on the charts, my (unsurprising) conclusion is the world had lost interest in coronavirus by 2019, and it took a global pandemic for people to take note again.
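If you want the underlying numbers rather than the web chart, the Ngram Viewer can be queried programmatically. A word of caution: the JSON endpoint below is undocumented (it's what the web UI itself calls behind the scenes), so the URL and parameters here are assumptions based on how the viewer builds its requests and may change without notice. A minimal Python sketch:

```python
# Fetch raw Ngram Viewer data for 'coronavirus' (undocumented endpoint; may change).
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "content": "coronavirus,SARS",  # comma-separated phrases, as in the web UI
    "year_start": 1800,
    "year_end": 2019,
    "corpus": "en-2019",            # English corpus label used by the current UI
    "smoothing": 3,
})
url = "https://books.google.com/ngrams/json?" + params

with urllib.request.urlopen(url) as response:
    series = json.load(response)

for entry in series:
    # 'timeseries' holds one relative word frequency per year in the range
    print(entry["ngram"], "peak frequency:", max(entry["timeseries"]))
```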
afd5c6252562a96e1cff24eaad166da7
https://www.forbes.com/sites/jvchamary/2021/02/27/coronavirus-double-masking-mask-brace-fitter/
Which Works Better: A Mask Brace Or Double-Masking?
Which Works Better: A Mask Brace Or Double-Masking? Anthony Fauci, double masking. Getty Images Wearing a face mask to slow the spread of Covid-19 has become common practice. But some people still wear masks incorrectly or use one that doesn't properly fit their face, which lets aerosols that carry Coronavirus particles leak out. One solution to that problem is 'double-masking' — wearing two masks at once. Another is using a 'fitter' or brace to secure a mask more snugly to your face. Both approaches aim to prevent air from escaping around the edge of a mask and force tiny droplets to be blocked by the material, which serves as a filter. What's the scientific evidence that double-masking and mask braces stop air leaking? Which approach is best? And how do they compare with N95 respirators that filter 95% of airborne particles? Double-Masking In February 2021, the Centers for Disease Control updated its guidance on effective masks, which now includes the recommendation that people 'add layers of material to a mask'. The agency suggests either using a cloth mask with multiple layers of fabric or wearing a disposable mask underneath the cloth mask. When asked about double-masking on 25 January, the President's Chief Medical Advisor and director of the US National Institute of Allergy and Infectious Diseases, Anthony Fauci, said it's "common sense that it would be more effective." Common sense isn't science though. To date, only one study has tested the effectiveness of double-masking. Led by CDC epidemiologist John Brooks, the experiment involved measuring the amount of particles that reached a dummy head from a simulated cough, which showed that a double-mask combination blocked over 85% of particles, compared to 56% and 51% respectively for the surgical ('medical procedure') mask and cloth mask alone. The study was published in CDC's own digest, Morbidity and Mortality Weekly Report, which isn't peer-reviewed by external researchers. So while the results seem conclusive, the research would have had greater credibility if it had been submitted to an independent journal and/or a preprint server like medRxiv. Mask Brace Several studies support the hypothesis that modifications like braces/fitters enhance the effectiveness of filtering. One paper, published in JAMA Internal Medicine and led by biologist Phillip Clapp, measured the efficiencies of masks and improvised face coverings. It showed that modifications enhanced the filtration of surgical masks from 39% to 80%. Another recent study (unpublished but in medRxiv) used salt to mimic aerosol transmission inside a classroom and equations to estimate the risk of infection for teachers and students. Led by mechanical engineer David Rothamer, the study found that most masks fit poorly and allowed over 50% leakage, but braces can bring a mask's filtration efficiency back up to its expected performance level. Consumers can buy a special mask brace, but they're relatively expensive: for example, Fix the Mask sells a two-pack of silicone braces (designed by a former Apple engineer) for $30. The company offers a downloadable template for making a DIY brace from a rubber sheet... if you have such material around your house. The 'double eights mask brace', a fitter made from three rubber bands and a paperclip.
Runde et al (2020) JACEP Open / Wiley But you can make a brace on a budget, from three rubber bands and a paperclip, as demonstrated by doctors led by Daniel Runde in Journal of the American College of Emergency Physicians Open. The study concluded that the 'double eights mask brace' "does not create an N95 equivalent in terms of filtration" but "would offer improved protection from airborne viruses when worn with a basic surgical mask." Which is better? In terms of being effective at blocking aerosols, there's far more evidence in favor of mask braces. That doesn't mean double-masking doesn't work, but the approach needs more studies to support the claim that it improves fit and filtration. There are other factors to consider besides effectiveness, however. Using two masks is wasteful, for example, while wrapping rubber bands round your head is like fixing a device that's inherently broken instead of replacing it with one that functions — in this case, a mask that fits properly in the first place. Another factor is compliance. Some people refuse to cover their face — if you can't get those folks to comply with basic guidance to wear one mask, good luck getting them to wear two. Even with the emergence of new Covid variants, it may be tough to convince people to take action: in a poll of 1984 Americans, while 61% support double-masking, only 40% do it, despite thinking it's a good idea. Telling the general public they need better masks might alienate some people to the extent that they don't bother wearing anything, which is worse for public health. As epidemiologist Saskia Popescu says, "Focus on finding one quality mask that meets the mark, versus trying to layer masks and create discomfort, difficulty breathing... or frustration that might lead to no mask at all."
c1bc9bbbb5e919bef006d85152cf949a
https://www.forbes.com/sites/jvchamary/2021/03/27/coronavirus-asymptomatic-proportion/?sh=50d0a5e6186b
At Least A Third Of Coronavirus Infections Are Asymptomatic
At Least A Third Of Coronavirus Infections Are Asymptomatic Coronavirus temperature checkpoint. getty The pandemic has corrected several common misconceptions about health, like the assumption that people only catch and spread infectious disease when they seem sick. At one extreme, the measles virus always reveals signs of infection, whereas at the other end, many of those infected with polio virus show no clear symptoms. Where does the SARS-CoV-2 coronavirus sit on that scale of causing symptoms? Researchers have now estimated the proportion of infected people who never develop symptoms of Coronavirus Disease. The research by Daniel Oran and Eric Topol from Scripps Research Translational Institute in La Jolla, California, involved a systematic review of reports that tested for Covid. Those tests either looked for current viral infection through PCR (polymerase chain reaction) analysis, or for past infection, as indicated by antibody testing — the presence of antibodies against the SARS-CoV-2 virus. Oran and Topol's review, published in Annals of Internal Medicine, found 61 reports, 43 of which used PCR after collecting nose/mouth swabs, and 18 that had performed antibody testing. The study aimed to count the number of people who never have symptoms of Covid — asymptomatic cases — and exclude those who initially show no signs but then eventually develop the disease. As it's only possible to identify the latter — presymptomatic cases — in retrospect, the study only considered reports with a follow-up period that tracked whether Covid appeared later. Among the reports, the best data came from large-scale surveys in England and Spain, which tested antibodies in over 365,000 and 61,000 patients respectively. Results from those two surveys were almost identical: 32.4% of England's cases were asymptomatic, while Spain's figure was 33%. The review therefore suggests that at least one-third of Coronavirus infections are asymptomatic. There are a few caveats. False positive results from PCR and antibody tests can lead to an overestimate of Covid cases, for example, whereas false negatives lead to an underestimate, so the working assumption is that the two roughly cancel out. A survey will also rely on accurate self-reporting by its participants, which requires people to try and recall whether they experienced any symptoms weeks or even months earlier. It's also important to note that the results don't mean that only a third of the infected individuals you might encounter will show no symptoms. Remember that many people are asymptomatic during early infection but turn out to be presymptomatic cases. As a consequence, the proportion of people who are walking around without any apparent symptoms is actually higher than one-third. A few studies of both asymptomatic and presymptomatic cases have found that those infected people contribute more than 40% to Coronavirus transmission.
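To see why the review lands on 'at least one-third', it helps to pool the two big antibody surveys by sample size. This is rough back-of-the-envelope weighting using only the figures quoted above, not the method used in the Annals of Internal Medicine paper:

```python
# Sample-size-weighted pooling of the two antibody surveys (rough estimate).
surveys = {"England": (365_000, 0.324), "Spain": (61_000, 0.33)}

total_people = sum(n for n, _ in surveys.values())
pooled = sum(n * rate for n, rate in surveys.values()) / total_people
print(f"{pooled:.1%}")  # ~32.5% -> roughly one-third of infections asymptomatic
```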
fdb8ef303607a529be2e2a8286ddf8d5
https://www.forbes.com/sites/jvchamary/2021/03/29/what-is-life-definition/
What Is Life? Here’s Why There’s Still No Definition
What Is Life? Here’s Why There’s Still No Definition Coxiella burnetii bacteria (green) inside a Vero cell (orange). CC BY 2.0 NIAID Asking a biologist to define 'life' is a bit like looking-up the word 'definition' in a dictionary — it's as if you're questioning their very existence. Biologists like myself don't agree on what life actually is, in no small part because definitions don't encompass its diversity, especially at the edges. One of my favorite science writers, Carl Zimmer, has just published a book entitled Life's Edge: The Search for What it Means to be Alive, along with an excerpt on the frustrated efforts to develop a universal definition of life. I'll discuss the new book below, but given that my article on the challenge of defining life is now two years old, let's first revisit the fundamental issues. Scientific criteria For centuries, scientists and philosophers have proposed hundreds of definitions of life. None have been widely accepted. In 2011, biophysicist Edward Trifonov tried to find consensus among 123 definitions by grouping the words they contained into clusters that had similar meanings. He then combined the most-frequently used word from each cluster to produce a 'minimal' definition: Life is self-reproduction with variation. One criticism of Trifonov's definition is that it defines life as the outcome of replication and mutation, two processes that create the variety that lets nature 'select' which individuals live long enough to reproduce. It's therefore missing evolution by natural selection (adaptation or 'survival of the fittest') — the vital process that enables a population of organisms to adapt to an ever-changing environment. Another problem with a concise or minimal definition is that it's easy to come up with exceptions. A computer virus can also copy itself and mutate, for example, while some organisms reproduce by making clones that are genetically identical. Textbooks traditionally describe life with a list consisting of two kinds of essential features (properties): physical characteristics such as cells and DNA — what life has — and processes like growth and reproduction — what life does. But, as with Trifonov's definition, the trouble with such a list definition is that you can easily think of numerous counter-examples that don't meet all the so-called essential criteria. Some biologists don't believe that a virus is alive because it can't reproduce outside a host cell, for example. My argument against such belief is that we all accept that a bacterium is alive, and yet parasitic bacteria like Coxiella burnetii can't live independently either — they're obligate intracellular parasites. And that's just life as we know it, here on Earth. If you're searching for life on other planets, you have to drop almost every item from a list — including cells and growth — as you obviously can't see such features from trillions of kilometres (light-years) away. As a consequence, astrobiologists ignore terrestrial signs of life and instead try to detect 'biosignatures' — objects, substances or patterns that could have been produced by extraterrestrial life-forms. For life that's even more alien, on our world or elsewhere, you enter the realm of science-fiction, such as artificial intelligence (AI) in the form of characters like Data from Star Trek.
Philosophical arguments The excerpt from Life's Edge mentions that asking 'What is life?' can be compared to asking another question that's hard to answer: 'What are games?' That comparison is from a philosophical concept devised by Ludwig Wittgenstein, who claimed that some things don't have a single feature that's common to all, but overall they share a whole series of features, a kind of 'family resemblance'. Inspired by Wittgenstein, a multidisciplinary team at Lund University in Sweden (mostly philosophers, theologians and other non-biologists) compiled a selection of various things — everything from animals and plants to viruses and snowflakes — and a list of features associated with living things, such as DNA and metabolism. The Lund team then carried out a survey of scholars, asking the participants to tick boxes on a checklist of (what they considered relevant) features for each thing. The study used statistical clustering to sort things into 'families' that share common features: mice, birds and other animals with a brain ended up together, while the brainless plants and bacteria fell into a different cluster. Any decent biologist wouldn't be surprised by those results, and might add that the approach was naive. The mistake the Lund team made was to overlook that most people look at the natural world with a human-obsessed, 'anthropocentric' perspective. That explains why the things that most resemble us, with brains, clustered together. The approach was biased by the initial choice of features. Take the SARS-CoV-2 coronavirus, say, which has RNA (not DNA) for its genetic material: if viruses could send surveys to one another, the study's results would have been very different. Zimmer's new book also includes a profile of philosopher Carol Cleland, who has published dozens of papers on detecting or defining life, as well as a 2019 book called The Quest for a Universal Theory of Life. At one 2001 meeting, Cleland told an audience dominated by scientists that "the whole definition project was worthless." Zimmer paints her as a lone radical, but Cleland isn't alone in her opinion that definitions are a waste of time. In 2011, philosopher Edouard Machery used a Venn diagram of features to try and identify overlap between a hypothetical evolutionary biologist, an astrobiologist and an AI researcher. Machery concluded that "the project of defining life is either impossible or pointless." According to Cleland, "Definitions are not the proper tools for answering the scientific question 'What is life?'" Note that she specifically says scientific, which I take to mean definitions used in science, like a working definition that astrobiologists might use so that everyone's on the same page when searching for alien biosignatures. The scientist in me mostly agrees with Cleland's philosophical position but, as a science communicator, I also think there's another important factor to consider. Public understanding Imagine that a child is just discovering nature and asks their parent or teacher "What is life?" Responding with "Defining life is pointless and worthless" would not only make you kind of an asshole, it might also kill the kid's curiosity. Better to give them a clear statement first, then add caveats to encourage further investigation later. While some philosophers don't want a universal definition and many scientists don't really need one, there's a third group of people who do need a definition of life: the general public. For the public, the question of 'What is life?'
revolves around language and the meaning of words — it's a semantic issue. The semantics aren't trivial either. Dictionary definitions are wrong because they send people round in circles. A dictionary will use words like 'organisms', so a statement's logic is circular (a tautology) because it uses an example of life to define life, which is ridiculous. The public needs a folk definition — one that's mostly right and makes intuitive sense but removes the logical circularity found in dictionaries. I'd love to see a linguist produce a sentence that works. Until then, I'd humbly suggest using my 'popular definition': Life is an entity with the ability to adapt to its environment. If you think you have a better one, direct it to me on Twitter (@jvchamary). Just don't ask me for a biologist's definition.
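For the curious, the Lund team's checklist-and-cluster method described earlier is easy to mock up: score each thing against a feature checklist, then group by similarity. Here's a toy Python sketch; the feature matrix is invented for this example, not the study's actual data, but it shows how including a feature like 'brain' biases the resulting families.

```python
# Toy version of checklist-based clustering, as described in the Lund study.
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

things = ["mouse", "bird", "plant", "bacterium", "virus", "snowflake"]
#           DNA  metabolism  reproduction  brain   (1 = box ticked)
features = [[1,   1,          1,            1],    # mouse
            [1,   1,          1,            1],    # bird
            [1,   1,          1,            0],    # plant
            [1,   1,          1,            0],    # bacterium
            [1,   0,          1,            0],    # virus: no metabolism of its own
            [0,   0,          0,            0]]    # snowflake

# Jaccard distance compares which boxes are ticked; average-linkage clustering
# then groups things whose checklists overlap most.
clusters = fcluster(linkage(pdist(features, "jaccard"), "average"),
                    t=4, criterion="maxclust")
print(dict(zip(things, clusters)))
# Brained animals form one 'family', brainless plants and bacteria another:
# the anthropocentric choice of features drives the result, as argued above.
```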
e73d8097b2e30bcda29e7862dc658fd8
https://www.forbes.com/sites/jvchamary/2021/03/31/coronavirus-evolution-in-action/
Coronavirus Variants Are Evolving – Even Inside A Single Person
Coronavirus Variants Are Evolving – Even Inside A Single Person Cancer nurse giving drugs to a chemotherapy patient. Getty Evolution is something many people visualize as a process that occurs outside, in your surroundings, but it also happens in the environment of the human body. The SARS-CoV-2 coronavirus is a clear example of evolution-in-action, and studies have now revealed how it causes new variants to emerge within a single person. Mutation and Variation Like a ticking clock, mutations appear continuously over time. They're the fuel that ultimately powers evolutionary change in everything from viruses to humans. Although some mutations are helpful because they make a species more successful — so SARS-CoV-2 gets better at infecting us or escaping our immune system, for instance — most mutations are harmful as they'll stop an organism's genes from working. Mutations are generally bad because "If it ain't broke, don't fix it." Mutations occur during reproduction, such as when cells divide or a SARS-CoV-2 virus particle replicates inside a cell. It's often due to genetic material — DNA or RNA — being copied incorrectly. When that copying process introduces an error or typo, like substituting the chemical letter 'C' for 'T', it's a 'point mutation' or substitution, while missing letters are called 'deletions'. Mutations create mutants, organisms that differ from a common 'wild type'. When humans judge that a mutant is significantly different — say, because it escapes immunity — then it's considered a new strain or variant, as in the case of the B.1.1.7 lineage first identified in the United Kingdom (or 'UK variant'). FIND OUT MORE: What's The Difference Between A Variant, Mutant And Strain? A new mutant will be genetically distinct from its parent — such as the virus version that initially infects you — but only by a few letters of DNA or RNA. The likelihood of creating a 'better' variant is a numbers game: the probability that a given mutation will help SARS-CoV-2 is extremely small, but it's not zero. While the chance may be less than one-in-a-million, thousands of virus particles are replicating in billions of cells inside millions of people, so the odds against such a 'rare' event shrink dramatically — to the point where it becomes very likely, even inevitable, that variants will emerge within the global human population (a back-of-envelope calculation at the end of this article shows why). Mutations accumulate relatively slowly. Among the trillions of virus particles in a single human body, the viral gene pool might only contain a couple of new mutations. That's over the course of a normal acute infection with SARS-CoV-2, however, which lasts two weeks on average with a short infectious window (when an infected person can potentially transmit the virus) of only a few days. Coronavirus can more easily accumulate mutations in one group, though: people with long-term chronic infections. That accumulation can happen if someone's immune system is naturally compromised because they have an underlying health condition, for instance, or because it's been artificially suppressed by drugs, as in chemotherapy. Natural Selection Nature selects among a variety of individuals based on their ability to survive and reproduce — whether that's prey that evade predators or viruses that escape an immune system.
That selective pressure from the environment is what can force a population of organisms to adapt, driving evolution by natural selection. Several studies have tracked the evolution of SARS-CoV-2 variants in chronically-infected people. In such cases, researchers took samples from each patient and read the sequences in the viral gene pool to detect the presence of new mutants as they emerged. Through repeated sampling and sequencing, the scientists identified variants that would provide the raw material for natural selection. One study, led by Adam Lauring from the University of Michigan in Ann Arbor, described the case of a 60-year-old man with a history of lymphoma — cancer of the lymph nodes, which prevents the immune system's B-cells from making antibodies. Over four months, the immunocompromised patient was in and out of hospital three times due to Coronavirus Disease, and that prolonged infection enabled a steady accumulation of mutations. Nine mutations became prevalent (or 'fixed') in the viral population between days 93 and 106. The fact that the man was repeatedly readmitted put other patients at risk of Covid, as he would have continued to shed virus particles. As the Michigan study concluded, "This case highlights challenges in managing immunocompromised hosts, who may act as persistent shedders and sources of transmission." Another study, led by Ravindra Gupta at Cambridge University, tracked SARS-CoV-2 evolution during treatment of an immunosuppressed man in his 70s. The patient's viral gene pool was sequenced 23 times over 101 days, so the fate of mutations could be followed in detail. He was treated with remdesivir (not effective) and convalescent plasma containing antibodies from someone who had recovered from Covid. FIND OUT MORE: The Strange Story Of Remdesivir, A Covid Drug That Doesn't Work Convalescent therapy led to the emergence of a variant with the D796H mutation and a deletion of two amino acids — ΔH69/ΔV70 — in the spike protein, which is what coronaviruses use to break into a cell. According to the study, that mutant became the dominant variant following competition among the patient's variants — evolution by natural selection. The Cambridge study also used artificial viruses to show that the D796H mutation made spike proteins less susceptible to being neutralized by a matching antibody but also less effective at invading cells, whereas the ΔH69/ΔV70 deletion seemed to compensate by restoring the virus's ability to bind a cell's surface. Interestingly, ΔH69/ΔV70 is also deleted in the B.1.1.7 variant, which seems to have 50-70% higher transmissibility compared to the wild-type virus. So, as in the immunosuppressed patient, the deletion might have been favored by natural selection because it made the variant more infectious, helping it spread. The good news, based on a study led by Tanya Golubchik of Oxford University, is that mutations that might help Coronavirus appear very rarely. According to the research, which used sequencing to measure genetic diversity across 1,313 British people, most people carried distinct variants — but only one or two per person. The Oxford study also examined transmission between people who come into regular contact — in the same household — and found that most variants are lost before they spread. That result suggests the vast majority of potentially dangerous new mutations are evolutionary dead-ends that are destroyed by the immune system. The environment inside you — the human body — can be too harsh for Coronavirus.
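To put rough numbers on the 'numbers game' described earlier, here's a back-of-envelope calculation in Python. The probability and counts are illustrative guesses, not measured values; the point is simply that a tiny per-replication chance multiplied by an astronomical number of replication events makes a 'rare' helpful mutation all but certain to arise somewhere.

import math

p_helpful = 1e-6                 # assumed chance one replication yields a helpful mutation
replications_per_person = 1e9    # assumed replication events over one infection
infected_people = 1e6            # assumed number of infected people

# Probability that at least one helpful mutation arises anywhere: 1 - (1 - p)^N.
# Work on a log scale because N is far too large to expand the power directly.
n_events = replications_per_person * infected_people
p_none = math.exp(n_events * math.log1p(-p_helpful))
print(1 - p_none)  # prints 1.0: emergence is effectively inevitable

Even if you make p_helpful several orders of magnitude smaller, the answer stays close to 1, which is why new variants keep appearing despite the long odds against any single mutation being useful.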
fc6e75611a90df95f7d893629d8388c4
https://www.forbes.com/sites/jvchamary/2021/03/31/the-coronavirus-variants/
To Understand Covid-19 Variants, Stop Saying ‘The’ Coronavirus
To Understand Covid-19 Variants, Stop Saying ‘The’ Coronavirus Rand Paul questions Anthony Fauci. Getty Images Precise language is crucial for the public understanding of science. Even a single word matters, which includes using the definite article – 'the' – before 'virus'. I recently watched Senator Rand Paul, a politician, question Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases (NIAID). During a heated exchange, Paul attacked Fauci for wearing a face mask despite the NIAID director having received a vaccine against the SARS-CoV-2 coronavirus. Paul referred to "reinfections" of previously-infected/vaccinated patients, asking for evidence to support the suggestion that people should wear masks into 2022. As Fauci responded, "When you talk about reinfection and don't keep in the concept of variants, that's an entirely different ball game. That's a good reason for a mask." He continued: "In the South African study, conducted by [Johnson & Johnson], they found that people who are infected with wild type and were exposed to variant in South Africa, the [B.1.351], it was as if they had never been infected before, they had no protection. So when you talk about reinfection, you've got to make sure you're talking about wild type." If the phrase 'wild type' went over your head, don't worry about it — you don't really need to know such technical terms, and even scientists haven't agreed on which words to use when describing distinct kinds of Coronavirus. READ MORE: What's The Difference Between A Variant, Mutant And Strain? The mistake Paul made was in using the word "reinfection". To state the obvious, reinfection only occurs if a person is infected more than once by the same virus. But 'the' virus refers to the whole SARS-CoV-2 species. If someone is infected by two different variants — wild type first, B.1.351 second — then that's actually two separate infections, not "reinfection". The error probably stems from the fact that many people have conceptualized 'the' virus as a single entity, when it's actually a population of viruses in which evolution is constantly causing new variants to emerge. READ MORE: Coronavirus Variants Are Evolving – Even Inside A Single Person Many media outlets continue to say 'the' Coronavirus. Until that stops, people will keep thinking of SARS-CoV-2 as a species without variants – and fail to understand infection.
10d32eb74799d1699de6d283503e1cbe
https://www.forbes.com/sites/jvdelong/2015/10/15/mcdonnell-and-the-supreme-court/
SCOTUS Should Hear Robert McDonnell's Case--And He Should Win
SCOTUS Should Hear Robert McDonnell's Case--And He Should Win Robert McDonnell, the Virginia ex-governor who was convicted on corruption charges in 2014, has asked the U.S. Supreme Court to hear his case. It is important that the Court agree, and that McDonnell win, not because of any deserts on his part, but as a step back from the frightening system of over-criminalization combined with random and politicized enforcement that is taking over the legal system. One can see why the jury convicted both McDonnell and his wife. For five weeks of trial, it was deluged with sordid details of gifts to the pair of them by a corporate executive angling for support. In particular, he would have benefited from a decision to put state research money to work testing his company's drug. The gifts themselves were not illegal under Virginia law, so, to find a federal crime, the government had to rely on various statutes prohibiting the receipt of favors in exchange for "official acts." However, nothing of any moment followed from the largesse. Whether by accident or design, McDonnell seems to have followed the advice of Jesse Unruh, a legendary California pol of the 1960s: "If you can't drink a lobbyist's whiskey, take his money, sleep with his women and still vote against him in the morning, you don't belong in politics." No actual votes or executive actions were involved, and the only quids the government could find for the company's quos consisted of a few trivial events, such as a pointless meeting with staff and attendance at a large reception. The conclusion is that the prosecutors, endorsed by the Fourth Circuit Court of Appeals (its opinion is attached to the petition to SCOTUS), are saying that anything a governor does that might promote the interests of a donor is an official act, because his expression of interest and friendship will influence others in the government even if he takes no real action. So if making a gift gets you an invitation to a reception or a wave in a crowded restaurant, the governor has just committed a crime. An irony of this view is that a Virginia governor, who must be interested in economic development, has strong reasons to call the products of Virginia companies to the attention of other parts of his administration regardless of any personal ties. So under the prosecutors' theory, it is a crime to spotlight the work of friends and supporters, but not that of others. An interesting view, but not quite congruent with politics or human nature. Other Courts of Appeals have tried to impose limits on the federal corruption statutes by cabining the concept of "official acts," but the Fourth Circuit would have none of it. Inviting the corporate people to sit next to the governor's wife at a political rally qualified as a corrupt act. By the standards of the McDonnell verdict, we had better start a massive prison construction program, because all politicians of all parties rather clearly belong in jail. In March 2014, six months before McDonnell was convicted, the Washington Post reported: For $100,000, you can have a private dinner with Virginia Gov. Terry McAuliffe and the first lady, participate in a roundtable discussion with the governor and sit down every month with "policy experts." McAuliffe (D) this week announced the formation of a political action committee, Common Good Virginia PAC. The announcement came with a list of events that donors may participate in for donations ranging from $10,000 to $100,000. Perhaps the U.S.
Attorney in Richmond does not read the WaPo, because this makes McDonnell’s sins look petty. But while McAuliffe skates (and Hillary Clinton, and Jon Corzine, and . . . , and . . .), the prosecutors wanted six and a half years in jail for McDonnell. (He got two.) McAuliffe commented on the McDonnell sentence: Today’s sentencing brings an end to one of the most difficult periods in the history of Virginia state government. Like many Virginians, I am saddened by the effect this trial has had on our Commonwealth’s reputation for clean, effective government. As we put this period behind us, I look forward to working with Virginia leaders on both sides of the aisle to restore public trust in our government. The restoration will be an uphill slog. Mother Jones, not exactly a member of the vast right-wing conspiracy, had this to say upon McAuliffe’s election: McAuliffe’s . . . primary role in politics for the past two decades or more has been raising money—most notably, for the Clintons. He cooked up the idea of essentially renting out the Lincoln bedroom during the Clinton administration as a fundraising vehicle, and he smashed all previous presidential fundraising records in the process. . . . That alone might not be enough to render him a distasteful political candidate. What's different about McAuliffe is his brazen mixing of his campaign fundraising activity and attempts to enrich himself personally. Many of McAuliffe's business deals have come about due to his place in the political cosmos, not because he possesses a wealth of business skill. That tangled history has linked him to a long list of unsavory characters. It is a bit difficult to sympathize with McDonnell, because he seems guilty of, at the least, incredible stupidity. For one thing, a cautious politician does not take personal gifts; he gets campaign contributions. Or sets up a family charity which then pays many of the pol’s personal expenses. Why didn’t McDonnell know this? Nonetheless, we are heading toward a system in which everything is illegal, and it is up to the prosecutors, in their untrammeled discretion, to determine who gets charged. To a discomfiting degree, we are already there. The idea of sending the whole political class to jail has its appeal, but one should resist the frisson of delight, because the realistic outcome will be total discretion on the part of prosecutors. It would behoove the SCOTUS to take a step back. McDonnell may not be very sympathetic, but his cause is.
7b3cb9fa2d7f0222ed6ef91d3765b4a5
https://www.forbes.com/sites/jvdelong/2016/10/21/originalist-sin/
Originalist Sin
Originalist Sin Last Monday, a document was released on the web, Originalists Against Trump, calling for the nation “to deny the executive power of the United States to a man as unfit to wield it as Donald Trump.” Its claim to attention lies in its roster of 54 signers. They are distinguished academics, lawyers, and all-round intellectuals associated with the “Originalism” legal school, the proposition that the only proper way to interpret the Constitution is by examining the meaning of the words as used by the drafters and understood by contemporaries. The opposing school is the “Living Constitution”, which believes that judges should have freer rein to adapt our great charter to the needs of their own time, as seen by them in their idiosyncratic wisdom. The debate can be conducted at a high level of sophistication (or not), and Wikipedia is useful for those who want a quick tutorial. For present-day purposes, the Living Constitutionalists endorse the Supreme Court’s continuing expansion of rights since the days of the Warren Court in the 1960s, especially the decisions creating a right to abortion and the more recent decisions making sexual freedom into a right. Originalists, finding no such rights in the Constitutional text or history, would leave such issues to the political process. On the other hand, Originalists are fierce defenders of the rights of free speech and bearing arms enshrined in the First and Second Amendments. Living Constitutionalists are amenable to diluting these to reflect new theories of social justice. Originalists are also open to paring back governmental powers somewhat, on the ground that the Constitution was not intended to be an unlimited enablement. The Living Constitutionalists regard the untrammeled state as a fine thing, unless it tries to interfere with sexual rights. Each side is associated with a political party, the Originalists with the Republicans and the Living Constitutionalists with the Democrats. From early in the campaign, Trump was suspected of being a secret Democrat, insufficiently dedicated to the causes of Originalism and limited government. To neutralize this concern, Trump released a list of 21 conservative potential nominees, all of whom should be acceptable to the tenders of the Originalist flame. He said: “This list is definitive and I will choose only from it in picking future Justices of the United States Supreme Court.” In the final debate, Trump reaffirmed this pledge, and his commitment to Originalism. Clinton, of course, is as far from this position as possible. Her view as expressed during the debate was summarized by Power Line: Clinton’s answer shows that she thinks interpreting the Constitution means “standing up on behalf of women’s rights” and “on behalf of the rights of the LGBT community.” It also means ignoring the rights of “powerful corporations and the wealthy.” In other words, cases should be decided based (at least in part, though Clinton never qualified her answer this way) on the gender, sexual preference, or wealth of the litigants, not on what the Constitution or applicable statute provides. The Constitution, in her view, is not a set of rules that applies equally to everyone, but rather a means of favoring members of groups that appeal to Hillary Clinton. It is a promise, as Power Line says, to nominate only judges who guarantee that they will ignore their Oath of Office.
Given the starkness of the contrast between the two positions, the Originalists Against Trump manifesto, and particularly its shrillness, presents a puzzle. On the key point of Trump’s list of nominees, it says “we do not trust him.” This is weak to the point of dementia. Trump made the promise to appoint only from the list to reassure Republican conservatives, especially supporters of Ted Cruz (e.g., me). He thanked the Federalist Society and the Heritage Foundation for helping him, and both are at the apex of the conservative hierarchy. Were he to break the promise, the nomination would be DOA on the Hill, and he would alienate forever an important wing of support. Whatever one thinks of Trump’s character, surely his self-interest would prevent this. Besides, it is passing odd to say “Trump says he will appoint good judges, but I do not trust him, so I support Clinton because I can believe her promise to appoint bad ones.” The bulk of the document assaults Trump’s character, saying his record shows him “indifferent or hostile to the Constitution’s basic features—including a government of limited powers, an independent judiciary, religious liberty, freedom of speech, and due process of law.” (No specifics are cited.) As for Hillary, the signers “are under no illusions . . . that Hillary would be any friend to originalism”, but they reassure us that “our country’s commitment to its Constitution is not so fragile that it can be undone by a single administration.” Again, one’s reaction is “what planet are you on?” Many of us have had our reservations about Trump, even as we hope these are mistaken, but it would be hard for him to outdo Clinton, and the Democrats generally, in contempt for the Rule of Law. The Democrats as a party are opposed to the guarantees of free speech and freedom of religion in the First Amendment and to the right to bear arms of the Second. They happily violate the spirit of the Fifth Amendment (against taking property) at every opportunity. As Kevin Williamson of the National Review (also a hive of NeverTrumpers, which seems odd) said today:  "In response to Citizens United, the Democratic Party has attempted something truly remarkable and flatly insane: the repeal of the First Amendment. Every Democrat in the Senate . . . [voted for this]." (This effort was three years ago, but the assault continues.) Obviously, these provisions of the Constitution will not be forthrightly repealed because of resistance in the states, but Clinton promises to do serious damage to the values they stand for as soon as possible by appointing Justices who will reinterpret the precedents that define their meaning. And, if the signers of the manifesto care about the Rule of Law, how can they ignore the damage to our freedoms done by weaponized agencies, selective criminal prosecutions, and imperial Executive Orders? Nor do the signers discuss the growth of the Administrative State. Antonin Scalia was a fine Justice in many ways, and a saint of the Originalist movement, but he was myopic on the growth of uncontrolled agencies. He wrote important opinions and articles endorsing the power of an agency to determine the scope of its own authority, to make binding determinations of facts, and to pick and choose among scientific principles, including junk science, in support of its preferred outcomes. Near the end of his career, Scalia came to wonder if all this was a good idea, but it was too late and he did little about it.
The Administrative State rampant is one of the biggest Rule of Law issues of our time, and so far Originalism has been largely absent (except for Justice Thomas). Perhaps the Constitution can survive four years of a Clinton administration, but many regard this as a very contestable proposition. So one is left with a puzzle. How can 54 distinguished persons (52 men and only 2 women – maybe women really are smarter) produce such a strange document? And they really are distinguished – the best-known public name is George Will, but the list includes Richard Epstein, arguably the most acute legal thinker alive today, and others of comparable caliber. No one I have talked with has a good explanation. One possibility is that the legal culture has become so poisonous that the signers believe they must protect themselves by signaling their virtue to the onrushing Clintonite hordes. Or perhaps they regard Clinton with total cynicism, assuming that her left-wing promises are bait for ill-informed Sanders voters, and that her governing style will be determined by the foreign kleptocrats and domestic corporate cronies so prominent in the Wikileaks documents. Another possibility is that a sense of moral superiority can turn into self-indulgence. Or pure snobbery may be at work; a group of distinguished intellectuals cannot bear to depend on a Trump, whom they regard as a low-rent thug and intellectual lightweight, as their champion. As for me, I will take my champions where I find them, and I would emphasize that we elect an Administration, not just a President. Far better that the Rule of Law be entrusted to Trump, abetted, as it will be, by Giuliani, Cruz, McConnell and the Senate, Ryan, the Federalist Society and the Heritage Foundation, than to Clinton and the Democrats. But that’s just me – I would rather preserve the Constitution than my own sense of purity.
0ca09e3b9c91255f450e79c4212948bd
https://www.forbes.com/sites/jwebb/2015/11/23/why-supplier-collaboration-projects-keep-hitting-the-wall/
Why Supplier Collaboration Projects Keep Hitting The Wall
Why Supplier Collaboration Projects Keep Hitting The Wall We are taught at business school that collaboration is good. Be nice to your suppliers, share a bit of information, give them some business and you’ll be rewarded with a treasure trove of goodies. Such partnerships are the way to securing a stable, long-term supply, greater innovation and even better quality products. But success stories are few. Many companies are disappointed by the results of their efforts to work more closely with suppliers. So what’s going wrong? A recent academic study may provide some answers. In ‘Why supply chain collaboration fails: the socio-structural view of resistance to relational strategies’, a team of authors in the journal Supply Chain Management argue that organizations hit walls of resistance (partly of their own creation) which prevent collaborations from succeeding. The problem is not with the idea of collaboration, but the way many have attempted to apply it. The solution lies in preparing your own organisation for the change. The authors (in all: Stanley Fawcett, Matthew McCarter, Amydee Fawcett, G. Scott Webb and Gregory Magnan) interviewed purchasing managers and suppliers from Europe and the US. And although they found a healthy appetite in business for creating collaborative relations, many were disappointed at the modest return on investment. Toyota led the way in creating a more collaborative approach to business. Its keiretsu philosophy bonded suppliers together into an interconnected and interdependent network of partners. The rapid rise of Toyota from a small local player to the biggest (and most profitable) car manufacturer in the world beguiled many management thinkers across the world. Its formula of building strong relationships with key suppliers, investing time and resource to deepen the strength of these ties, was viewed as an exemplar for all. In almost every procurement organisation, buyers reflect on their own supplier base and ponder which of these are ‘strategic’. The question for many is not so much ‘should we collaborate?’ as ‘with whom should we build a strategic alliance?’ The authors of this study find that many companies are almost doomed to failure before they even start thinking about strategic partnering. Within all organizations exists a 'wall of resistance' which prevents successful collaboration. There are two main resistors to change. Firstly, internal, or organizational, resistors deprive collaboration projects of a conducive environment. This can come about through structural inertia, a deep-seated conservatism which is present in most large organizations and is resistant to any form of change. Resistance can also be a social factor, in which individuals feel threatened that their territory is being encroached upon. Secondly, there is resistance which emerges from the process of collaboration itself. Companies struggle to manage the complexity of the new relationships, and harmonizing processes across two distinct organizations can prove too difficult. Often, buyers simply lack the skills to establish the governance structure for these new partners. Another area that the writers identified as a resistor was relationship intensity. This reflects the power imbalances between the partners, which can lead one side to behave more opportunistically. This is something that we have observed at Procurement Leaders. In our research, we find that managers are more likely than executives to declare a supplier ‘strategic’.
Consequently, the wrong suppliers enter the process as buyers seek the relative prestige of engaging in ‘high-level’ summits. The problem often lies in an organisation's capability to correctly segment its supplier base and enforce compliance with that process. This can come from a lack of skills (buyers do not know how to identify which suppliers to collaborate with) or of strategic capability (organisations lack this oversight). In either case, it’s important to note that it’s not the concept of collaborative projects that fails. Rather, it is the inappropriate or over-optimistic implementation of close relationships without laying the proper groundwork or building the capabilities needed to sustain these relationships. Buyers must be willing to invest in a targeted change management effort if they wish to forge productive collaborative partnerships. Recognizing and overcoming entrenched resistance to the new project is also vital to successful partnering. “Success comes only after managers persistently push on the flywheel to build momentum for transformation,” the study’s authors note. “With sufficient time, effort and forward motion, the momentum of the flywheel begins to help rather than hinder progress.” If a company wants to create fruitful collaborations, then it must be prepared to manage just as much change internally as externally with the supplier. Ultimately, the success of these supplier projects rests on overcoming resistance entrenched inside your own organisation.
036ce95b67e23021d7b0b4e5b2b72dbc
https://www.forbes.com/sites/jwebb/2016/07/28/anti-corruption-campaign-has-caused-chinese-slowdown-says-former-central-banker/
Anti-Corruption Campaign Has Caused Chinese Slowdown, Says Former Central Banker
Anti-Corruption Campaign Has Caused Chinese Slowdown, Says Former Central Banker A former policy-maker at the People’s Bank of China has identified the country’s anti-graft drive as the main factor behind its recent economic deceleration. “If you look at the slowing down of the Chinese economy, I would argue that the number one reason is not rebalancing,” Dr David Li told Channel News Asia, but rather it “is that policy makers are trying to clean up a lot of bad practices of local government”. Here, 'bad practices' is code for 'corruption'. By the standards of most Western countries, China is doing relatively well. The IMF forecast the country to grow by 6.5% in 2016. However, this is far from the 10% growth rates that the state was regularly posting in the 1990s and 2000s. Indeed, given its high inflation rate and increasingly vocal population, the elite aim for a certain minimum growth rate to maintain political as well as economic stability. Much of the economic debate relates to the country ‘rebalancing’: that is, the attempt by the nation’s leaders to convert the Chinese economy from an export-orientated manufacturer to a larger consumer economy on conventional Western lines. Some argue that this transition is draining mental energies from business growth to complex economic restructuring. Dr Li is not one of these people. He believes that President Xi Jinping’s campaign to reduce corruption is eroding economic activity. Although the argument may seem counter-intuitive to economists, who are routinely taught that corruption is bad for business, within the Chinese context Li’s remarks may find some purchase. The country's businesses are sustained by guanxi networks. These are close inter-personal ties between businessmen that allocate resources and forge contracts in secret dealings. Such networks can even trump laws. As one Chinese businessman observes, “in China, relationships are the law”. Inevitably, such close workings can give rise to corruption. Indeed, more cynical observers believe that the bedrock of the economy is the network of bribery, patronage and nepotism. Removing this system will invariably have an impact upon production. Li was correct in his observation that “corruption was helping the economy grow, to be very honest.” However, the question is more a matter of sustainability. Even Li acknowledges that the local population is growing tired of the shady deals by which their lives are arranged by a cabal of closely-connected businessmen. Foreign direct investment is also checked by overseas businesses fearing extortion or exposure to prosecution under the US’s stringently enforced Foreign Corrupt Practices Act. In the short term, corruption is an excellent fix: it solves a business problem fast and gets production moving. But the long-term effect of trading in an atmosphere of bribery will sap investors' will to risk their money on the whims of greedy public officials, or to enter the market at all if they lack the necessary political contacts. Corruption is a barrier to entry and, unless China fights graft, it will continue to suppress the country's long-term growth ambitions. We can only hope that Dr Li’s views are not widely held within the People’s Bank of China.
84df57daef37f777874b3dd11dfebc85
https://www.forbes.com/sites/jwebb/2016/10/31/procurement-or-supply-chain-whats-the-difference-and-should-there-be-a-difference/
Procurement Or Supply Chain? What's The Difference? And Should There Be A Difference?
Procurement Or Supply Chain? What's The Difference? And Should There Be A Difference? When talking to executives within the space, the conversation generally begins with definitional matters: are we talking logistics here? Or contracts? Which hat should I wear? But how different are these roles? And how different should they be? I was speaking recently with a salesman from a technology supplier who shared with me his difficulty in working with large organizations. He sells services of interest to both supply chain teams and procurement departments. However, he only sells to one department. And when the product is installed, the information is not shared with the other. Rarely does he sell to both simultaneously. In fact, it takes some years for these walls to come down. Once the divisions are erased, he believes his technology can start to add real value to his client. It’s an interesting side-effect that a technology sold to a single department can actually help bring the company together and challenge the silos under which it labors. My contact believes that it is his tools that allow the organization to perceive the similarities in roles and commence an entirely new way of working together. For the first time, the departments perceive their overlapping interests. Perhaps the difference between 'procurement' and 'supply chain' may not be so relevant in practice. What do these terms mean? Generally speaking, supply chain refers to the post-contractual phase, covering logistical issues and matters relating to suppliers in the lower tiers (the suppliers of the suppliers). Procurement is often considered pre-contractual, regarding sourcing and negotiation. However, as companies have grown more advanced in their capabilities, the differences between these two practices have blurred. Those looking for new suppliers are thinking more deeply about post-contractual realities, and logisticians are beginning to feed back to those engaged in sourcing. The lack of oversight also reveals many other issues within organisational governance. Production departments are rarely plugged into supplier data. Unexpected delays to vital inbound orders often become known to production teams only when it is too late to enact contingency plans. The solution to many of these problems is touted as ‘cross-functional teams’. These teams collect representatives from a number of functions and departments in a single task-group to work on a shared objective. Previously, these were known as committees, but that is almost a dirty word in business parlance, so a more convoluted alternative replaced the term. The cross-functional teams are tasked to share information and co-ordinate activities between different departments. When managing organisations of 10,000 heads spread over dozens of countries, this can be a significant challenge. However, despite the name change, the groupings can often suffer from the same failings as committees: lack of direction, endless deliberation and little responsibility. An alternative answer to the problem of the silo may be to drop the practice of dividing up supply chain and procurement functions as separate beasts. Perhaps this way we can encourage more cross-functional thinking by challenging the view of procurement and supply chain management as separate entities and begin to merge the two.
Procurement and supply chain management may be different components in a wider value chain, but they speak to the same aim: managing third-party relationships in an efficient and effective manner. Means to create value for the organization may stem from supplier development, logistical re-engineering or perhaps a more collaborative approach to negotiation. The company bottom line does not register who delivered the value, only how it was created. Procurement and supply chain should be considered part of the same operational focus. To understand one without the other is unenlightened at best and, at worst, may be a lost opportunity.
e105e9d03db5997da1f3405db1bcbd6b
https://www.forbes.com/sites/jwebb/2017/05/30/in-defense-of-outsourcing-the-commonplace-business-practice-has-improved-our-lives/?sh=597236f830c8
In Defense Of Outsourcing: The Commonplace Business Practice Has Improved Our Lives
In Defense Of Outsourcing: The Commonplace Business Practice Has Improved Our Lives Bank employees display slogans during a rally outside the Central Bank's main office to condemn the 12-year-old circular by the Central Bank legalizing outsourcing and contractual hiring in the banking industry Wednesday, April 24, 2013, in Manila, Philippines. The protest, led by the Alliance Bank and Financial Unions, demanded the circular not be implemented, which they alleged has resulted in retrenchment and the loss of bank employees' job security. (AP Photo/Bullit Marquez) Often, when a business faces a crisis, critics are quick to point to outsourcing as the cause. But does this commonplace practice deserve such negative press? Here are some reasons to think more positively about outsourcing. The British Airways episode recently cast in sharp relief contrasting views on outsourcing. Over the weekend, one of the airline’s data-centers was overwhelmed by a power surge. The systems were overloaded and technicians took days to rebuild the database. Meanwhile, thousands of flights were cancelled, 75,000 passengers were stranded and the company’s CEO was desperately firefighting before the world’s media. The press was quick to condemn BA’s cost-cutting, and specifically its outsourcing, as the chief contributory factor. The GMB union attacked the airline’s history of sending British jobs overseas and denounced the senior management as “just plain greedy”. This evokes the widely-held belief that outsourcing equates to evil. The notion, within the press, is rarely challenged. Some years ago, Procurement Leaders conducted consumer research on public attitudes towards supply chain management. In a survey of 300 consumers, based in Australia, the UK and the US, 62% agreed with the statement that “I think it is wrong when the branded company is not the manufacturer.” This perspective is sharply at odds with the current structure of global trade. I found only one voice in the sample that expressed a morally neutral attitude towards outsourcing: “Modern businesses sometimes are forced to outsource to save on cost and availability in order to sustain operation and that is understandable,” said one US-based respondent. Throughout the study, we found a surprisingly low tolerance for any sniff of outsourcing within production. This reflects the negativity that prevails in the press. Few positive outsourcing stories are reported in the media, but failures are splashed across the business pages. Even the Economist (among the most pro-business newspapers in print) has wondered whether outsourcing has gone too far. Business generally ignores this discussion and continues to outsource more functions. In the Arvato Outsourcing Index, for instance, Q1 2017 showed the strongest quarter of growth in the UK outsourcing market in five years. There are two problems with the word ‘outsource’. Firstly, it is often confused with ‘offshore’ (a subject for a different conversation). Of course, aspects of spend can be shipped to a third party that is based overseas. We find that this is highly effective in cutting short-term costs. But this is not a necessity. There are numerous examples of companies outsourcing to a local provider. This is more a function of the management seeking to shift focus onto core activities. A second problem with the O-word is that it is often seen as management-speak for ‘job cuts’. The practice is seen as a rapid means to shed costs under the guise of a third party that has supposed ‘expertise’ in the area.
Opportunists have used the tool to generate short-term wins, earning a personal bonus for the success, then moving on to the next contract before problems materialise. These failures seem to highlight poor implementation. Doing anything badly will result in a bad output. This speaks more of execution than the underlying idea. As with any business decision, managers must weigh pro against con. Outsourcing disrupts operations, can result in lost knowledge and can undermine the morale of staff. But there are significant benefits to be won as well. Outsourcing has dramatically transformed the global economy. Our jobs, consumption patterns and entire lifestyles are partly shaped by the rippling effects of past outsourcing deals. The low costs offered by outsourced contracts – vilified by the press at the time – have streamlined the structure of the economy. Apple, for instance, does not make any of its own products. It has outsourced production to companies based in China. This has allowed the California-based company to focus on enhancing its competitive advantage: design and user experience. Worrying about production lines, operational issues and logistical bottlenecks would detract from the creative atmosphere that the tech company currently enjoys. Converting it back into a manufacturer would likely kill off Apple’s entrepreneurial and artistic spirit. It would be imprudent to attribute Apple’s success solely to outsourcing, but it has had a positive impact. In fact, if we look at any established company, its operating model has been fundamentally recast. A century ago, Irish brewer Guinness owned its own farms and made its own barrels. Now the global beer behemoth is considering outsourcing its brewery in Dublin as its brands enter more markets. The greater focus that outsourcing affords many companies can unburden them of operational drudgery and catapult them to international and enduring success. The benefits accrue to shareholders, but also to the workers who hold high-value jobs at companies in which they feel pride. Consumers, in addition, enjoy lower prices. Moreover, the fragmented supply chain that necessarily results from outsourcing provides supporting jobs across the world for a workforce that is increasingly specialised and more productive. Outsourcing can cut jobs in one part of the economy, but it also generates work elsewhere that can create more value. Undeniably, there are examples of failure. This is true of any business practice. There is bad accounting, HR or team leadership, but project failures do not suggest that accounting itself is to blame. Rather, it’s the people who misapplied it. Similarly, with outsourcing, there are plenty of examples of success that we simply take for granted. There are many reasons to defend the practice within business, and some arguments for feeling grateful for the way it has enriched the lives of many.
ec88cab2cbbd13ed5e3aaa6d3d1b7422
https://www.forbes.com/sites/jwebb/2017/07/26/norway-to-launch-worlds-first-automated-container-ship-in-2018/
Norway To Launch World's First Automated Container Ship In 2018
Norway To Launch World's First Automated Container Ship In 2018 The world’s first automated container ship is currently being developed by Norwegian fertilizer producer Yara in partnership with Kongsberg, a maritime engineering group. Their ship ‘Yara Birkeland’ is set to launch in 2018. The boat’s capacity will exceed 100 containers. Although the project is scheduled to be completed in 2018, a phased period of partially crew-operated service will continue until 2019. By 2020, the ship will be entirely automated and the bridge will be brought on land. Once the vessel enters this phase, it is expected to save 90% in operating costs through reduced crewing. It also has the potential to deliver substantial environmental savings. The boat has been termed ‘the Tesla of the Seas’ because of its battery-operated engines, and the firms believe that the craft will replace some 40,000 truck journeys a year, along with their carbon emissions. Norway has already gone some way to solving the easier problem of automated ferries. In January 2018, the country will open its first computer-operated ferry lanes, between Anda and Lote. The ferries will also be 100% battery powered, reducing carbon emissions to zero. A captain will be present aboard the ship, supervising the journey and taking control of the final docking manoeuvres. The potential for automated logistics is enormous. Craft continually navigate the world's shipping lanes. The job of piloting these vessels can be dull, leading to stressful working conditions for the crew. For the more demanding passages, specialized pilots are brought in to navigate the ship and crew through challenging straits. The long ocean-bound routes are generally undemanding and perhaps ripe for automation. In this regard, moving shipping containers may be considered similar to operating rail freight, as a job that is generally procedural and predictable. That said, ships, unlike trains, do not operate in a closed environment. The open seas are an unpredictable and even dangerous place. Only recently, we have seen an uptick of piracy along the Somali coastline. An unmanned boat may be an even more tempting proposition to disparate pirates along the lawless fringes of shipping lanes. As such, it appears that the current routes taken by these craft will be relatively short hauls through politically stable waters. Over the longer term, we may see more varied solutions. As with pilots, security services can be flown in for the short passages through dangerous zones. Alternatively, the lack of potential hostages may even reduce the interest in targeting these craft. Further into the future, say in twenty to thirty years, we may see the global supply chain entirely automated. Once the major operators, such as Maersk, see that automated shipping is a safe and efficient means through which deliveries can be made, we can expect rapid and significant investment. If these automated vessels lower costs, we may see an entire industry transformed within a few decades.
72c0b94ea7fb24bb35d6a57f640d110a
https://www.forbes.com/sites/jwebb/2017/09/29/elon-musk-promises-a-transport-revolution-with-his-new-rockets-and-this-time-its-on-earth/
Elon Musk Promises A Transport Revolution With New Rocket (And This Time It's On Earth)
Elon Musk Promises A Transport Revolution With New Rocket (And This Time It's On Earth) ADELAIDE, AUSTRALIA - SEPTEMBER 29: SpaceX CEO Elon Musk speaks at the International Astronautical Congress on September 29, 2017 in Adelaide, Australia. Musk detailed the long-term technical challenges that need to be solved in order to support the creation of a permanent, self-sustaining human presence on Mars. (Photo by Mark Brake/Getty Images) Any two points on earth would be within reach in under an hour if Elon Musk is able to pull off the plans he laid out Friday. The billionaire believes a new reusable SpaceX rocket he intends to build – the BFR – has the potential to launch communities to Mars, as well as reshape transportation closer to home. Travelling at 18,000 miles per hour, the rockets promise fast connections between key economic hubs. In the future, Musk believes, passengers will board rockets in New York, traverse 7,000 miles and land in Beijing 39 minutes later. Between 80 and 200 people could blast from Hong Kong to Singapore in 22 minutes. Los Angeles to Toronto in 24. Dubai will be reachable from London within half an hour. The earth-bound logistical potential of such speeds is significant. Does the airline sector have anything to fear from SpaceX’s rockets? The practical elimination of time-distances between key centres would render its business products obsolete. One might assume the price-tag for a rocket ride would be suitably astronomic, though Musk claimed in an Instagram coda that the cost would be no higher than a full-fare economy ticket. And there are opportunities for saving money elsewhere. A flight from the Eastern Seaboard to China would make a day-trip entirely plausible. Hotels, and the extra time travelers need to recover from jet-lag, would become unnecessary. Indeed, the hospitality trade would face a shrinking market as its corporate customers jet home before they have a chance to grow hungry. A similar upheaval followed the initial introduction of airlines in the 1930s. Although ocean-bound liners ship fewer passengers than in their heyday, the industry recast its boats as hosts for pleasure cruises. Perhaps today's aeroplanes will one day redefine themselves as offering ‘slow’ journeys above the clouds, affording relaxed tourists a restful view of the skies. Time-poor businesspeople, on the other hand, will shuttle back and forth between distant offices on fast, fiery rockets. These earth-shattering revolutions are a side-note to Musk’s main ambition: the creation of a multi-planet species hopping between home, Mars and other far-flung satellites. Although the rockets are designed to generate huge thrust, economy is at the heart of their specification. Engines are re-usable and built to consume modest quantities of expensive fuel. These features create a potentially profitable basis for inter-continental transportation. The objectives are characteristically bold for a Silicon Valley magnate, but Musk’s timelines and details are also typically hazy. He wants to propel humans to Mars by 2024, but intercity flightpaths are without deadline. If the busy entrepreneur builds such craft, the profits released from re-engineering the earthly economy could fund his dreams of exploring the red planet. But that's a big if.
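As a quick sanity check on the quoted figures, the cruise arithmetic is simple enough to run yourself. The Python sketch below assumes the 7,000-mile New York to Beijing distance and the 18,000 mph top speed given above; the gap between the roughly 23-minute cruise and the advertised 39 minutes presumably covers ascent, re-entry and the slower phases of flight.

distance_miles = 7_000   # quoted New York to Beijing distance
top_speed_mph = 18_000   # quoted top speed

cruise_minutes = distance_miles / top_speed_mph * 60
print(round(cruise_minutes, 1))       # about 23.3 minutes at full speed
print(39 - round(cruise_minutes, 1))  # about 15.7 minutes left for ascent and descent

Similar arithmetic fits the other claims: Hong Kong to Singapore is roughly 1,600 miles, so the 22-minute figure likewise implies that most of the trip would be spent accelerating and decelerating.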
9020bf42682f19e4f55c9daadda65cf3
https://www.forbes.com/sites/kaeliconforti/2020/04/29/4-things-to-consider-before-you-do-the-australian-working-holiday-visa/
4 Things To Consider Before You Do The Australian Working Holiday Visa
4 Things To Consider Before You Do The Australian Working Holiday Visa Farmwork isn't the only thing you can do with a Working Holiday Visa in Australia. Getty File this under “Things to do when it’s safe to travel again.” While we are in the midst of truly uncertain and unprecedented times thanks to the COVID-19 pandemic, it’s never too early to start planning your next great adventure, even if it might be a year or two before you can take it. What exactly is the Working Holiday Visa? For 18-to-30-year-olds, it’s your ticket to working and traveling around Australia for up to one year—if this is the first time you’re hearing about it, check out my previous article for the basics. Here, I’ll go over some important things to think about if you’re planning to apply for Australia’s subclass 462 Working Holiday Visa once it’s safe to travel again. When to Apply As of right now, the same rules apply: you must fill out the application before you turn 31. Once you do that, you’ll have one year from the date you receive your acceptance email to enter the country. As long as you complete three months of specified work before you turn 31, you’ll be eligible to apply for a second year, or even a third year if you complete six months of specified work during year two and are still young enough. Note that your WHV starts automatically the next time you enter the country, so don’t plan any other vacations to Australia until you’re ready; otherwise you may accidentally activate it and the clock will start counting down. Deciding Where You Should Live Fly into a major city like Sydney, Melbourne or Brisbane, and give yourself a few days to recover from jet lag, go sightseeing and see what the job situation is like. Stay in a hostel at first, as many can help with your initial job search and you’ll meet other travelers who are also trying to figure out where to go for work. Keep in mind that depending on where you’re heading—to another city or a remote town in the Outback—accommodation may be provided as part of your work arrangement, so always ask your potential employer if it’s part of the deal. Remote farms and pubs will sometimes provide accommodation, or at least steep discounts on shared dorms or other housing nearby if it’s not onsite. Hostels and Airbnbs are a good option, particularly if you’re working in more populated places, while house sitting is also worth looking into as there are generally listings all over Australia. Working for accommodation is another way to go, and typically means you’re making beds, cleaning rooms or tending to the front desk for a few hours a day in exchange for free or discounted nights. WorkAway and WWOOF take this idea a step further, acting as a barter system where you trade a certain amount of work or help around the house or farm for a place to stay for an agreed-upon amount of time. If you’d rather rent an apartment, join some of the popular backpacker groups on Facebook—Australian Backpackers or any of the main backpacker pages for your city—to see local listings. Flatmates and Flatmate Finders can also be helpful if you’d rather not rent an entire place yourself.
How to Find Work Word of mouth is key, whether it’s in the literal sense (you know a guy who knows a guy), by way of job postings on bulletin boards in hostels or via a Facebook group with backpacker jobs in your area. In fact, it can be far more effective to print out several copies of your CV and hand them out to prospective employers in person, especially if you’re looking for work in tourism or at a bar or café, as managers love to put a face to a name and may even offer you a trial shift on the spot if they’re interested. Other jobs may be posted online via sites like Backpacker Job Board, Gumtree, Seek or JobSearch, so apply through those as well. Seasonal job needs are also important. I worked as an usher at a performing arts center in Darwin through the dry season (May to September) and later, at Luna Park in Melbourne over the summer holidays (December to February), but applied a month before each so I’d have an edge over other job hunters. If you’re hoping to extend your WHV by doing farm work, this harvest schedule is handy for figuring out where to go depending on farming needs throughout the year. Get An Australian Tax File Number, Bank Account and SIM Card Whatever you do, make sure you get your Australian tax file number and do your taxes. While backpackers don’t necessarily need to file unless they make more than $37,000, you might be able to get some money back for uniforms, classes, certificates and other work-related expenses, so it can’t hurt. When you get hired, sign up for superannuation, or super, through your bank or employer so you can get some money back after you leave Australia. Open a checking account with a major Australian bank—the three largest are Commonwealth Bank, ANZ and Westpac—so you can get paid by your Australian employer. Having an Aussie SIM card and local phone number for employers to call is also important, and the three largest phone carriers—Telstra, Optus and Vodafone—provide the best coverage.
1abad5064851e0191bf3052247b66398
https://www.forbes.com/sites/kaifalkenberg/2011/11/22/one-great-idea-for-reducing-health-care-costs-provide-hospitals-at-home/
One Great Idea For Reducing Health Care Costs: Provide Hospitals At Home
One Great Idea For Reducing Health Care Costs: Provide Hospitals At Home Image by Getty Images via @daylife As part of Forbes.com's Human Ingenuity series, we ask staff writers, contributors and experts to weigh in with solutions to some of the nation's biggest problems. This month's focus is on reforming health care. We asked each to weigh in with one great idea for reducing costs while still providing high quality care. This response is from Jason Hwang of the Innosight Institute, the non-profit think tank founded by Clayton Christensen. One of the core tenets of disruptive innovation is to use technology to transfer services from costly, expertise-intensive settings into more affordable, convenient, and accessible venues. It’s why Jiffy Lube can perform your oil change instead of only an authorized dealership, why you can do your own taxes instead of hiring a certified accountant, and why you’re reading this very article online instead of waiting for a monthly, printed publication to be shipped to your mailbox. In health care, most disruptions have focused on the simple end of the spectrum of care delivery, such as retail clinics that offer immunizations and manage straightforward cases like sore throats and earaches. As a result, a common criticism of the overall impact of disruptive innovation is that it can only ever address care delivery that comprises just a sliver of our total health care spending. However, there are indeed disruptive innovations that attempt to address much costlier challenges, like the rising costs of acute, hospital care. The Hospital at Home model, which delivers hospital-level care to people in their own homes, was first developed by the Johns Hopkins School of Medicine but has since been implemented at several sites around the country. Presbyterian Healthcare Services in New Mexico manages the largest Hospital at Home program in the country, and has estimated its savings to be between $2,000 and $3,000 per case, while treatment time averages one-third shorter than for equivalent cases admitted to the hospital. Clinical outcomes are equal or better, and, unsurprisingly, the program is rated highly by both patients and providers. Hospital at Home can address hospital bed shortages – especially critical in underserved areas of the country – in a flexible manner that doesn’t require huge public outlays for constructing new hospitals, and it provides care in a manner and setting that is typically preferable to hospitalization. The fact that Hospital at Home can lower the costs of the most significant component of health care spending – with hospital care representing nearly a third of our national health expenditures – underscores its importance and why it’s essential that we disrupt the entire spectrum of care delivery.
4286f366bb1ae41e5a07d7a98a418ef5
https://www.forbes.com/sites/kaifalkenberg/2011/11/28/one-great-idea-for-reducing-health-care-costs-bring-back-house-calls/
One Great Idea for Reducing Health Care Costs: Use Technology to Bring Back House Calls
One Great Idea for Reducing Health Care Costs: Use Technology to Bring Back House Calls As part of Forbes.com's Human Ingenuity series, we ask staff writers, contributors and experts to weigh in with solutions to some of the nation's biggest problems. This month's focus is on reforming health care. We asked each to weigh in with one great idea for reducing costs while still providing high quality care. This response is from David Chase, CEO of the Patient Relationship Management company Avado, and founder of Microsoft's Healthcare business. Image via Wikipedia Marcus Welby is dead. Long live Marcus Welby. As an earlier contributor (Dr. Jason Hwang, co-author of The Innovator's Prescription) stated, one of the core tenets of disruptive innovation is to use technology to transfer services from costly, expertise-intensive settings into more affordable, convenient, and accessible venues. Doctors from Seattle to South Carolina are demonstrating exactly that by removing the 40% "insurance bureaucracy tax" from healthcare. You might think of it as "two parts Marcus Welby and one part Steve Jobs." The results have been staggering, with a 40-80% reduction in the most expensive facets of healthcare (surgeries, scans, specialist and ER visits). Primary care physicians consistently state that two-thirds of patient office visits could be done remotely via phone or email, but they are disincentivized from doing this by the convoluted reimbursement models of present-day health insurance. If doctors don't see the whites of your eyes, they won't get reimbursed, so the result is people taking half their day to get to and from their doctor's office, sit in a waiting room, wait in the exam room and then get seven minutes with their physician, not to mention dealing with all the billing hassles. Instead, modern-day Marcus Welbys are available via electronic means. For example, one physician operating in this new old model shared how he hasn't seen a patient with shingles in five years. His patients simply take a picture of their symptoms with their smartphone and email it to him to verify that they have shingles. In a few minutes, he can order a prescription and everyone has saved time and money rather than waiting days to get in to see their doctor. After all, what location is more convenient than one's home? Some direct primary care (DPC) providers such as WhiteGlove Health and Organic Medicine Now take it a step further and make house calls that harken back to the days of your family doctor stopping by your house just a few decades ago. Beyond that, they provide simple, yet sophisticated, technology tools for further patient convenience and savings that allow them to cut administrative overhead by 80% compared to a typical medical practice. Rather than patients filling out mountains of paperwork for what seems like the hundredth time, all patient-provider interactions can be done at the patients' convenience, whether it's filling out forms, scheduling appointments or requesting medication refills. Marcus Welby wouldn't recognize mainstream medicine today. In Welby's time, we didn't insure the equivalent of a car tune-up, but we do that in most health plans today. Nor did we have insurance middlemen who add no value sitting between you and your family doctor. Two related items should be done to save individuals, businesses and government a huge sum of money on healthcare. Make primary care more accessible IBM and other large employers have studied healthcare costs around the world. IBM alone spends roughly $2 billion per year on healthcare. 
Their studies came to a surprisingly simple conclusion about where countries were getting the best bang for the buck from their healthcare spend: more primary care access led to a healthier population, which, in turn, led to less money spent. MedLion, profiled as The Most Important Organization in Silicon Valley No One Has Heard About, has shown it can deliver high quality care with prices that are affordable for low-income workers. This model is referred to as Direct Primary Care (DPC) or Direct Patient Centered Medical Homes (D-PCMH). As we've seen in Massachusetts, health reform exacerbated a shortage of primary care physicians. Even before that, half of primary care physicians said they would leave medicine if they could. The biggest reason was not being able to practice medicine the way they were trained, a result of insurance-driven productivity goals. A secondary reason was monetary -- primary care physicians are the lowest paid physicians. A nice byproduct of DPC practices for primary care physicians is that they can practice medicine the way they were trained, and by cutting out the 40% "insurance bureaucracy tax" they are taking home significantly more income. Imagine if we scaled models such as MedLion's and other DPC pioneers such as Qliance nationally. It would be a boon for primary care providers who want to operate free of insurance, as it validates a model that has proven to yield better health outcomes while lowering costs dramatically. Fortunately, one of the least known elements of the new health law may be the most important. It’s the DPC provision allowing the separation of insurance from day-to-day healthcare to save 40% off the cost of primary care. See Health Insurance’s Bunker Buster for more. Showing the bipartisan support for this element of the health reform, a GOP Representative who fought against reform and is an MD has proposed a bill to utilize the DPC model with Medicare recipients. This leads into the next item. Demand a standard wrap-around insurance policy For DPC to work best, it is paired with a wrap-around insurance policy to cover non-primary care items. Widespread use of the DPC model would give insurance companies a consistent baseline, allowing them to underwrite a wrap-around policy to complement what is being delivered via the DPC package. This would accelerate the development of independent DPC practices as long as they offered the same baseline services (they are free to add things above that to differentiate their service). National scale is critical, as insurance companies can’t underwrite for something that is wildly variant. This gets health insurance back to its strength and what insurance is so good for – rare events you hope never happen versus insuring the equivalent of a car tune-up. In summary, a couple of simple steps can save hundreds of billions and save primary care in this country, which many view as a dying profession. The only downside to this plan is that some healthcare providers are resistant to change, but this is a small price to pay for reviving the economy and putting the country on a path where healthcare costs won’t bankrupt the nation.
8a0f5f45e50046a5b4653652368f7175
https://www.forbes.com/sites/kaifalkenberg/2012/07/14/kerry-kennedy-was-likely-sleep-driving-just-like-her-cousin-patrick/
Kerry Kennedy DUI Arrest Likely Caused by Sleep Driving -- Just Like Cousin Patrick's Capitol Hill Crash (Updated)
Kerry Kennedy DUI Arrest Likely Caused by Sleep Driving -- Just Like Cousin Patrick's Capitol Hill Crash (Updated) Zolpidem (Photo credit: Wikipedia) Kerry Kennedy's DUI arrest in a Westchester County, NY highway crash yesterday morning was likely caused by Ambien-induced sleep driving -- the same drug responsible for former U.S. Commerce Secretary John Bryson's crashes in L.A. last month leading to his cabinet resignation. The details of Kennedy's crash have all the hallmarks of sleep driving -- the bizarre but disclosed side effect which causes users of Ambien to get out of bed and drive their cars while still asleep with no memory of their actions. It occurred in the early morning, likely just hours after she took the drug. She continued driving even though she had a flat tire. She was disoriented. She remembers nothing of the incident. And just like in John Bryson's case, after Kennedy stopped the car at the bottom of an exit ramp, officers found her slumped behind the wheel. While Kennedy reportedly told officers on the scene she'd taken Ambien, spokesman Ken Sunshine later denied she had drugs or alcohol in her system. Sunshine has not yet commented on whether Kennedy may have been sleep driving. Like Bryson, reports on the Kennedy incident now claim a seizure may have caused it. That's despite warnings at the top of each Ambien medication guide advising users that taking it "may [cause you] to get up out of bed while not being fully awake and do an activity that you do not know you are doing [including] driving a car ('sleep-driving')." Many users refuse to believe this side effect can happen to them despite thousands of reported incidents in criminal dockets across the country and in the FDA's Adverse Event Database. Ironically, it was her cousin, former Congressman Patrick Kennedy, who first brought public attention to the problem of Ambien-induced sleep driving in 2006 when he crashed his Mustang convertible into a Capitol Hill barrier at 2 am, telling officers he was late for a vote. Kennedy had gotten out of bed after taking Ambien and an anti-nausea medication. Around the time of Patrick Kennedy's incident came a class action against the drug maker complaining of another curious side effect: sleep eating. Plaintiffs' lawyer Susan Chana Lask cited examples of clients gobbling strange things after partially waking up in the middle of the night -- raw eggs, including the shells, and buttered cigarettes. In the wake of the class action, and more than a dozen officially reported incidents of sleep driving, the FDA required the drug makers to revise the drug's label. It now warns the 39 million people who take Ambien that the drug can cause them to eat, have sex or drive without knowing it and with no memory of their conduct. But it makes no mention of the legal ramifications that users like Kerry Kennedy face if they're among the unlucky ones to suffer this purportedly rare side effect. (Ambien, made by French drug maker Sanofi, had peak annual revenues of $2.2 billion in 2006, the year before it went generic, according to IMS Health.) Defendants in drug-induced legal predicaments like Kennedy's have begun invoking a novel legal strategy: the Ambien defense. Citing the FDA-mandated label, they've argued that sleep driving is a side effect, not a criminal offense. Kennedy's likely to get a fairer shake asserting the Ambien defense than most. Other defendants have had mixed results. 
In some cases, it has worked, saving defendants from serious jail time in cases involving vehicular assault and manslaughter. This week, 45-year-old flight attendant Julie Ann Bronson faces sentencing for a vehicular assault charge resulting from a 2009 Ambien-induced sleep driving incident in San Antonio, Texas. Bronson drank several glasses of wine during the evening before later taking an Ambien, a combination the drug's label warns against. She pled guilty and faced ten years in prison for crashing into a family of three and severely injuring an 18-month-old girl. Bronson says she recalls taking an Ambien before going to bed and then waking up in a holding cell in her pajamas and barefoot. "A lady told me I'd assaulted a woman and a child," Bronson testified. "I'd never hit anyone in my life. It was surreal. It was like a bad dream." The jury believed she didn't intend to get into her car and, in a ruling last month, gave her probation instead. Bronson isn't the first to avoid jail following a sleep driving-related crash. In 2006, Ki Yong O, a 36-year-old lawyer from Andover, Mass., killed Anthony Raucci in an Ambien-induced sleep driving crash. In November 2007, a judge acquitted O of vehicular homicide, ruling he couldn't conclude "beyond a reasonable doubt" the defendant "was voluntarily intoxicated when he operated his motor vehicle." Two years later a Fresno, Calif. jury acquitted Donna Neely, 56, of similar charges resulting from a crash that killed Cho Thao Her, a mother of 11 children. Others have had less success with the Ambien defense. Josh Shortt, a Loudoun County, Virginia firefighter and cop, was convicted of DUI in 2008 following an Ambien-induced sleep driving crash. He appealed the conviction -- which cost him his law enforcement career -- all the way to the U.S. Supreme Court, which declined to hear the appeal in March. In many cases, like Josh Shortt's, judges and prosecutors have found the notion of sleep driving inherently implausible despite FDA recognition that it can -- and has -- happened. Though the FDA recommended drug makers investigate how and why it happens, no studies have been done. That may explain why, despite the label change, physicians and patients continue to underappreciate the risk of it occurring. In a forthcoming article in Marie Claire magazine, I'll be chronicling the Ambien-induced sleep driving phenomenon and the devastating impact the nation's most widely prescribed sleep drug has had on many unsuspecting users.
a5202e6f0decf871151e92d1ce293c71
https://www.forbes.com/sites/kaifalkenberg/2013/01/02/why-rating-your-doctor-is-bad-for-your-health/
Why Rating Your Doctor Is Bad For Your Health
Why Rating Your Doctor Is Bad For Your Health SUFFERING FROM A TOOTHACHE, a South Carolina woman headed to her local emergency room a few months ago. The doctor there responded by administering Dilaudid, a powerful intramuscular narcotic typically reserved for cancer-related pain. Why, his nurse queried, was he killing a flea with a sledgehammer? Afraid of malpractice? No, the doc replied, Press Ganey. "My scores last month were low." Press who? The little-known company has become a hated target of hospital physicians, outstripping even trial lawyers. Utter its name in an emergency room and you'll likely unleash a cloud of four-letter words. Based in South Bend, Ind., Press Ganey is the nation's leading provider of patient satisfaction surveys, the Yelp equivalent for hospitals and doctors, and a central component of health care reform. Over the past decade the government has fully embraced the "patient is always right" model--these surveys focus on areas like waiting times, pain management and communication skills--betting that increased customer satisfaction will improve the quality of care and reduce costs. There's some evidence they have. An ObamaCare initiative adds extra teeth, to the tune of $850 million, reducing Medicare reimbursement fees for hospitals with less-than-stellar scores. Accordingly, hospitals kowtow to Press Ganey. In November nearly 2,000 administrators spent $1,100 or more each to attend Press Ganey's glittery client conference--a closed-to-the-public affair in Washington, D.C., with keynotes by Jeb Bush and astronaut Mark Kelly and his wife, former congresswoman Gabby Giffords. Press Ganey is helping hospitals fulfill their mandated obligation. Some have taken an extra step, tying physicians' compensation to their ratings. That may sound like a good thing. Why shouldn't you grade the quality of your medical care, the way that you pass judgments on other services, whether hotel stays via TripAdvisor or contractors via Angie's List? The short reason: The current system might just kill you. Many doctors, in order to get high ratings (and a higher salary), overprescribe and overtest, just to "satisfy" patients, who probably aren't qualified to judge their care. And there's a financial cost, as flawed survey methods, and the decisions they induce, produce billions more in waste. It's a case of good intentions gone badly awry--and it's only getting worse. FOR ALL THE DOCTORS OUT THERE CARPING about these surveys, a message from Press Ganey CEO Patrick Ryan, a veteran health care executive: Suck it up. "Nobody wants to be evaluated; it's a tough thing to see a bad score," he says. "But when I meet with physician groups I tell them the train has left the station. Measurement is going to occur." But what exactly are Press Ganey and its two main rivals, the Gallup polling company and the publicly traded National Research Corp., measuring? Customers know what they want when they review spaghetti carbonara for Zagat. But giving patients exactly what they want, versus what the doctor thinks is right, can be very bad medicine. Last February researchers at UC Davis, using data from nearly 52,000 adults, found that the most satisfied patients spent the most on health care and prescription drugs. They were 12% more likely to be admitted to the hospital and accounted for 9% more in total health care costs. Strikingly, they were also the ones more likely to die. Why? 
The UC Davis authors posit that the most satisfied patients have a higher mortality rate because they receive more discretionary services--interventions that carry a risk of adverse effects. Even routine screenings for diseases like prostate cancer can lead to unnecessary drugs and operations with allergic reactions and surgical complications that leave patients worse off. (While the report controlled for age and health status, critics have challenged its methodology and claimed its findings are overstated. But other studies also confirm that patient satisfaction is not always a reliable index of good care.) "Numerous studies have found that patients are consistently highly satisfied with one of the most common downsides of medical care--false-positive test results and the downstream events that follow," wrote Dr. Brenda Sirovich of the VA Outcomes Group in White River Junction, Vt., commenting on the UC Davis study. "Almost any unnecessary or discretionary test has a good chance of detecting an abnormality." Such testing "is a double-edged sword," explains Dr. H. Gilbert Welch in his 2011 book, Overdiagnosed, often leading to "the detection of abnormalities that are not destined to ever bother us." Our health care system already suffers from a "more is always better" fallacy. "Practicing physicians have learned--from reimbursement systems, the medical liability environment and clinical performance scorekeepers--that they will be rewarded for excess and penalized if they risk not doing enough," says Sirovich. An overreliance on patient surveys, she says, only inflames the problem of overtreatment. Press Ganey's Ryan points the finger elsewhere: "If there's anything going on that's driving somebody to test more, it's the fear of malpractice." For the past few decades that's certainly been true. Money drives professional behavior for doctors, as it does for virtually everyone else, and the soaring insurance premiums that come with a malpractice suit have surely affected decision making. But as hospitals and other employers increasingly tie physicians' compensation to patient wishes, doctors are pushed even further down the dangerous path of overtreatment. Nearly two-thirds of all physicians now have annual incentive plans, according to the Hay Group, a Philadelphia-based management consultancy that surveyed 182 health care groups. Of those, 66% rely on patient satisfaction to measure physician performance; that number has increased 23% over the past two years. THE MATH IS NOW SIMPLE FOR DOCTORS: More tests and stronger drugs equal more satisfied patients, and more satisfied patients equal more pay. The biggest loser: the patient, who may not receive appropriate care. "By creating a monetary incentive to increase patient satisfaction, the government is not only increasing its expenses but promoting a metric that significantly increases death rates," says William P. Sullivan, an emergency room doctor in Spring Valley, Ill. He's a long-time critic of patient surveys. Another emergency room doctor in Columbia, S.C., who requested that we withhold his name, walked us through how this plays out in real life. Between 5% and 7% of his compensation--some $10,000--is dependent on high Press Ganey scores. So when the family of an elderly woman insisted that she be admitted to the hospital after stroke-like symptoms, he agreed to do so, even though her test results were negative and he wanted to send her home. "Her family refused, and they told me so," he recalls. 
"Do I call security and escort them out? I was more concerned with them giving me a bad patient-satisfaction survey score than her going home and having a stroke," which he considered highly unlikely. In admitting the patient, he exposed her to hospital-borne infections and, worse, a hefty bill for, as he puts it, "us doing nothing since she didn't satisfy the criteria for admission." These ethical dilemmas are playing out across the country, with the need to please customers often trumping their health. In a recent online survey of 700-plus emergency room doctors by Emergency Physicians Monthly, 59% admitted they increased the number of tests they performed because of patient satisfaction surveys. The South Carolina Medical Association asked its members whether they'd ever ordered a test they felt was inappropriate because of such pressures, and 55% of 131 respondents said yes. Nearly half said they'd improperly prescribed antibiotics and narcotic pain medication in direct response to patient satisfaction surveys. One emergency room with poor survey scores started offering Vicodin "goody bags" to discharged patients in order to improve their ratings. And doctors face the reality that uncomfortable discussions on behavioral topics--say, smoking or obesity--come with the risk of a pay cut. "The challenge is how do we discuss this with the patient so the patient doesn't leave unhappy," addiction specialist Dr. Aleksandra Zgierska recently told the AMA's American Medical News. "Saying yes is easy." OVERTREATMENT IS MORE THAN A SILENT KILLER. It's cripplingly expensive. Drill down to almost any procedure that keeps skittish patients happy and the price tag is enormous. Overused prostate cancer screenings? At least $3 billion a year. Unnecessary antibiotics? Another $1 billion annually--with the added harm of creating drug-resistant bacteria. All told, overtreatment accounted for up to $226 billion in 2011, for things like unnecessary procedures and prescriptions that don't help patients. That's according to Donald M. Berwick, the former administrator of the Centers for Medicare & Medicaid Services (CMS), which oversees those programs. Another $55 billion a year is directly tied to the abuse of prescribed opiates. Ironic, since government-mandated surveys were supposed to cut medical costs. Until the government got involved, surveying patients was a sleepy niche business. Then, in 2002, CMS announced a national program to survey patients and require public reporting of the results. The move was part of a Bush Administration initiative to improve accountability and public disclosure and to empower patients to make more informed decisions about health care. Instead, that move empowered Press Ganey. When it was founded in 1985 by an anthropologist and a sociologist from Notre Dame, just a handful of hospitals routinely asked patients if they were happy with the care they received. The practice expanded gradually by the late 1990s. The federal mandate transformed a voluntary expense into a compulsory one, increasing demand exponentially. Hospitals had to turn to companies like Press Ganey, which administers the federal survey for them and rates other units not covered by the mandate, like the ER. Investors have also reaped the rewards. Press Ganey was taken private in 2003 by American Securities, a New York private equity firm, for a reported $100 million. Four years later it was flipped to another private equity outfit, Vestar, for a reported $673 million. 
Since then revenue at Press Ganey has grown at high single digits; it earned $82 million (Ebitda) on $217 million in sales in 2011. ObamaCare's "pay-for-performance" program is providing yet another boost. Starting last October, hospitals that perform poorly on quality measures forfeit 1% of their Medicare payments, a number that doubles by 2017, putting some $2 billion at risk. Thirty percent of that determination will be based on the hospital rankings from mandated patient surveys. That means, in some cases, hospitals are throwing money at things like new elevators and valet parking. It means doctors will be under yet more pressure to give their customers what they want. And it means ever more clout for operations like Press Ganey. It processed 70 million patient surveys last year from 10,000-plus health care organizations and half of all U.S. hospitals. And it stands to gain even more from groups like the Cleveland Clinic, which already spends $500,000 a year on government-dictated surveys. JUST HOW RELIABLE ARE THOSE SURVEYS? Many doctors question their validity, starting with the sample size. Given that physicians are often judged on a handful of survey responses, though they see hundreds of patients, it seems crazy to tie their compensation to misleading results. Press Ganey admits that survey sample sizes sometimes are too small and says a minimum of 30 responses for an ER is necessary to draw meaningful conclusions from its data. But William Sullivan, the Illinois ER doctor, says that Press Ganey reports monthly results to his hospital even when there are as few as eight to ten surveys. His department has ranked in the first percentile one month and in the 99th percentile two months later. "Response rates have been dramatically declining over the past decade," says Paul Alexander Clark, founder of SmartPatient, a health care analytics company. He should know: Until 2007 Clark was in charge of Press Ganey's patient-satisfaction improvement group. The response rates, he says, are now "too low to produce reliable results." Insiders have known this for a decade. "This is a dirty little secret in our industry," a senior Gallup executive wrote in a 2002 letter to the CMS chief. "At those levels the standard rules of probability don't exist. ... This means you may or may not be tracking real patient attitudes." CMS declined several requests to comment on the record. Press Ganey says response rates are high enough to provide "scientifically valid results." Why not simply raise the response rate or increase sample size? Not so easy for an industry that largely still relies on inefficient mail and phone surveys. "It's a very expensive proposition to mail surveys, and it's a very labor-intensive proposition to have call centers to call people," explains Clark. And regardless, physicians complain that patients who do respond are a self-selecting group, either extremely happy with or furious at their doctors. But Press Ganey's Ryan says there's no proof that "only the angry people respond." Many complicating factors, say Clark and others, further taint survey results, including geographical, cultural and racial differences among patients. Community-based hospitals in the Southeast generally rate far higher than large academic hospitals in the northern part of the country. One Cleveland Clinic study evaluating survey bias found that no U.S. hospital with 500-plus beds has scored in the top tenth percentile when it comes to basic communication by doctors and nurses. 
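Back to the sample-size problem: to see why a handful of surveys can whipsaw a department between the first and 99th percentiles, it helps to run the numbers. The following Python sketch is purely illustrative; the 1-to-5 rating scale, the score distribution and the 200-hospital peer group are assumptions made for demonstration, not Press Ganey's actual methodology.

```python
import random
import statistics

# Illustrative assumptions (not Press Ganey's real parameters):
# 200 peer ERs, all with the same true mean satisfaction of 4.2
# on a 1-5 scale; individual responses scatter around that mean.
random.seed(1)
TRUE_MEAN, SPREAD, PEERS = 4.2, 0.9, 200

def monthly_score(n_surveys):
    """Average of n survey responses, each clipped to the 1-5 scale."""
    return statistics.mean(
        min(5.0, max(1.0, random.gauss(TRUE_MEAN, SPREAD)))
        for _ in range(n_surveys)
    )

def percentile_rank(score, peers):
    """Share of peer scores strictly below this score, as a percentage."""
    return 100 * sum(p < score for p in peers) / len(peers)

for n in (8, 30, 100):
    peers = [monthly_score(n) for _ in range(PEERS)]
    ranks = [percentile_rank(monthly_score(n), peers) for _ in range(12)]
    print(f"n={n:>3}: twelve monthly percentile ranks span "
          f"{min(ranks):.0f} to {max(ranks):.0f}")
```

With every department identical by construction, an eight-response month can still land almost anywhere in the peer rankings purely by chance; only at much larger sample sizes do the monthly ranks settle down. That is exactly the month-to-month whiplash Sullivan describes.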
What's the right way to embrace a patient's point of view? Experts like Josh Fenton, an author of the UC Davis study, advocate aligning incentives with evidence-based care, but that's easier said than measured. He also suggests encouraging doctors to modify patient expectations. "Handing them a Z-Pak for a cold may make a patient very satisfied, but it costs our health care system $100 and puts the patient at risk of side effects." Far better to explain why that's not in a patient's best interests. Again, easier said than done (especially for time-pressed physicians): You can tell people to eat their vegetables all you want; they're still going to remember more fondly the person who gave them a slice of cake. Others are trying to disrupt the process altogether. Palo Alto, Calif., plastic surgeon Steven Bates was so fed up with the inefficiencies, low response rates and expense of existing survey vendors that he and a few colleagues teamed up to launch DocsVox. Instead of relying on paper and phone calls, its platform integrates e-mail, social media and mobile technology to manage doctor-patient relations. "I want to see a system that works," says Bates, "generating a robust number of patient responses using technology that patients like and is easy to use." DocsVox, which has raised two undisclosed rounds from friends and family, is beta-testing the platform with Bay Area-based clients. Press Ganey is already moving the business of surveying doctors well past emergency rooms and hospital visits. The widening government mandate provides a push. "The hospital is going to need to see your experience across the spectrum of care," says Press Ganey's Ryan. Next stop: clinics and doctors' offices across America.
312778724f82e61da42a65b638ba161d
https://www.forbes.com/sites/kaipetainen/2013/11/03/hard-work-and-little-sleep-how-st-lawrence-students-pitched-nq-mobile-at-ross-stock-pitch-competition/
Hard Work And Little Sleep: How St. Lawrence Students Pitched NQ Mobile At Ross Stock Pitch Competition
Hard Work And Little Sleep: How St. Lawrence Students Pitched NQ Mobile At Ross Stock Pitch Competition Imagine for a moment that you’re a portfolio manager, or an analyst who covers NQ Mobile. Muddy Waters comes out with a report that accuses the company of fraud. The stock drops. How does that analyst or that portfolio manager react? Students at St. Lawrence University were faced with this exact situation, and how they reacted to it is a fantastic example of teaching with real situations. Evan Pete Walsh, Andrew Chan, Vasileios Prassas and Justin Champlain hard at work in the Tozzi trading floor at Ross. Photo credit: Kai Petainen Each year, the Ross School of Business holds an undergrad stock pitch competition. Students from some of the top business schools come and pitch a stock. Students prepare for weeks as they get ready for the actual stock pitch. Financial models are made, comparables analysis is performed and students from different schools dig into the companies and present their ideas in front of a panel of judges. The winner of the competition wins $3,000. But, just like in real life, students sometimes face an incredible obstacle to their stock pitch. Occasionally, a company will announce earnings just before the stock pitch, and the company might blow past their target price. Or, the company has a bad earnings announcement, and the stock plummets just before the students pitch it. This can create a true ‘horror story stock pitch’. This sort of event happens in real life as well, as analysts and portfolio managers are forced to re-evaluate their thoughts on the stock. UPDATE: The students found out that Longbow Research disclosed an 8% stake in NQ. Ironically, Longbow Research was also a judge at the competition. Well, that’s exactly what happened to the students from St. Lawrence University (SLU). They were ready to present NQ Mobile at the competition. They spent five weeks working on the stock pitch, and with one week to go before the competition, they were ready to pitch the stock. Muddy Waters came out with their report and the stock dropped. What would the students do? The competition was only a few days away. The students couldn't pitch NQ as a short, as the competition was only focused on long pitches. Would they pick another stock? So, I asked Evan Pete Walsh, Vasileios Prassas, Justin Champlain and Andrew Chan from St. Lawrence University for their thoughts, and this is what they had to say: On Thursday (two days before the competition) we scheduled a meeting with the investment committee of the Board of Trustees here at SLU in order to get feedback on our presentation. One hour before the meeting, I checked my phone and saw a nearly 50% drop on NQ. I rushed to get the team together, find out what happened, and come up with something to say to the trustees. We finally decided to present the initial presentation first, and tell them about the Muddy Waters news later. We all had panicked initially, but we instantly realized the incredible opportunity we had on our plate. We would go to Michigan with one of the best special situations opportunities in the market. Our meeting with the Trustees ended up lasting 3 hours and resulted in great feedback and ways to overcome this barrier. After the meeting we got together and worked on figuring out what to do until 2am that night, when we left for the airport sleepless. 
We initially thought of pitching a convertible arbitrage opportunity with the convertible bonds that NQ issued on October 9th, but bonds were not allowed in the competition. On our way to Michigan we tried to stay in tune with management announcements and company updates, while we kept discussing how to approach the pitch. Long story short, we got to Michigan and had no time to talk about our company because of all the activities and speakers we had to attend in Ann Arbor. It was 9pm, after the Friday night dinner, and that was when we decided to restructure our whole presentation and finally pitch NQ as a long recommendation. We decided to take advantage of our deep dive research and construct counter-arguments to the Muddy Waters report, which we did. We got back to the hotel after the dinner and worked on putting the pitch together until 2am when we decided to go to bed. We slept for three hours and woke up at 5am to polish it up and rehearse it as much as possible. Due to our due diligence and knowledge, we never lost faith in NQ being an exceptional company with amazing potential. We rode the bus to the Ross School of Business smiling and discussing how crazy our experience had been. We were not stressed. We had the best story to present, great presentation skills, and an abundance of knowledge on the company. Throughout the whole time we had an adrenaline rush going through our veins and we were certain that we would leave our mark there. Evan, Vasileios, Justin and Andrew – I believe you left a mark. I’m going to remember you. This team from St. Lawrence was faced with an incredible obstacle to their stock pitch and they worked hard through it. Most stock pitches don't face this sort of challenge, and this team didn't give up. Hard work and very little sleep – this was a true example of a real life situation for an analyst or even a portfolio manager. Regardless of the outcome for NQ in the long run, the students worked hard and reacted quickly to the situation that was placed before them. I must admit, this is going to be an amazing story and learning experience for their job interviews. Kai Petainen's views on the market and stocks are his alone, and do not reflect the views of the Ross School of Business or the University of Michigan. Kai teaches a class on quant screening, F334 -- Applied Quant/Value Portfolio Management, at the Ross School of Business. Kai is an MFolio master at Marketocracy, and is featured in Matthew Schifrin’s book, "The Warren Buffetts Next Door".
141f1914e146cba8ba4bc48e14399908
https://www.forbes.com/sites/kaitlynmcinnis/2020/07/26/private-air-travel-has-increased-by-75-amid-coronavirus-pandemic/?sh=429832e7204f
Private Air Travel Inquiries Have Increased By 75% Amid Coronavirus Pandemic
Private Air Travel Inquiries Have Increased By 75% Amid Coronavirus Pandemic Air Charter Service While many Americans continue to spend the summer discovering their own backyard or packing up the car for socially distanced road trips, other American travelers have opted to indulge in a more premium form of domestic travel. According to Air Charter Service, inquiries for private air travel have reportedly increased by 75% year-over-year during the spring and summer months, noting that while commercial flights are still struggling to return to some sense of normalcy, private air travel already treats health, personal space and hygiene as top priorities. “We have been arranging private jet travel for existing clients as well as new clients who are looking for a better way to travel than commercial and more peace of mind,” said Richard Thompson, president of Air Charter Service Americas. “With the current pandemic, those who normally travel in first class commercially are elevating their experience with personal well-being in mind, especially for older clients who have more health concerns.” Air Charter Service While commercial airlines are working to improve their health and safety measures, the allure of a privately chartered flight during a global pandemic is hard to overlook: not only will travelers avoid busy airports, they’re also able to easily stay within their ‘bubble’ of close family and friends while planning for a socially distanced domestic vacation. “It’s a more streamlined process to get from A to B, whether that’s for business or leisure,” explained Thompson, adding that, “the greatest number of private aircraft users are North Americans and half of our calls are from new customers.” The appeal of a privately chartered flight is especially relevant for multi-generational families or smaller groups of friends that have chosen to isolate together while traveling to outdoor spaces, including wilderness areas or exclusive islands. According to Thompson, some of the most popular destinations include Jackson Hole, Wyoming; Bozeman, Montana; and Aspen, Colorado, where many travelers own a vacation home or plan to spend an extended vacation outside of the city, which makes chartering a one-off private flight a much more accessible solution for socially distanced travel.
61604a1a99be01132cac39d9e4b5a1ea
https://www.forbes.com/sites/kaitlynmcinnis/2021/01/27/the-boat-rental-marketplace-had-a-record-breaking-2020-despite-covid-19/?sh=32eb6a184d90
The Boat Rental Marketplace Had A Record-Breaking 2020 Despite Covid-19
The Boat Rental Marketplace Had A Record-Breaking 2020 Despite Covid-19 getty While the coronavirus impacted—and continues to impact—the travel and tourism industry as a whole, the boat rental marketplace, as it turns out, has seen a significant uptick in bookings throughout 2020 and into the new year. According to Boatsetter, the world’s leading peer-to-peer boat rental marketplace, hours on the water and rental inquiries were significantly up over the past months, with 120,000 hours spent on the water in 2020 alone. The “Airbnb for boats” ranges from everyday sailboats to luxury yacht charters in over 600 locations across the country—making for a safe and easy staycation without sacrificing luxury appeal and comfort. “At Boatsetter, everyone is passionate about our mission to make boating accessible to anyone, anywhere. We enable people to connect on the water in a way that they never thought before, during this pandemic, between boat owner and renters, captains and crews, we make unforgettable experience possible in the changing world,” explains Jaclyn Baumgarten, Boatsetter CEO and co-founder. Luxury yacht in marina at night. getty “2020 has been a challenging year for all of us, we’ve delivered over 15,000 memorable trips on the waterways around the world for more than 60,000 happy customers, to come together safely. We are well poised to achieve the exponential growth in the $50 billion recreational boating industry, and connect more people on the water, enjoying the experiences that last a lifetime.” Interested in booking your own luxury staycation on the water? Bookings for 2021 are currently open. Those wishing to use the platform to rent out their personal boats can also find more information at the official Boatsetter website.
719ea6a18800d7389bc6750867c50a30
https://www.forbes.com/sites/kaitlynmcinnis/2021/02/26/a-brief-guide-to-navigating-pool-season-in-las-vegas/
A Brief Guide To Navigating Pool Season In Las Vegas
A Brief Guide To Navigating Pool Season In Las Vegas Excalibur pool and cabanas MGM Resorts After a long, isolating winter, spending a little time by the pool doesn’t sound too bad right now. And if you’re lucky enough to be planning a road trip to Las Vegas, you can count on pools being open as soon as March 1. “While people go to D.C. for cherry blossom season and Colorado for ski season, travelers come from all over the world to be a part of the Vegas pool season,” said Ari Kastrati, Chief Hospitality Officer at MGM Resorts International. “The city is known for its hot summer weather and its cool pool environments. Our teams are ready to help guests escape and enjoy a much-needed respite.” While international travel is still largely frowned upon as we go into almost a year of battling the ongoing pandemic, poolside lounging is one of the easiest ways to enjoy a little luxury getaway without having to sacrifice social distancing or other CDC guidelines. Whether you’re looking to soak up a day at the beach, a spa-like escape, or you just want a rooftop getaway with your partner in crime, we’ve rounded up some of the best pools in Sin City—for all types of poolside loungers. Bellagio Bellagio Cypress Pool Fountain MGM Resorts Best for: Romantics Bellagio’s Mediterranean pool invites guests to a soothing dip in refreshing azure waters. The property includes an exquisite garden vista café, massage treatments and, the best part, an adults-only pool with premium chaise lounge chairs. Vdara Hotel & Spa Best for: Those who love a good view Looking for a luxurious rooftop getaway? The Pool & Cabanas at Vdara offer private retreats and cabanas with semi-private plunge pools—but the stunning views of the neighboring ARIA architecture really take the cake. MGM Grand MGM Grand lazy river MGM Resorts Best for: Fun lovers Whether you’re looking for a family-friendly adventure in the sun or you’re someone who prefers to make a splash rather than lounge in a chaise all day, MGM Grand’s 6.5 acres of waterside fun is for you. Featuring four pools, three whirlpools, waterfalls, and even a lazy river, there’s tons of room to stay socially distanced while having an unforgettable day. Mandalay Bay Best for: Beach bums Sure, you might be in the middle of the desert, but that doesn’t mean you can’t take advantage of all the best parts of the beach! Mandalay Bay boasts an 11-acre aquatic playground that offers up a wave pool, lazy river, and even a lagoon—all right on The Strip. Four Seasons Las Vegas Four Seasons Las Vegas cabana MGM Resorts Best for: Sanctuary seekers There’s a lot of thrill and adventure to be found in Las Vegas—especially come summertime—but sometimes, all you really need is a blissed-out spot to relax and be pampered. The Four Seasons Las Vegas is just the ticket for those looking to relax in luxury and privacy. Poolside amenities include complimentary fruit and smoothies, and of course light bites and a myriad of mixed drinks, wine and champagne.
f9e5bb6c188a4b5984bfbf04195b5cb5
https://www.forbes.com/sites/kaitlynmcinnis/2021/02/26/these-are-the-most-searched-travel-questions-right-now/
These Are The Most Searched Travel Questions Right Now
These Are The Most Searched Travel Questions Right Now Woman using laptop and making a reservation for her travel. getty We all miss the thrill of hopping off the plane after a long flight, armed with Google Maps and ready to explore a far flung destination, but as the war against COVID-19 continues, traveling has become more precarious than we could have ever imagined. Now more than ever, travelers are looking to stay more informed about their future trips, dream destinations, and travel restrictions, and that is reflected in what people have been Googling when it comes to travel. Club Med used Google search data to find the top ten travel questions people are searching for right now—and the results may surprise you. From the expected “when can we travel again” to more practical questions like, “is travel insurance worth it?” and “will travel resume in 2021?”, these are the top questions on everyone’s lips—and exactly how travel experts are navigating these questions. 1) Is traveling a hobby? “Yes. If you do it in your spare time for pleasure, it’s a hobby,” Club Med travel experts explained. “For many people, travel ticks those boxes. The rise of DIY holiday packages and price comparison sites has meant that many people can travel cheaply and more frequently across nearby countries. Because of this, more and more people consider traveling to be one of their hobbies.” 2) Why is traveling important? Not only does travel help broaden our understanding of the world around us and develop an appreciation for other cultures and traditions, it also helps build memories with family and friends. 3) Is travel insurance worth it? In short… yes! Emergency medical treatment overseas can be pricey, and especially during the pandemic, you don’t want to be stuck footing your own medical bill. 4) Is traveling safe right now? There’s no guarantee that traveling right now is completely risk-free. 5) How do travel bloggers make money? “Travel bloggers can make money in different ways, including selling travel photos, affiliate marketing, placing ads on their blogs, social media collaborations and paid posts, and monetizing YouTube videos,” explains Club Med. “Travel bloggers with large platforms can sometimes also make money through public speaking opportunities and freelance writing gigs.” 6) Why is traveling good for you? The physical and mental health benefits of traveling are real: the vitamin D boost from visiting sunnier climates can strengthen immunity and promote healthy bones, teeth and muscles. Traveling also helps promote gratitude, develop positive habits and, in certain cases, it can easily be considered educational as well. 7) Which travel credit card is the best? There are endless travel credit cards on the market right now—each with its own benefits and things to consider based on your lifestyle and needs. 8) Can traveling affect your period? Yes (unfortunately!) “Many people who menstruate experience changes to their menstrual cycle while on holiday,” explains Club Med. “Although traveling is usually associated with relaxation and de-stressing from everyday life, the stresses it puts our bodies through (late nights, diet changes) can cause changes to periods.” 9) Will travel resume in 2021? Nobody can really say exactly when travel will resume—but with vaccines becoming more available, it’s hopeful that travel will resume in the not-so-distant future. 10) Do travel agents still exist? 
While many people opt to plan their own holidays, according to Club Med, travel agents continue to thrive as specialists in areas such as sustainable travel, long-haul and ski trips.
2a4e5015c7ff57d95c9f3082caacc9d4
https://www.forbes.com/sites/kaitlynmcinnis/2021/02/26/this-classic-hotel-chain-is-encouraging-would-be-travelers-to-plan-post-pandemic-trips/
This Classic Hotel Chain Is Encouraging Would-Be Travelers To Plan Post-Pandemic Trips
This Classic Hotel Chain Is Encouraging Would-Be Travelers To Plan Post-Pandemic Trips The Guitar Hotel at Seminole Hard Rock Hotel & Casino Hollywood (Photo by Johnny Louis/Getty Images) Getty Images Let’s be honest: there’s no better feeling than planning your dream vacation. From scrolling through flight paths and scouring for must-try restaurants, to finally hitting “book” on that luxury hotel stay… the act of planning a trip is almost as fun as the trip itself! In fact, according to a 2010 study published in The Official Journal of the International Society for Quality of Life, just planning your trip actually does make you feel happier than finally going on the trip. The anticipation and build-up that come along with planning a trip last much longer than the vacation (in most cases, anyway), and people who have something to look forward to are generally happier than those who don’t have an escape on the horizon. Needless to say, as we enter the twelfth month of the global COVID-19 pandemic, most of us have been putting off planning for vacations due to the uncertainty surrounding travel restrictions and the fear of getting sick—but Hard Rock International is hoping to remedy that lack of anticipation with a new “Getaway Together” package. The brand’s global offer will include room discounts and exclusive benefits across participating hotel stays that will allow travelers to get excited about a potential getaway without having to actually go until they’re comfortable. Numerous Hard Rock Hotel properties are participating in the offer, allowing guests to receive discounts of up to 50% off on rooms, early check-in/late check-out based upon availability and a fully flexible booking policy with unqualified rates. Properties will also offer free stays for children, welcome beverages, complimentary travel insurance upon arrival, and even two complimentary COVID-19 tests per room. All locations have also implemented the SAFE + SOUND protocol—the brand’s health program developed by experts to guarantee sanitary protocols exceed expectations—to help ensure guests feel safe once travel resumes. Offers are available across 22 different Hard Rock Hotel locations—from Cancun to the Maldives—and booking is officially open. Additional information can be found on the official Hard Rock Hotels website.
b5d4b22cb9e17dd4fd3e002e6cb000da
https://www.forbes.com/sites/kaitlynmcinnis/2021/02/26/you-can-now-live-next-to-charles-dickens-london-home/
You Can Now Live Next To Charles Dickens’ London Home
You Can Now Live Next To Charles Dickens’ London Home Exterior view of 101 on Cleveland 101 on Cleveland English literature lovers and real estate aficionados have a new overseas property to add to their must-visit bucket list: the residence next to Charles Dickens’ childhood home. That’s right: the home next door to where Charles Dickens grew up (and opposite the building which inspired him to write Oliver Twist) is hitting the real estate market in London this week. The residence, which has had a makeover by design studio Bergman & Mar, is located at 101 Cleveland Street—just a hop, skip, and jump away from Dickens’ first home. English literature lovers are sure to recognize the Cleveland Street address in London, made famous because No. 22 was Dickens’ first home. According to legend, a workhouse on this street inspired him to write one of his most famous books, and it’s where the infamous line ‘please sir can I have some more’ was drawn from. Interior view of 101 on Cleveland 101 on Cleveland The three-bedroom residence spreads over 1,147 square feet with a dedicated home office and masculine design complete with 15 pieces of bespoke furniture and loads of paintings from the Modernist era. Inspiration also included Fitzrovia’s 18th century furniture makers like Thomas Chippendale. “In the 18th Century, Fitzrovia was the epicentre of London’s craft scene with skilled French and Spanish artisans establishing intricate furniture stores alongside British traders like Thomas Chippendale known for his extraordinary mid-Georgian and Rococo cabinets,” Dukelease Properties explained in a joint press release. “Today, design studio, Bergman & Mar, were inspired by Chippendale’s appreciation of style and eclecticism.” Interested in learning more about this storied property? More information on the stunning property—including pricing, units available and more on the iconic location of the building—is available on the official 101 on Cleveland website.
8bbfbd14be529a01d5cdeb7b01a02c98
https://www.forbes.com/sites/kaleighmoore/2019/07/19/can-fashion-retailers-implement-sustainable-practices-as-demand-for-denim-grows/?sh=3e5c53e5edaa
Green Jeans: As Demand For Denim Grows, Can Retailers Implement Sustainable Practices?
Green Jeans: As Demand For Denim Grows, Can Retailers Implement Sustainable Practices? Getty More than 364 million pairs of women’s jeans were purchased between February 2018 and February 2019, according to a study by the NPD Group. The demand for denim is only growing: Data shows the $16.4 billion industry for jeans in the U.S. grew 5% in 2018, driven largely by increased purchases within the women’s vertical. While the retail implications of this rising demand and product interest are exciting for brands with denim collections, the uptick in denim production it entails also comes with considerable environmental impacts. Sustainability in Denim shows that cotton cultivation and processing, for example, require about 1,500 gallons of water to grow the 1.5 pounds of cotton necessary to produce a single pair of jeans. New efforts echo the need for change in this industry. On July 16, The Ellen MacArthur Foundation, a nonprofit focused on the circular economy and sustainable practices, released a set of guidelines called the "Jeans Redesign," which strives to address waste within the denim industry by setting minimum requirements around materials, durability and more. "The guidelines are based on the principles of the circular economy and will work to ensure jeans last longer, can easily be recycled, and are made in a way that is better for the environment and the health of garment workers," their press release stated. Big brands are taking note, as evidenced by news of Wrangler, Madewell and Gap signing on to participate in the foundation's call for more sustainable denim. Wrangler is working to reduce the energy required to dye its denim products, Madewell released a line of fair trade-certified denim, and Gap is working to achieve 100% sustainably-sourced cotton for its denim items by 2025. These well-known brands aren’t the only ones focusing on green efforts when it comes to denim. There are a handful of independent brands who are actually ahead of the curve when it comes to making denim more eco-friendly. Take Boyish, for example. This denim brand has a top-to-bottom sustainability model, including a manufacturing process that uses one-third the typical amount of water in production—all of which is then recycled and reused. For Jordan Nodarse, the founder of Boyish, the focus on sustainability came from firsthand experience working in a denim factory where he was hands-on with the jean production process. “I learned about all the different chemicals and how much water it takes to make a single pair of jeans,” he said. “It never seemed right to me to have to use so many resources to make [jeans], so I wanted to find another way.” Today, Boyish uses a dyeing process that leverages reduced indigo with 80% less sulphates and caustic soda than standard dye. What’s more: 20% of the brand’s products are made from deadstock or vintage fabrics that are then turned into new items. They don’t stop there, either. The brand’s shipping materials even fit within the sustainable framework by leveraging recycled paper packaging, labels made from recycled plastic bottles, and 100% compostable polybags. Size-inclusive denim brand Warp + Weft is another company that has found a way to reduce the environmental impact of denim production. 
It was recently reported that the brand has saved over 572 million gallons of water thanks to its eco-friendly production methods.

Sarah Ahmed, the founder of Warp + Weft, said that for her company, sustainability has been the culmination of state-of-the-art vertical integration and data-driven design that helps offset future inventory risk, all viewed through the lens of a more holistic approach to denim production. Translation: sustainability doesn’t have to be an undertaking that breaks a brand’s bottom line.

So what does all of this mean? Laura Alexander, the founder of sustainable marketplace Brightly, says it’s not too late for brands to get on board with more eco-friendly practices around denim, but that part of the responsibility falls on the consumer. "Shoppers should support smaller brands who start off sustainably from day one and demand change from big brands who need to be held accountable," she said.

At the end of the day, shoppers vote with how they spend their dollars. If sustainability is what consumers want, they’ll need to be willing to pay for it.
27d283bc857484170b61cbcf7fc25f18
https://www.forbes.com/sites/kaleighmoore/2019/09/04/why-more-retail-brands-are-launching-visual-search-tools/?sh=215528c41bda
Why More Retail Brands Are Launching Visual Search Tools
Why More Retail Brands Are Launching Visual Search Tools

To meet the demands of modern online shoppers, visual search is becoming more popular. (Credit: Getty)

The way shoppers look for products online is changing—especially among younger consumers. Visual search, which is search based on images rather than text, is on the rise. The reason: it helps consumers answer questions that are hard to put into text-based searches, such as "What item pairs best with these shoes?" or "Where can I find a similar jacket?" Data shows that 62% of Millennials want the ability to use visual search over any other technology, while Gartner research predicts that 30% of all searches will be "queryless" as soon as 2020.

As consumer habits shift around online product discovery, retailers are taking note. The visual search market is expected to exceed $25 billion this year, and retailers that are early adopters of visual search are projected to increase their digital commerce revenue by 30% by 2021.

"Visual search is like a Swiss Army knife," said Ashwini Asokan, founder of Vue.ai and Mad Street Den. "There are so many things you can do with it. Investing in the right image recognition tools should be a top priority for online retailers."

While visual search has been popular in the clothing vertical for several years now, brands in other industries are hopping on board with the trend as well. Glasses USA, for example, an online prescription eyewear company, recently announced its mobile visual search tool called "Pic & Pair." The tool works by allowing customers to upload or take an image of a product; based on that information, it then showcases similar-looking products available on the company’s website. The digital optimization team at Glasses USA started developing this resource soon after discovering that customers who used the brand’s search function spent six times longer on the site and were five times more likely to buy than those who didn’t use the search tool at all. In short, giving visitors an easier, more frictionless product search experience meant a much higher likelihood of sales.

The functionality of the resource is rooted in artificial intelligence that assigns detailed textual tags to the brand’s inventory (which includes more than 10,000 items). When a customer uploads an image, the AI-powered image recognition software quickly finds identical and similar products and spotlights them for the mobile searcher. This speeds up the process when customers are looking for a specific product but don’t want to invest time looking at hundreds of search results.

"Customers often have a clear idea of what they want, but have no clue how to look for it or describe it," said Glasses USA CEO and co-founder Daniel Rothman. "With this tool we’ve simplified the process by working bottom-up and having customers show us what they want—and we’re offering customers the retail experience of a sales clerk’s assistance without having to leave their homes."

Again, Glasses USA isn’t the only brand investing in visual search tools.
Along with tech giants like Amazon, Pinterest, and Google, large fashion retailers like Alibaba, Neiman Marcus, ASOS, and Nordstrom have already been finding success with visual search tools as well. Some use on-site tools, while others have built this functionality into their branded mobile apps.

John Xiao, Vice President of Technology at Nordstrom, shared that Nordstrom’s visual search tool not only aids customers with product discovery by efficiently sifting through the brand’s extensive catalog, but has also been extremely accurate in helping customers spot the exact products they’re searching for. "Fashion is all about the visual," Xiao said in a BizTech interview. "If the customer has a product in front of them, the best way to help them find it in your catalog is visual search."

Others working in the online sales environment believe that visual search will continue to rise in popularity, but that in order for it to achieve its full potential, brands will need to educate their customers on how to use it to their benefit. "Visual search has been around a long time, but I think the technology is only just catching up with the ambition," said Joey Moore, a product marketing expert.
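For readers curious about what sits under the hood of tools like these, the common pattern is to map every catalog image to a fixed-length embedding vector with a vision model and then answer queries via nearest-neighbor search over those vectors. The sketch below is a minimal illustration of that general pattern, not the actual stack of any retailer named above; the embed_image stub and the random catalog vectors stand in for the output of a real trained image model.

```python
import numpy as np

EMBEDDING_DIM = 512
rng = np.random.default_rng(seed=42)

def embed_image(image_bytes: bytes) -> np.ndarray:
    """Stand-in for a real vision model that maps an image to a unit-length vector."""
    local = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    v = local.normal(size=EMBEDDING_DIM)
    return v / np.linalg.norm(v)

# Index the catalog once: one normalized embedding per product image.
# (The item IDs are illustrative; a real catalog would embed actual photos.)
catalog_ids = [f"frame-{i:05d}" for i in range(10_000)]
catalog_vecs = rng.normal(size=(len(catalog_ids), EMBEDDING_DIM))
catalog_vecs /= np.linalg.norm(catalog_vecs, axis=1, keepdims=True)

def visual_search(query_image: bytes, top_k: int = 5) -> list[tuple[str, float]]:
    """Return the top_k catalog items most visually similar to the query image."""
    q = embed_image(query_image)
    scores = catalog_vecs @ q  # cosine similarity, since all vectors are unit length
    best = np.argsort(scores)[::-1][:top_k]
    return [(catalog_ids[i], float(scores[i])) for i in best]

print(visual_search(b"customer-uploaded-photo", top_k=3))
```

In production the stub would be replaced by a trained CNN or vision transformer, and the brute-force dot product by an approximate nearest-neighbor index once the catalog grows beyond a few hundred thousand items.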
eb9dbe71e45ba60a38d17e19377b854d
https://www.forbes.com/sites/kaleighmoore/2020/07/15/retail-subscriptions-thrive-during-covid-19/?sh=1d04ceb32a0b
Retail Subscriptions Thrive During COVID-19
Retail Subscriptions Thrive During COVID-19

Subscription-based offerings are on the rise during the pandemic. (Credit: Getty)

A new study from CouponFollow shows that during the COVID-19 crisis, many US consumers have leaned into subscription-based products—some for the first time. Of over 1,000 shoppers surveyed, one in five had purchased a subscription box to have products on hand during the pandemic. The survey also showed the most popular subscriptions were HelloFresh (21%), BarkBox (20%), Blue Apron (19%), and Dollar Shave Club (18%).

Retailers are taking note of this shifting behavior, and data projects that by 2023, as many as 75% of direct-to-consumer brands will have a subscription-based offering. Subscription growth is reportedly on the rise across industries and verticals, too.

"At ReCharge, we’ve seen gross processing for subscription merchants up double digits during COVID-19," said Luke Retterath, VP of Marketing at subscription billing platform ReCharge Payments. "While some verticals have certainly benefited more than others, we’ve seen positive performance across nearly all sectors. Additionally, we saw a large uptick in new subscription business created as businesses moved online at an accelerated pace in April and May."

This was certainly true for CBD company Equilibria: demand for its subscription-based offerings shot up 100% beginning in March. Grounds & Hounds Coffee Co., which offers coffee subscriptions, also saw an uptick during the pandemic: its monthly subscribers grew by 35% when shelter-in-place mandates went into effect. So what drove this increased demand? Grounds & Hounds founder Jordan Karcher shared that while they didn’t introduce any special pricing or promotions around their subscriptions, they did put a greater emphasis on the offering, with marketing messages focused on the subscription’s savings, convenience, and flexibility—and it appears to be resonating.

BarkBox is another retailer whose subscription-based offerings are thriving during COVID-19. The company has found that organic social posts tapping into viral pop-culture references perform just as well as paid content, driving sales to the point of selling out of certain products. For example: when the team noticed an excess of toys in a cat design resembling Carole Baskin of Netflix’s popular show Tiger King, they created an organic post about "Cool Cat Carole" that garnered over 238,000 impressions across social channels, attracted new subscribers, and ultimately resulted in the toy selling out.

Still, in other cases, subscriptions are first-time offerings for brands using subscription boxes as a workaround to business models that are no longer viable—like in-person marketplaces. This was the case for Bide Market, which pivoted from an event-focused marketplace model to launching Bide Box, a seasonal online subscription featuring a collection of eco-friendly and fair trade products from would-be market vendors, shipped straight to shoppers’ homes.
“The pandemic deeply affected our business model, and I had to think fast on how we could continue our mission in supporting our incredible brand partners,” said Parisa Morris, founder of Bide. “So far, the response to this new subscription offering has been great.”

So will this growth trend continue within the world of subscriptions in the coming months? Tien Tzuo, cofounder and CEO of Zuora, believes it will. "Subscriptions continue to deliver above-market growth," he said. "If these moments of time tend to accelerate underlying trends, we believe the current crisis will only accelerate the shift of the modern global economy towards digital services and subscription models."
90c5c3c99565b91f22406d86cf71ff92
https://www.forbes.com/sites/kaleighmoore/2021/02/23/the-rise-of-curation-how-online-retailers-are-cutting-through-the-noise/?sh=26d47a726bb1
The Rise Of Curation: How Online Retailers Are Cutting Through The Noise
The Rise Of Curation: How Online Retailers Are Cutting Through The Noise

Tiny Bodega’s curated snack box is one example of the curation approach used to aid in product discovery. (Credit: Tiny Bodega)

In the world of online retail, a new trend is emerging—and it centers around the theme of curation. Look around the internet right now and you’ll find more and more curated collections of products packaged into box-style offerings—from snacks and drinks, to cosmetics, to geographically themed gifts and beyond. Data from McKinsey reinforces this: their research found that "curation subscribers," or subscribers who prioritize personalized, curated product selections, make up as much as 55% of all box-style subscriptions.

If you ask Andrea Hernández, creator of the food and beverage trend newsletter Snaxshot, this trend is a reaction to market saturation and the constant influx of new brands, as well as to experiences like "one-stop online shops," whose massive product catalogs can overwhelm shoppers. "To me, the curation approach is about removing friction around the discovery process," Hernández said. "It’s also why you see brands like Pepsi trying their hand at direct-to-consumer retail by bundling their products into kits based on utility, like they’ve done with Pantry Shop."

Overwhelm is indeed a problem when it comes to shopping online: consider that a search query for a single product can generate more than 200,000 results. With so many near-identical products to choose from, how is a shopper supposed to decide what to buy?

Enter curation, a buzzword that’s cropping up more and more of late. Within the online shopping landscape, the curation approach is evidenced by offerings like Tiny Bodega’s curated "discovery boxes," which are selections of culinary items organized by themes like dinner, snacks, and brunch. Similarly, PopUp Grocer recently launched a Nordstrom collaboration wherein founder Emily Schildt spotlights a curated selection of her favorite 150 food and beverage brands. Influencers are coming into the fold as well (and bringing their engaged, loyal followings with them). Curated health food site BubbleGoods, for example, promotes a collection of wellness experts who share their favorite products available for purchase on the site.

Then there’s the gifting category, where curated collections of goods make for easy, done-for-you gifts by mail, a segment that has seen a surge of demand in recent months during COVID-19. In fact, of more than 1,100 consumers surveyed about their shopping habits over the holiday season in January 2021, 45% said they bought most of their gifts online.

My Trove Box, founded by interior designer and marketer Karin Srisilpanand, is one example of the curated gift box approach. Its curated selection of home decor items focuses on supporting women- and minority-owned artisans and on items with an interesting backstory, often with ties to the makers and the sustainable practices behind their creation. "There are existing lifestyle or home decor subscription boxes that often include a lot of filler items that either find their way to some dark drawer or get tossed out," Srisilpanand said.
“We wanted to diverge from that model where value equates to more ‘stuff.’ It isn’t just about filling our home with decor either, but to do it in such a way where we are being socially responsible, and mindful of its form and multiple functions.”

Another is Bestowal Gifts, which offers a variety of curated collections based on various themes but also has a custom-made option wherein curated products can be selected and then outsourced for fulfillment. It’s become a popular marketing tactic for product discovery and brand awareness, used by publicists and public relations teams alike. Teams doing pro bono work like PR4Good leverage curated Bestowal Gift boxes to promote female- and BIPOC-owned startups. "We all would look at these large corporate mailers and think: there has to be a better way," said Morgan Bellock, publicist and member of PR4Good. "We wanted to create a mailer with less waste and less filler that would spotlight these smaller brands we’re passionate about, but that would also be relevant."

As the online shopping environment grows more crowded with each new brand launch, will curation be the key for brands working to get in front of new buyers, build brand awareness, and ultimately drive sales? Only time will tell—but if nothing else, it certainly seems to help new brands gain a competitive foothold.
0e9fa5fc2dd86725d0b23007a01192e5
https://www.forbes.com/sites/kalevleetaru/2016/03/03/using-googles-deep-learning-ai-to-geolocate-global-news-imagery/
Using Google's Deep Learning AI To Geolocate Global News Imagery
Using Google's Deep Learning AI To Geolocate Global News Imagery

A cyclist rides past Google Inc. offices inside the Googleplex headquarters. (Michael Short/Bloomberg)

Two weeks ago researchers from Google and RWTH Aachen University unveiled a paper detailing a new deep learning neural network system called PlaNet, capable of "superhuman" accuracy in taking an arbitrary photograph and estimating the geographic location it depicts anywhere on earth. In essence, by training the system on a massive archive of geotagged imagery from across the entire world, the system learns what each location on earth looks like and is able to estimate the likely location of a new image handed to it, even if the image is taken from an entirely new angle in different lighting with a different foreground. In a sign of the scale that modern deep learning systems operate at, the researchers used two different approaches, one trained on 91 million images from across the web and a second trained on 490 million public images from the Google+ platform.

While it is unclear whether it uses the same algorithms, Google’s new Cloud Vision API service offers the same capability to take an arbitrary image and estimate the geographic location it features, down to the precision of a street corner in some cases, anywhere in the world. To see what this might look like in a real-world application, a total of 19,635,176 images found in local, regional, national and international news coverage from every country in the world from December 28, 2015 to March 2, 2016 (just over two months) were processed through the Cloud Vision API. These images represent a randomized subset of the total universe of all global news imagery published during that two-month period.

Of those 19.6 million images, a total of 336,764 (1.7%) were recognized by the Vision API as depicting a known geographic location, featuring 36,691 distinct locations on earth. The final results can be seen in the map below. Click on the image to open the interactive map, which allows you to click on any location and see up to the first 50 images featuring that location; you can click on any of those images to open the article the image was found in. Zoom into a major city like Washington DC, New York City, London or Paris and see the incredible street-level precision of the Vision API, which in many cases recognizes individual buildings and street corners across the city.

Map of locations recognized from worldwide news imagery December 28, 2015 to March 2, 2016 using the Google Cloud Vision API (click on map to open interactive clickable map to see the locations featured). (Credit: Kalev Leetaru)

African media tends to emphasize imagery of individual people, such as political leaders, over imagery of places, presenting fewer opportunities for the algorithms to identify likely locations in the backgrounds and leading to a relative scarcity of matches above. This may also reflect less available geotagged imagery of the continent that can be used for training.

While most well-traveled people would likely be able to recognize an image of the Eiffel Tower or the Washington Monument, the map above shows the ability of neural networks to absorb hundreds of millions of images depicting scenes from around the world, to learn from all this imagery what daily life looks like across the planet, and to use that knowledge to estimate the location of new images.
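For those who want to experiment with this capability themselves, the per-image call behind this kind of analysis is straightforward. The following is a minimal sketch using the google-cloud-vision Python client; it assumes the package is installed and Google Cloud credentials are configured, and the filename is illustrative. (GDELT’s bulk pipeline is of course far larger, but each image ultimately passes through a landmark detection request along these lines.)

```python
# Minimal sketch: ask the Cloud Vision API which landmark, if any, an image
# depicts, and print the estimated coordinates.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("news_photo.jpg", "rb") as f:  # illustrative filename
    image = vision.Image(content=f.read())

response = client.landmark_detection(image=image)

for landmark in response.landmark_annotations:
    # Each annotation carries a place name, a confidence score, and one or
    # more latitude/longitude pairs for the location the image depicts.
    for location in landmark.locations:
        print(landmark.description,
              f"score={landmark.score:.2f}",
              location.lat_lng.latitude,
              location.lat_lng.longitude)
```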
Even a large team of human experts would have difficulty recognizing images of obscure buildings or remote town squares, yet plenty may be seen in the map above. Putting this to work in a real-world context, imagine the ability to triage the stream of images emerging from a war-torn area or a city struck by a natural disaster and construct a live map of the state of the area that can be used for disaster response.

The Cloud Vision API is still in beta and many images were processed while the API was in alpha and pre-alpha release. Recognizing images from almost every corner of the earth is an extremely difficult task and you will undoubtedly find errors as you click around the map above. In some cases this may also be due to an image in an article having been changed since the article was published, while the URL remained the same. Yet, despite these limitations, the map above represents a remarkable look at the future of deep learning and neural networks.

The ability to process more than 19.6 million images from news coverage across the world and recognize the geographic location in the background, down to the level of a street corner or building anywhere on the face of the earth, at a speed of just 1-3 seconds per image, offers an incredible look at the "superhuman" future of artificial intelligence, one that will only grow more powerful as the universe of data available to train these systems grows.

I would like to thank Google for the use of Google Cloud resources including BigQuery and the Cloud Vision API and the Cloud Vision API team, along with CartoDB for the use of their online mapping platform.
ebd658fbe2c0c5f55e4eb82623b7f6d4
https://www.forbes.com/sites/kalevleetaru/2016/03/19/are-we-mining-data-instead-of-answering-questions/
Are We Mining Data Instead Of Answering Questions?
Are We Mining Data Instead Of Answering Questions?

A portion of the million Facebook photographs collated by artists Paolo Cirio and Alessandro Ludovico displayed at the Big Bang Data exhibition at Somerset House in December 2015. (Peter Macdiarmid/Getty Images for Somerset House)

One of the most striking things about the "big data" world of today is the focus on mining data over answering questions. So many projects start by asking what data is easily obtainable and which tools are most easily amenable to working with that dataset, rather than asking what question the analysis is trying to answer. The result is a landscape littered with data-intensive projects that prioritize technical prowess of execution over the robustness of the findings derived from the analysis. What does this mean for the future evolution of the field?

Perhaps the biggest challenge of the big data revolution today is that it is being driven to a large extent by computer science, rather than by the disciplinary fields whose questions it is attempting to answer. This creates a world in which, for example, the majority of sentiment mining tools come from computer science rather than psychology. The focus shifts to algorithms and data rather than questions, with the end result that many tools today rely on the same approaches developed half a century ago for punch card computers.

A typical sentiment analysis dictionary today might be induced from vast volumes of social media data collected over a brief period of time, learning that the words "dentist" and "economist" tend to be associated with highly negative emotions, or that President Obama is associated with either very positive or very negative emotions, depending on the time frame over which the training data was compiled. Tools often build upon previous dictionaries, many of which draw from the emotional connotations of the 1980s and 1990s, leading to poor fits for social media content that emphasizes abbreviations like "lol," emoticons and emoji.

Alternatively, a researcher might use Amazon Mechanical Turk to ask users to score a set of documents by the level of "anxiety" they provoke, averaging hundreds of thousands or millions of ratings together to build a new "anxiety" sentiment analysis tool. Yet, without a psychological basis or understanding of how "anxiety" is defined or conceptualized, and without taking into account socio-cultural differences in the kinds of language and concepts that might trigger anxiety in populations across the world compared with the particular demographics of the raters used in the project, it becomes difficult to understand how to utilize the resulting dictionary and what its biases and limitations may be, compared with a dictionary developed from the top down, starting with a precise definition of what particular kind of "anxiety" the tool should measure.

Analyses today tend to start with a dataset and emphasize the scale of the dataset and the complexity or novelty of the algorithms used – the larger the dataset and the more complex the method, the more likely an analysis is to be published in an academic journal or achieve viral status online. This in turn has created an arms race in which the numbers reported are often not the actual numbers used in the analysis itself. A few months ago I saw an analysis that claimed to have performed pattern mining on a ten petabyte dataset. Impressed, I asked what tools the researchers had used to tractably perform complex pattern extraction on a dataset of that size.
The answer was that they had performed a simple numeric range search to extract a one gigabyte subset on which they actually performed their analysis. While the analysis was reported as an exploration reflecting the underlying trends of ten petabytes of data, the researchers’ conclusions were in fact drawn from just a one gigabyte subset, carefully constructed to yield results likely to be highly mediagenic.

This illustrates one of the fascinating paradoxes of the emerging "big data" world: despite having exponentially more data available at our fingertips, the amount of data we actually incorporate into our analyses has not increased at the same rate. In the past a researcher might have incorporated one gigabyte of a ten gigabyte dataset into an analysis, while today that researcher might extract one gigabyte from a ten petabyte dataset. In short, the "big data" world has resulted in the accumulation of unimaginable volumes of data, but much of the analysis being done is still locked in the world of "small data." In some ways, the representativeness of our analyses may actually be decreasing in the big data era as we look at smaller and smaller subsets of larger and larger datasets.

At the same time, as datasets have grown larger and more complex than any human can reasonably inspect by hand, and as new classes of highly uneven data like social media have come into being, our understanding of the nuances and biases of the data we use has decreased. An Excel spreadsheet of a few hundred rows can easily be manually reviewed by a human prior to analysis to fix typographical errors, address missing or corrupted values and look for other outliers. On the other hand, a multi-petabyte database of trillions of rows can only be examined through automated filtering tools. Spending weeks or months carefully examining a dataset for errors and potential bias is a difficult sell in a world where few academic journals publish characterization analyses and researchers in the commercial world find it difficult to spend that much time on bias studies that have only long-term payoffs. The result is that datasets become gold standards influencing the findings and theories of countless fields of study with little understanding of their geographic, socio-cultural and other biases and limitations.

Even the definitions of concepts like "influencers" and "active users" have become blurred in the online world. I recently saw a social media analysis that presented Justin Bieber as the most influential person worldwide with respect to the Syrian civil war. While it may be the case that his social media posts garner considerable online visibility and discussion, it is highly unlikely that a tweet by Mr. Bieber with his plan for peace in the Middle East would dramatically alter the landscape of the current conflict.

Driving this is the computer science world’s traditional focus on correctness of output and execution over correctness of fit. If an algorithm compiles and executes on a given dataset without error, there is a tendency to trust the results without asking whether they logically make sense. As an example, I once had a doctoral student on loan to me from one of the top data mining faculty and asked him to write an algorithm that could extract a wide variety of date/time information from a diverse collection of text that included high amounts of OCR error.
A few weeks later the student proudly presented his final results, having extracted 40 million dates from a test corpus of just 1 million words. The student argued that because his code executed without error it must be correct, and it took almost an hour of discussion for him to finally recognize the logical impossibility of finding 40 date references for every word in the collection. Of course, on the other hand, there are myriad counter-examples of human analysts discarding legitimate machine findings when they disagree with intuition.

Perhaps the greatest challenge facing the big data world is the recognition that data analysis is not the same thing as question answering. In the political science world, human analysts have long been used to read news articles and compile quantitative catalogs of the global activity described within. On the surface, it would seem that using vast teams of humans would yield nearly perfect quality results. Yet, in practice humans are actually quite poor at such quantitative tasks, with intercoder reliability (whether two different people reading the same news article will catalog it the same way) and intracoder reliability (whether the same person will catalog an article the same way when seeing it again a few days later) presenting huge challenges to robust results. It is also difficult to assemble teams with extensive language expertise to catalog material across tens or even hundreds of languages on an ongoing basis.

Moreover, humans tend to resolve task ambiguity in highly distinct ways that draw heavily on their individual backgrounds and experiences. Asking a team of humans whether an article about Mexican drug cartel violence should be cataloged under "military violence," due to the military nature of the equipment the cartels use and their frequent clashes with government troops, or "common crime," due to the violence being committed by criminal actors, will typically yield a wide range of responses.

Similarly, computerized sentiment analysis is often criticized for failing to recognize sarcasm or humor. Yet, recognizing sarcasm requires having sufficient background knowledge to understand that a given statement is in fact false, while a comment one person finds hilarious might be deeply offensive to another. In a past project I was involved in, we had a team of university students score a set of historical newspaper editorials as either positive, negative or neutral in tone. The students failed to recognize many of the known highly sarcastic editorials from previous decades because they lacked the historical knowledge to understand that the statements being made were obviously false, and simply took the articles at the positive face value of their wording, much as a machine would.

Map of all locations mentioned in New York Times and BBC news coverage (all languages) during March 2015 as monitored by the GDELT Project. (Credit: Kalev Leetaru)

Yet, let us assume for a moment that with a large enough team and a sufficient reconciliation workflow, one could achieve 100% accuracy at codifying every activity mentioned in the New York Times, which has been a favored source for political event coding over the decades. Despite superb coverage of global events, the Times simply does not have the reporting staff or editorial bandwidth to report on every single micro-level event that occurred across the entire planet today.
Researchers interested in micro-level protests and routine day-to-day activities in the UK, for example, would likely turn to a British news outlet like the BBC. Visualizing this, the map above compares all of the locations mentioned in New York Times (green) versus BBC (yellow/orange) articles monitored by the GDELT Project during March 2015, in 15-minute increments; the BBC data includes all of its output across all of its local-language editions. While both sources catalog major events across the world, neither perfectly covers the entire planet at micro resolution. Looking at the map above, it is clear that even a 100% accurate codification of the New York Times will not yield a 100% accurate codification of global society.

Therein lies one of the promises of big data: that by blending together the collective output of hundreds of thousands of worldwide news outlets, including local outlets in their local languages, one can achieve a collective view of the world that is far more representative and holistic than any single outlet can offer by itself. While messier and noisier than small-volume human-coded data, big data offers the ability to build composite views of complex environments, looking across massive numbers of sources and languages and triangulating across their often disparate views. In contrast, much of the investment over recent years has focused on high quality processing of a relatively small number of outlets using either humans or manually tuned automated coding systems custom built primarily for Western and English-language outlets. Again, the computer science mindset has favored software and algorithmic development over investment in data collection and improved understanding of how to access local events and perspectives from the non-Western (and often non-Internet-connected) world.

There has also been an overemphasis on surface over deep analytic techniques. The digital humanists I speak with frequently lament the movement towards surface techniques like word counting and ngrams that have come to dominate areas of the digital humanities landscape. To a literary scholar whose focus is on the thematic undertones of a work, or a historian tracing the evolution of a particular worldview over time, counting words and phrases is starkly at odds with the analytic resolution they require. As just one example, to disambiguate a word like "Paris" – to determine whether it refers to Paris, Illinois, Paris, France, the socialite Paris Hilton or even the Paris Hilton hotel – requires being able to access the surrounding context, stretching sometimes for pages around the reference. In this way, ngrams are woefully mismatched for the kind of context-rich analyses that define the humanities.

Putting all of this together, we see a "big data" world today being driven by datasets and algorithms and drawing its inspirations and mindsets from computer science, frequently emphasizing technical prowess over goodness of fit. Even as we have access to more data than ever before, the actual analyses we perform with all of that data tend to access only minute subsets, meaning that as a percentage of available data our analyses are actually becoming less representative.
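As a toy illustration of the "Paris" problem just described, the sketch below contrasts what an ngram counter sees (a bare token) with a disambiguator that inspects a window of surrounding context. The cue lists are deliberately crude and purely illustrative; real systems rely on gazetteers, entity models and much richer context.

```python
import re

# Toy disambiguator: look for cue words in a window of text around each
# mention of "Paris". The cue lists below are illustrative only.
CUES = {
    "Paris, France":   {"france", "french", "seine", "eiffel"},
    "Paris, Illinois": {"illinois", "edgar county"},
    "Paris Hilton":    {"hilton", "socialite", "celebrity"},
}

def disambiguate(text: str, target: str = "Paris", window: int = 80) -> list[str]:
    """Label each mention of target using cue words found near it."""
    labels = []
    for m in re.finditer(target, text):
        context = text[max(0, m.start() - window): m.end() + window].lower()
        label = next((name for name, cues in CUES.items()
                      if any(cue in context for cue in cues)), "ambiguous")
        labels.append(label)
    return labels

print(disambiguate("The mayor of Paris spoke near the Seine on Tuesday."))
# -> ['Paris, France']; an ngram counter would record only the bare token "Paris"
```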
Most importantly, as the field of big data evolves and matures, it must recognize that accuracy of analysis is very different from accuracy of result – a 100% accurate codification of a single dataset may actually yield far less useful insight into the question of global stability trends than a noisier and more error-prone analysis that incorporates hundreds of thousands of local sources in local languages to build a composite view of global society. This is the promise of big data – the ability to rise above the single-source analyses of the past towards composite understandings of society. But to do so we must take the time to understand the nuances of our new datasets and to recognize that answering a question of interest may take more than plugging a dataset into an algorithm.
2b58b6fcec08d2374dc395fcded853f6
https://www.forbes.com/sites/kalevleetaru/2016/06/20/a-look-around-the-world-at-how-news-websites-are-adopting-googles-amp-html/
A Look Around The World At How News Websites Are Adopting Google's AMP HTML
A Look Around The World At How News Websites Are Adopting Google's AMP HTML

A smartphone being used to take a crowd photograph at Euro 2016. (Jean-Christophe Magnenet/AFP/Getty Images)

Earlier this year I explored the state of journalism on the web, performing a detailed technical analysis of the myriad tracking beacons, advertising inserts, dynamic content and other technologies bogging down modern news websites. A key finding was that news websites rate among the worst pages on the entire web in terms of user experience. One particularly egregious news site made more than 6,500 requests and consumed more than 100MB of bandwidth just to display a single article during tests. As the web has rushed towards "mobile first," a number of initiatives seek to help websites optimize themselves for the mobile web, perhaps most visibly Google’s Accelerated Mobile Pages (AMP) Project. How is AMP’s adoption faring across the world?

Over the last several months I’ve spoken with a wide range of web designers and publishers in the journalism community, ranging from tier one international outlets to small local outlets, about how they are adapting to the mobile revolution. One of the most striking trends in those conversations has been the number of outlets arguing that responsive design is the same as mobile optimized, and that by simply using responsive templates for their websites they were already completely mobile first.

The problem with this belief is that responsive design refers only to the appearance of a page in a mobile browser, not the resource requirements needed to render that page. In short, a responsively designed page can render beautifully on a mobile device but still consume massive amounts of bandwidth and battery life and make the device perform sluggishly. The advertisements, tracking beacons and dynamic JavaScript code all correctly adapt to the reduced screen size, resizing and relocating on the page to display properly, but they are still ultimately all there, lurking in the background and bogging the device down to a standstill. Mobile optimized, on the other hand, refers to the absolute minimization of all page elements to reduce bandwidth and battery drain. Such pages tend to heavily restrict the use of dynamic content and limit advertisements and tracker beacons to those provided by companies that offer mobile versions of their platforms, reducing the size of advertisements, eliminating or reducing beacon heartbeats and placing a minimal burden on the client browser.

Facebook launched its Instant Articles initiative last year, which allows news outlets to publish their content directly onto Facebook’s platform and leverage Facebook’s existing content display infrastructure to minimize rendering time. However, it is limited to displaying pages within Facebook’s own ecosystem and cannot be used by news outlets to optimize pages hosted on their own websites. In contrast, Google’s Accelerated Mobile Pages (AMP) Project is an open initiative consisting of a set of design principles and JavaScript libraries, hosted on Google’s worldwide CDN, that allow publishers to offer mobile-optimized versions of their pages on their own websites. Already a wide array of CMS platforms, including WordPress, offer AMP plugins that automatically provide AMP-compliant pages without user intervention. An AMP-compliant news website embeds a special "link" HTML tag in the page that provides the URL of the alternative AMP page.
The actual AMP page closely resembles the sparse, minimalist HTML of the 1990s, when every HTML tag mattered and coding placed an emphasis on minimization over capability. This means that AMP pages dispense with the enormous bloat of typical responsive HTML pages, allowing them to render lightning fast with a minimal bandwidth and battery footprint.

To illustrate the enormous difference between mobile-optimized responsive design and mobile-first technologies like AMP HTML, a brand-new Dell desktop running Windows 10 was used to access the mobile version of a randomly selected article from the New York Times’ website using the latest version of Google’s Chrome browser set to emulate an iPhone. Displaying the page consumed 22% of the system’s CPU and made 310 requests to 41 domains, including sites like "mookie1.com," "moatads.com," "iasds01.com," "keywee.co," "visualdna.com," "beacon.krxd.net," and "analytics.twitter.com." While the mobile-optimized version displayed perfectly in the emulated iPhone display, it still consumed a fairly large amount of resources just to display a single article.

In contrast, the BBC uses Google’s AMP HTML technology to provide its mobile experience. A randomly selected BBC page was displayed using the same setup and consumed just 2% CPU and made only 37 requests to 8 domains, all but three of them BBC domains. It also completed loading in just 0.8 seconds, compared with 2.9 seconds for the Times article, despite the two being of similar length. In other words, while both the New York Times mobile-optimized page and the BBC AMP page displayed beautifully on the emulated iPhone and offered a perfect visual experience, the Times’ mobile article came at considerable cost in terms of consumed bandwidth, memory and battery life, while the BBC AMP page absolutely minimized resource consumption. It is clear that responsive design does not equate to mobile optimized – to be truly mobile friendly, pages must absolutely minimize the resources required to display the page, not just ensure it fits properly on a smaller screen.

Since its official debut last October, how far has AMP spread? Beginning this past April 20th, the GDELT Project has been scanning all online global news media it monitors for the existence of AMP alternate versions of all articles. GDELT focuses exclusively on hard news, ignoring sports and entertainment, but by virtue of tracking coverage in all countries worldwide across 65 languages it reflects a strong cross-section of the state of news media today. Of all worldwide online news coverage it monitored over the last two months, 37% of articles offered an AMP version. AMP appears to have spread throughout the world, from Kenya’s Capital FM to Sudan’s Alnilin, Kyrgyzstan’s kp.kg to Venezuela’s Globovision.

The map below visualizes the percent of all coverage monitored by GDELT from each country during the last two months that offered a mobile-optimized AMP version. In some countries a relatively small number of high-volume publishers offer AMP versions of their pages, skewing the overall nationwide percentages. One driving force in AMP adoption appears to be the large number of WordPress-powered news websites offering AMP editions, meaning countries with high densities of WordPress-based news sites often exhibit higher AMP densities. Several WordPress plugins are available that automatically generate AMP versions of all pages, requiring no further human intervention.
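Checking a page for an AMP edition is simple, because the AMP discovery mechanism is itself just HTML: a page advertises its alternate version through a link tag of the form <link rel="amphtml" href="...">. The following is a minimal sketch of such a check in Python, assuming the requests and beautifulsoup4 packages are installed; the URL is illustrative, and a production scan like GDELT’s would of course operate at vastly larger scale.

```python
# Minimal sketch: fetch a page and look for the AMP discovery tag,
# <link rel="amphtml" href="...">, which points to the mobile-optimized
# alternate version of the article.
import requests
from bs4 import BeautifulSoup

def find_amp_version(url: str) -> str | None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    link = soup.find("link", rel="amphtml")
    return link.get("href") if link else None

print(find_amp_version("https://www.example.com/news/article"))  # illustrative URL
```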
Percent of all online news coverage by country monitored by the GDELT Project April 20 to June 20, 2016 that offered an alternate mobile-optimized AMP version. (Credit: Kalev Leetaru)

AMP is seen to have spread worldwide, with the notable exceptions of Africa, the Middle East and Central Asia and a few countries in Southern and Northern Europe. The strong showing of AMP in the Asia-Pacific region is likely a response to the high penetration of mobile ad blockers there. According to recent statistics, 20% of all smartphone users worldwide use ad blockers, but those numbers vary by country, from 2.2% of American smartphone users to 36% in Asia-Pacific and 66% in India and Indonesia. There is thus considerable incentive for publishers to adopt technologies like AMP that speed up the mobile browsing experience and make ads less intrusive, such that users may feel less inclined to enable ad blocking.

While the map above reflects only the news coverage monitored by the GDELT Project, and thus excludes certain industries like sports and entertainment news, it nonetheless offers a useful barometer of the spread of mobile-optimized content across the world. In particular, it is clear that websites that merely subscribe to responsive design principles without additional resource minimization will still incur a substantial bandwidth and battery cost to mobile users, even though their pages may display correctly. As the BBC’s website demonstrates, the proper use of mobile optimization technology like Google’s AMP Project has the potential to massively reduce resource consumption and speed page loads many times over. The map above also shows that domestic African media and parts of the Middle East and Central Asia have yet to adopt AMP in meaningful numbers, though as smartphone penetration in those regions continues to grow, AMP usage is likely to grow alongside it.

It is particularly fascinating that more than a third of all news coverage worldwide that GDELT monitors today offers an AMP version, suggesting that AMP is being rapidly adopted as a de facto standard for mobile news publishing. In the end, as the news world finally begins to catch up to the mobile revolution, it is likely that technologies like AMP will reshape how we browse the web while on the go, and perhaps push back against the rise of ad blockers as online advertising and design evolves from resource hog to a transparent and unobtrusive part of a truly "mobile first" experience.

I would like to thank Google for the use of Google Cloud resources including BigQuery, and CartoDB for the use of their online mapping platform. I would also like to thank Felipe Hoffa for his assistance in creating early templates for several of the queries.
850059ee2a3631293556ffbb837eec72
https://www.forbes.com/sites/kalevleetaru/2016/09/24/the-media-got-it-wrong-what-facebooks-video-ads-issue-tells-us-about-big-data-metrics/
The Media Got It Wrong: What Facebook's Video Ads Issue Tells Us About Big Data Metrics
The Media Got It Wrong: What Facebook's Video Ads Issue Tells Us About Big Data Metrics

Mark Zuckerberg speaks during an event at the company's headquarters. (David Paul Morris/Bloomberg)

Over the past month, and especially the past week, there has been a considerable amount of discussion and media coverage regarding Facebook’s acknowledgement that one of its key video display metrics had generated confusion. In short, the Average Duration of Video Viewed was defined in its documentation as "total time spent watching a video divided by the total number of people who have played the video," while in reality the metric actually recorded "the total time spent watching a video divided by *only* the number of people who have viewed a video for three or more seconds." As the advertising community digested this news and Facebook replaced this metric with a new one, it generated a firestorm of criticism about the opaqueness of the metrics that define the online world.

Yet, from the standpoint of someone with a data sciences background, it is difficult to understand what the perceived controversy is about and why many in the advertising community feel misled by Facebook’s documentation. Indeed, a quick perusal of the leading headlines from the last 48 hours finds many describing an almost sinister attempt to mislead and deceive the advertising community. But if one simply looks back to Facebook’s original announcement of the video metrics dashboard, one will find that it prominently and clearly announced this definition from the very first release.

On May 5, 2014, Facebook announced its forthcoming new video metrics dashboard, giving content authors better insights into how their videos were performing on the platform. Right at the very top of the page, Facebook states "a ‘video view’ is defined as a view of three seconds or more and will appear for all videos, including those that come to life as people scroll through News Feed." While at the time this redefinition of a view from YouTube’s 30 seconds to Facebook’s 3 seconds proved controversial, Facebook clearly and unequivocally offered a precise definition that all of its platform video metrics would define a "view" as a duration of at least three seconds.

In particular, watching this story ricochet through the echo chamber of the popular media, one would think Facebook had been secretly trying to inflate the popularity of its video platform and that it had willfully misled users of its dashboard by failing to clarify that a "view" meant 3 seconds or longer. Given all of the calls from industry leaders for greater transparency into how metrics are computed and for external auditing of those metrics, one would again be forgiven for assuming that Facebook had been fudging the numbers.
In reality, the evidence to date paints a picture not of Facebook erring in its computation of the metric, but rather of the advertising community failing to actually read Facebook’s own documentation of what constitutes a "view." Facebook did not respond to a request for comment, but as someone whose work focuses heavily on how we quantize our world into precise measurable metrics, it has been fascinating to see the strong reaction to Facebook’s clarification of its video viewership metric given that it had already precisely defined what constituted a video "view."

The visceral reaction to Facebook’s announcement appears to be a combination of legions of technology journalists simply repackaging viral stories without digging deeper and an advertising community that made wrongful assumptions about what their metrics were telling them instead of reading the documentation. At worst, Facebook is guilty merely of having its documentation spread across two different pages, rather than copy-pasting it as a reminder for the harried analytics professional in too much of a rush to do a quick Google search on what Facebook considered a "view." While others might argue Facebook’s previous definition deviated from accepted industry norms of simply counting every access of a video, again, when it comes to the analytic world one always has to look at the documentation, not make assumptions from one’s personal experience.

This is not a story limited to Facebook. I encounter such definition issues every day when it comes to everything from social media to sensor data. As I’ve written here again and again, the rise of easy access to data and computing tools means people are increasingly using data without taking the time to understand what it is they have in their hands. There certainly is truth to the fact that many data vendors do not provide all of the information necessary to fully document their datasets (for example, how platforms define what constitutes an "active user" is notoriously fuzzy and fluid) or have errors or mismatches between what’s in their data and what the documentation says should be in there. But at the end of the day, the majority of the trouble people run into when it comes to understanding data comes from a failure to carefully examine it before jumping right in and using it in a real-world application.

Perhaps in the future the advertising community will spend a little more time reading the definitions of the metrics they use, and perhaps the broader analytics community will use this as a teachable moment in considering how they communicate their own metrics to their users, to ensure they don’t run into similarly errant assumptions. Always remember that data itself is meaningless without the definition that says what it measures, and when publishing data always make sure those definitions are precise and well documented for your users.
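To make the distinction concrete, here is a minimal sketch, with made-up watch times, of how the two readings of the metric diverge: dividing total watch time by everyone who played the video versus dividing it only by the "views" of three seconds or more, as Facebook’s documentation defined the term.

```python
# One entry per person who played the video (seconds watched; made-up data).
watch_seconds = [0.5, 1.2, 2.0, 4.0, 15.0, 30.0, 45.0]

total = sum(watch_seconds)

# Reading 1: divide by everyone who played the video.
avg_all_players = total / len(watch_seconds)

# Reading 2 (as the metric was actually computed): divide only by "views,"
# where a view is defined as watching for three or more seconds.
views = [t for t in watch_seconds if t >= 3]
avg_per_view = total / len(views)

print(f"average over all players: {avg_all_players:.1f}s")  # 14.0s
print(f"average over 3s+ views:   {avg_per_view:.1f}s")     # 24.4s
```

The same raw data yields a figure nearly twice as large under the second reading, which is exactly why the definition in the documentation matters.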
04e45c1584acc4d3ed89121d2fd43399
https://www.forbes.com/sites/kalevleetaru/2016/12/22/has-the-world-of-big-data-forgotten-africa/
Has The World Of Big Data Forgotten Africa?
Has The World Of Big Data Forgotten Africa?

As Silicon Valley rushes to mine the world’s data, we must not forget that the big data world does not capture the world evenly. Africa in particular presents a glaring blank spot in many of the world’s datasets, but just how bad is this gap?

Mark Zuckerberg, the founder of Facebook, once allegedly told colleagues that "a squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa." Indeed, this appears to have carried through to the design of his platform, with Facebook’s original Trending Topics feed being almost completely devoid of news sources from Africa, meaning that stories from across the continent could not appear to Facebook users until and unless they were covered by the American media, leaving out the majority of the continent’s events. The rest of social media is not much better, with Twitter never really catching on across the continent. This is especially problematic because so much of the field of "social media monitoring" that has come to define the big data revolution is based on the Twitter firehose, and thus misses Africa and parts of the Middle East, blinding us to conflict in the region.

When it comes to web searches, things don’t appear to be any better. The Google Trends team generated a map for me this past June showing how often Americans searched about each country of the world over 2016. Africa stands alone as the least-searched continent by far. These maps starkly illustrate the degree to which people simply aren’t talking about Africa and aren’t interested enough to search about it.

This raises the question of whether the news media is similarly not covering the continent, or whether there is copious coverage that simply isn’t resonating with people. The animated map below shows all of the locations mentioned in New York Times and BBC coverage during the month of March 2015 in 15-minute increments. While Africa certainly does get mentioned, the map makes it clear that it is not a priority for either outlet.

Of course, the web isn’t the only source of news – television news in particular still plays a key role in the US. Yet, as the map below shows, television news is not much better in its coverage of Africa. If you look at the animated map of all locations mentioned each month on American television news, the lack of interest in Africa is even more striking in how rarely most African countries are mentioned – often many months will go by without a single mention of a given country. In fact, the map below bears a striking resemblance to the Google Trends map of the countries Americans search the most about. That doesn’t tell us whether a lack of media attention leads to a lack of search interest or vice versa, but the similarities are remarkable. In short – the media isn’t covering Africa and people aren’t searching about it.

Even the US intelligence community, with its vast resources, appears to have little interest in Africa. Over the period 1994-2004, US OSINT agencies (responsible for monitoring news and other "open" sources around the world) largely observed the continent through the eyes of European media outlets like Agence France-Presse, with little actual monitoring of domestic outlets.
In fact, this is one of the reasons US Government-funded monitoring programs missed the first glimmers of the Ebola outbreak: those early warning signs first appeared in domestic French-language broadcast outlets, which are much harder to monitor than English-language Western online news outlets that can be searched via Google News.

If people aren’t talking about Africa and aren’t searching about it, and Western online and television news media aren’t covering the continent, does this mean there simply isn’t much to cover there? The map below shows every location that the open data GDELT Project monitored news coverage from or about between February and July 2015, colored by the primary language of news monitored from that location. Aside from the Sahara Desert, with its low human population, the map shows that Africa in fact has quite a lot of media, and media-covered events, that are being missed by traditional Western outlets and social media. Indeed, the importance of processing French- and Portuguese-language material to capturing Africa is clearly illustrated below. In short, this map shows us how much news coverage the continent generates that simply isn’t being monitored through the traditional "big data" channels like social media.

Putting this all together, we see that while "big data" offers an incredible glimpse into global society, the continent of Africa is starkly absent, and analyses of traditional large datasets like social media capture the local events and perspectives of African countries very poorly. Yet, with a special focus on Africa and the addition of technologies like machine translation, it is possible to bring the continent back into the view of "big data" analytics. What is needed is a greater emphasis within the data sciences community on ensuring that the datasets we use are geographically representative of the entire world.
0197139a5acff8945bdbf1d18a006d17
https://www.forbes.com/sites/kalevleetaru/2017/02/21/visual-geocoding-a-quarter-billion-global-news-photographs-using-googles-deep-learning-api/?imm_mid=0eddcf&cmp=em-data-na-na-newsltr_20170301
Visual Geocoding A Quarter Billion Global News Photographs Using Google's Deep Learning API
Visual Geocoding A Quarter Billion Global News Photographs Using Google's Deep Learning API Google headquarters in Mountain View. (AP Photo/Marcio Jose Sanchez)

Last March I wrote about an early experiment using Google's Cloud Vision API to perform deep learning-powered geocoding of 20 million global news images. In that experiment I compiled 20 million photographs that had appeared in online news articles worldwide, as monitored by the open data GDELT Project over a period of two months, and ran them through Google's Vision API service, which applies state-of-the-art deep learning algorithms to visually analyze an image much as a human would. The API returns a wealth of data about each image, including a list of objects and activities it depicts, recognizable logos, OCR text recognition in almost 80 languages, levels of violence, an estimate of how "happy" or "sad" people in the photograph appear to be and even the precise location on earth the image appears to depict. It is that last category that is so fascinating when it comes to trying to understand the visual geography of the world's news media. One year later the GDELT Project has now processed more than a quarter billion news photographs from news outlets in almost every corner of the world through Google's API – what can we learn through this deep learning-powered "visual geocoding" of the world's news imagery?

When we talk about "geocoding" we most commonly refer to textual geocoding, in which computer algorithms read through reams of textual documents, identify mentions of location and attempt to disambiguate those mentions back to a precise location on earth. For example, a news article that mentions a new education initiative being launched in "Chicago" would result, after geocoding, in the mention of "Chicago" being highlighted and an estimated centroid of latitude 41.83 and longitude -87.68 being returned, suitable for placing a dot on a map identifying its general location on the planet. Textual geocoding has become the most common application of geocoding since, as implemented by many software packages today, it can be approximated with a simple keyword search, a quick scraping of Wikipedia and rudimentary disambiguation logic (though such approaches, while extremely popular, are severely limited in their accuracy and coverage).

When it comes to imagery, on the other hand, we traditionally talk about "georeferencing," in which the location depicted by the image is already known through other means (for example human analysis or GPS coordinates) and we simply apply those coordinates to situate the image within geographic space. This is because historically we have never had computer algorithms that were accurate and robust enough to take an arbitrary photograph and determine the location it depicts purely from the image itself, without any external cues. Indeed, human analysts struggle tremendously with this task when asked to determine the location of arbitrary images collected from across the entire planet. The rise of scalable production-grade deep learning (neural network) approaches to image analysis over the past few years has yielded algorithms that are capable of precisely this kind of analysis, in which they can take a randomly selected photograph and estimate not only the location it depicts, but the location where the photographer likely stood while capturing the image.
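To make this concrete, the snippet below is a minimal sketch of what a single landmark-detection request looks like using Google's Cloud Vision Python client library; the image URL is hypothetical and the exact client surface may differ across library versions.

```python
# Minimal sketch of Vision API landmark detection; assumes the
# google-cloud-vision library is installed and credentials are configured.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical, publicly reachable news photograph.
image = vision.Image()
image.source.image_uri = "https://example.com/news-photo.jpg"

response = client.landmark_detection(image=image)
for landmark in response.landmark_annotations:
    for location in landmark.locations:
        print(f"{landmark.description}: "
              f"({location.lat_lng.latitude}, {location.lat_lng.longitude}) "
              f"confidence={landmark.score:.2f}")
```

A result only comes back when the model is confident it recognizes the depicted place, which foreshadows the coverage figures discussed below.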
This "visual geocoding" relies exclusively on the image's visual content and does not utilize any external information such as embedded metadata like GPS coordinates or textual image captions. One might naturally ask why visual geocoding is required when modern images typically contain a wealth of embedded information via hidden metadata fields like EXIF. The answer is that, looking globally, just 0.26% of contemporary news images contain GPS coordinates, and when processing historical images, such as those from library archives, none of the imagery contains such enrichments.

Image captions are also far less useful than one might expect when it comes to identifying location. Photographs appearing in news coverage may not contain a caption at all (and those appearing in library archives typically do not), while for those images that do have captions, the captions often describe the contents of the image but offer no insight into its location, such as "Photograph of Facebook's founder speaking at a conference" or "Aftermath of terror attack." Captions of events from other parts of the world can also get the location wrong – last fall a major European newspaper mistakenly ran a photograph of the US Capitol Building captioned as the White House, while a number of images of conflict attributed to Syria and Iraq have actually featured unrest from other parts of the world, based on the street signage, dress, vegetation and other cues (or reverse Google Images searches that returned the original context of the image). Thus, to accurately map images we need to look at the image itself. Deep learning algorithms that rely exclusively on the visual contents of a photograph and determine its location by visual assessment offer tremendous opportunities to georeference the vast firehose of imagery that captures the world each day, and to do so robustly and accurately – potentially even allowing us to authenticate imagery by determining whether it actually depicts the location its caption claims.

Over the course of 2016 the GDELT Project processed more than 234 million images from online news outlets throughout the world through the Google Cloud Vision API, generating more than 1.8TB of JSON data describing them in detail. Of those quarter billion images, the Vision API flagged 3,468,424 (1.5%) as depicting a precise geographic location it was able to confidently identify, featuring 101,090 distinct locations on earth. Click on the image below to launch the interactive map, where you can click on any location to see the first five articles featuring images depicting that location.

Map of locations recognized from worldwide news imagery December 28, 2015 to February 20, 2017 using the Google Cloud Vision API (click on map to open interactive clickable map to see the locations featured) (Credit: Kalev Leetaru)

One of the reasons so few images are geocodable is that much of the world's news imagery doesn't feature precisely recognizable backgrounds. Rather, it tends to emphasize carefully stagecrafted imagery of people, or close-ups of scenes, such as a car crash on the side of a rural road or a burning building, that could happen anywhere in the world. A photograph of a person walking past a nondescript office building, or a close-up of a political leader standing in front of a flag-strewn backdrop, offers little information for a geocoder (human or machine) to work with. Note that you will encounter a certain number of errors in the map above.
Some of these reflect algorithmic errors, but others reflect cases where a news outlet swapped out the original image for a new one while keeping the URL the same. There may also be odd cases where network caches used by GDELT resulted in an incorrect image being passed to the Cloud Vision API. In short, if you see an odd result, try that image for yourself by drag-dropping it onto the Cloud Vision API's interactive demo page.

Not every country is heavily saturated with matches. As I noted last year, African media in particular tends to emphasize imagery of people, such as political and social leaders, over imagery of places, and often prefers tight shots of rural scenes that do not offer sufficient background context to estimate their precise locations. To address this, Google and RWTH Aachen University published a paper last year on a system called PlaNet that offers high-accuracy general locality geocoding, in which indicators such as vegetation, architecture, signage and other visual cues are used to estimate the general locality of an image. These algorithms georeference an image down to a coarse grid cell or set of cells rather than a precise latitude/longitude coordinate, meaning they have far more flexibility in returning results they are more uncertain of, or where they can only pinpoint an image to a general region, whereas the Vision API can only return a result if it is reasonably confident it has identified the image's precise position on earth. Take the example of a close-up photograph showing the entranceway of a building in Washington, DC, where two different buildings in the city have absolutely identical entranceways but are the only two buildings in the world known to have that precise entrance. The current Vision API could not tag that image because it cannot resolve which of the two buildings the image depicts, whereas a locality system like PlaNet could return that the image has a very high likelihood of depicting Washington, DC.

GDELT has a heavy emphasis on local news outlets, meaning the images processed here frequently reflect an expectation of local knowledge. A local newspaper in an Indonesian city covering floods there is likely to run a photograph that is visually striking and conveys the magnitude of the flooding, but without regard to whether the image preserves sufficient visual landmarks to allow a reader to immediately determine the precise location where it was taken. For example, an image might show a group of people wading through chest-high water at a road intersection, but with the camera facing such that only trees are visible in the background. A local might recognize that there is only one intersection in the city with that precise configuration of trees, but anyone else would find it difficult to pinpoint precisely where the image was taken. Though, as Google Street View imagery continues to expand across the world, this kind of intimate local geographic knowledge will continue to feed into the Vision API, making it possible to perform ground-truth triage during natural disasters and precisely identify, in real time, how bad flooding is across a city.
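Returning to the earlier point that only a sliver of news images carry embedded GPS coordinates: this is easy to test for yourself. The snippet below is a quick sketch, assuming the Pillow library and a hypothetical local file, of checking a photograph for a GPS EXIF block.

```python
# Quick sketch: check a photo for embedded GPS EXIF data using Pillow.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_info(path):
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag
    if not gps_ifd:
        return None  # the overwhelmingly common case for news imagery
    # Map numeric tag IDs to names like "GPSLatitude"/"GPSLongitude";
    # values are degree/minute/second rationals needing further conversion.
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

print(gps_info("news-photo.jpg"))  # hypothetical file; usually prints None
```

Run against a typical crawl of news imagery, a check like this comes back empty for nearly every file, which is precisely why location must instead be inferred from the pixels themselves.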
As I noted last year, "While most well-traveled people would likely be able to recognize an image of the Eiffel Tower or the Washington Monument, the map [here] shows the ability of neural networks to absorb hundreds of millions of images depicting scenes from around the world, to learn from all this imagery what daily life looks like across the planet and to use that knowledge to estimate the location of new images. Even a large team of human experts would have difficulty recognizing images of obscure buildings or remote town squares, yet plenty may be seen [here]."

Creating maps like this demonstrates the incredible power of deep learning to reshape how we understand the world around us. Never before have we had technology that could be handed hundreds of millions of images and learn what the world looks like, then be handed a quarter billion new images it has never seen before and sift through them all at just a second per image, placing them on a map and visually geocoding the collection based exclusively on the scene each image depicts. Putting this all together, through the power of deep learning we have for the first time robust global-scale visual geocoding algorithms capable of sifting through hundreds of millions of photographs and placing them on a map. As this technology continues to improve, and as it is augmented by locality geocoding like PlaNet, imagine what will become possible when we can catalog all of the imagery of the world's libraries, triage the world's live image streams to flag fires, floods, explosions or other disasters as they break out, flag attacks on wildlife, or measure the level of environmental health across the planet from droughts to trash to smog levels. Welcome to our visual future!

I would like to thank Google for the use of Google Cloud resources including BigQuery and the Cloud Vision API and the Cloud Vision API team, as well as CartoDB for the use of their online mapping platform.
25bdf0b1830eb0bad07bc265f47a09d7
https://www.forbes.com/sites/kalevleetaru/2017/02/22/mapping-global-happiness-in-2016-through-a-quarter-billion-news-articles/
Mapping Global Happiness In 2016 Through A Quarter Billion News Articles
Mapping Global Happiness In 2016 Through A Quarter Billion News Articles New Year's Eve fireworks on Sydney Harbour on January 1, 2017, bringing an end to 2016. (Don Arnold/Getty Images)

Last January I explored what it looked like to literally map "global happiness in 2015" through the eyes of 200 million news articles from every corner of the globe. In short, I took 200 million news articles published by news outlets in every corner of the world from January 1 to December 31, 2015, as monitored by the open data GDELT Project, and collapsed their combined 1.48 billion location mentions into a single map that recorded the average "tone," from very positive to very negative, of all mentions of each location over the course of 2015 across the world's press. In other words, the analysis looked for every mention of Paris, France in news media worldwide monitored by GDELT over the course of 2015 and averaged the tone of all of those articles together to determine how positive or negative global news coverage of Paris was. This new way of looking at global happiness proved popular, and it raises the question of what the same map looks like for 2016, how much changed over the course of an incredibly contentious year, and what insights it might offer into how 2016 turned out the way it did.

Using Google's BigQuery analytics platform and Carto's mapping platform, it takes just two lines of SQL, one block of CSS and just 30 seconds (plus another half minute to fine-tune the look and feel of the map) to process the 2.2 billion mentions of location across 267 million global news articles in 65 languages monitored by GDELT last year, collapse them into a geographic histogram that records the average emotion of all mentions of each location, and render the result into a final production map. Such is the power of big data in 2017.

The final result can be seen below. Immediately clear is the global prevalence of negativity, suggesting once again that the old saying "negative news sells" is a guiding force in how the world's media sets its daily news agenda. Events in the Middle East, from Syria and Iraq to Yemen and Turkey, show up vividly. Europe again offers a fascinating glimpse of the impact of refugee migration on coverage of the continent, with many of the most negative areas on the map corresponding to migration routes and refugee resettlement areas. China and Japan are intriguingly bastions of positivity for a second year running, offering an interesting topic for future research.

Global happiness and sadness in 2016 as seen through the eyes of the world's news media monitored by the GDELT Project (Click for full resolution map)

India is also once again intensely and uniformly negative. A manual review of 250 randomly selected articles from GDELT's monitoring of the country's news outlets over the past year, plus an additional 250 articles monitored from the global press about the country, revealed that much of the coverage does indeed take on a negative tint, focusing on issues like corruption, economic woes, violence against women and many other topics – a phenomenon not missed by the Indian press itself. Indeed, this topic has received considerable discussion, to the point that when one searches on Google for "why is Indian media so…" Google autocompletes the query to "why is Indian media so negative." Yet it is a map like this that brings such negativity into stark relief and demonstrates its global scale.
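While I won't reproduce the exact queries here, the sketch below gives the flavor of the approach using GDELT's public BigQuery tables; the column layout of the GKG's delimited Locations and V2Tone fields is assumed from the GKG documentation, so treat the field offsets as illustrative rather than authoritative.

```python
# Sketch of a BigQuery query computing average tone per location mention,
# assuming the public gdelt-bq.gdeltv2.gkg table and its documented field
# layout (V1 Locations: type#name#country#adm1#lat#long#featureid;
# V2Tone's first comma-delimited value is the average tone).
from google.cloud import bigquery

sql = """
SELECT
  SPLIT(loc, '#')[SAFE_OFFSET(4)] AS lat,
  SPLIT(loc, '#')[SAFE_OFFSET(5)] AS lng,
  AVG(CAST(SPLIT(V2Tone, ',')[SAFE_OFFSET(0)] AS FLOAT64)) AS avg_tone,
  COUNT(*) AS mentions
FROM `gdelt-bq.gdeltv2.gkg` AS gkg, UNNEST(SPLIT(Locations, ';')) AS loc
WHERE gkg.DATE BETWEEN 20160101000000 AND 20161231235959
GROUP BY lat, lng
"""

for row in bigquery.Client().query(sql).result():
    print(row.lat, row.lng, row.avg_tone, row.mentions)
```

The aggregated rows can then be dropped straight into a mapping tool like Carto to produce the geographic histogram described above.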
For comparison, below is the map for 2015. The two maps are quite similar, but there appears to have been more positivity overall in 2015, with many of the most positive areas of 2015 turning sharply negative in 2016. In short, the happiness that still lingered in isolated pockets in 2015 was stamped out over the course of 2016.

Global happiness and sadness in 2015 as seen through the eyes of the world's news media monitored by the GDELT Project (Click for full resolution map)

We simply don't have enough data to know whether these trends reflect a world genuinely becoming more negative, a world in which there were increased numbers of negative events like conflict and strife, or a world that has remained constant while the news media continued its search for the worst to write about. Intriguingly, however, the map below shows a zoomed-in view of average tone for the United States and Central America over 2016. While the map is fairly noisy and further statistical analysis is required, it is noteworthy that a simple visual inspection suggests that areas of high positivity are frequently centered on urban areas, while it is rural America that is most closely associated with negativity.

Zoomed-in view of global happiness and sadness in 2016 in the United States and Central America as seen through the eyes of the world's news media monitored by the GDELT Project (Click for full resolution map)

The correlation is far from perfect, but it is provocative nonetheless in that it lends support to the notion of "Two Americas," in which wealthy urban America (whose populations many opinion polls show tend to believe the country is doing well and who voted heavily for Hillary Clinton) is associated with opportunity and possibility, while rural America (whose populations tend to believe the country is doing less well and who turned out in far greater numbers for Donald Trump) is associated with a future of limited economic potential and mobility, at least as seen through the eyes of the press. Considerable further research is required, but the clustering of high positivity around many urban areas, with elevated negativity in many rural areas, is at the least intriguing.

Putting this all together, the power of the cloud has made it possible not only to monitor global news media and machine translate, sentiment mine and geocode it in real time, but to analyze the results with a couple of lines of SQL in less than 30 seconds and produce a final production visualization in half a minute more. From a quarter billion articles and 2.2 billion location mentions to a final map of global happiness in under a minute – that's the power of the cloud in 2017.

I would like to thank Google for the use of Google Cloud resources including BigQuery and thank Carto for the use of their online mapping platform.
5f29fe3b3abe5f5dd23d52c09e0a05a4
https://www.forbes.com/sites/kalevleetaru/2017/02/25/what-does-artificial-intelligence-see-in-a-quarter-billion-global-news-photographs/
What Does Artificial Intelligence See In A Quarter Billion Global News Photographs?
What Does Artificial Intelligence See In A Quarter Billion Global News Photographs? Shutterstock

What would it look like to ask a deep learning AI system to watch every political television advertisement of the 2016 presidential campaign season for two months and describe what it sees? That was the question I asked last February when I collaborated with the Internet Archive to take all 267 political ads it had identified (which had aired a collective 72,807 times as monitored by the Archive) and run them frame by frame through Google's Cloud Vision API, producing what is likely the first large-scale application of production deep learning algorithms to describing the visual narratives of political advertising on television. Now, what if we took this same approach and, instead of examining television, looked at a quarter billion news photographs compiled from online news outlets in nearly every country of the world over the course of 2016? What would AI see in that vast archive of the visual narratives of the world's media?

Google's Cloud Vision API is a commercial cloud service that accepts as input any arbitrary photograph and uses deep learning algorithms to catalog a wealth of data about each image, including a list of objects and activities it depicts, recognizable logos, OCR text recognition in almost 80 languages, levels of violence, an estimate of visual sentiment and even the precise location on earth the image appears to depict. No human, or even team of humans, could reasonably catalog a quarter billion photographs in such detail, spanning global events from every corner of the globe. Thus, nearly all work on cataloging and understanding news coverage has focused on textual analysis, looking at the words on the page while discarding all of the rich visual ground truth captured in the photographs appearing alongside those words. Deep learning algorithms for the first time allow us to bring the rich visual tapestry of the world's news imagery into the analytic fold, cataloging and comprehending visual narratives as easily as we do textual ones.

To demonstrate this, more than 234 million photographs monitored by the open data GDELT Project from news outlets in nearly every corner of the globe were processed through Google's Cloud Vision API over the course of 2016, yielding more than 1.8TB of JSON describing their contents in exquisite detail. Earlier this week I explored the geography of these images, using the Cloud Vision API's ability to perform "visual geocoding," in which it visually examines the background of an image to estimate the precise geographic location it depicts. Yet perhaps the most powerful aspect of deep learning image cataloging is the collection of object and activity labels the algorithms return to describe what they see – in short, answering the question "what does this image show?" In total, the Vision API applied 9,853 unique labels to the images, with the most popular being "person" (27% of images), "profession" (14%), "vehicle" (10%), "sports" (7%), "speech" (6%), and "people" (5%). GDELT specifically deemphasizes sports and entertainment coverage and attempts to filter out as much of this content as possible, except for cases where such events transcend into the political or social spheres, such as a European football match that devolves into fan violence.
Thus, while sports still ranks highly here, its representation in this sample is far less than one would find when processing the entirety of all online news imagery. The Vision API appears to apply the "person" label primarily in cases where a single person or a small number of people are the primary subject of the photograph, such as a speaker standing at a podium. Images depicting large numbers of people, such as crowd shots, tend not to carry this label and instead have their own labels like "crowd" (1.8% of images) or "protest" (1.02% of images).

The prominence of imagery depicting a single person or a small group of people reinforces how important people are to news imagery. Even when photographing a breaking news event like a burning building, photographers will typically frame their shot to capture the people affected by the fire, the firefighters trying to put it out, the crowd of pedestrians gathering to watch, and so on. A photograph of a new technology invention will typically feature the inventors proudly holding or standing beside their innovation, rather than merely the device by itself. In short, news imagery centers on people and their relationship to the world around them. In total, the API counted 103,549,940 human faces, working out to around one face for every two images. The API is not designed to count faces in a large crowd, so it typically counts only cases where the person is facing towards the camera and occupies a reasonable portion of the frame. For privacy reasons the Cloud Vision API cannot perform any kind of facial recognition, and thus can only count how many human faces are in an image; it cannot tell you that one of those faces is President Trump.

How much does this emphasis on people differ throughout the world? The map below colors each country by the density of human faces in all imagery monitored by GDELT from news media in that country – that is, the total number of recognized human faces in all images from that country divided by the total count of all images from that country. The scale ranges from yellow (low density) to dark red (high density). Africa, the northern tip of South America and Central America all heavily emphasize people in their news imagery, while countries like Russia and China appear to feature people less often. These numbers are fairly similar to the December 28, 2015 – January 11, 2016 map based on a much smaller sample of just 1.4 million images.

Density of human faces in news imagery by country monitored by GDELT over 2016 and processed using the Google Cloud Vision API

Looking beyond people, while images of police and the military tend to capture the public interest and become iconic imagery, photographs featuring police constitute only 1.05% of images, just above military at 1.04% – both below "supermodel" at 1.1%. The 78th most popular label, applied to 1.4% of photographs, is "geological phenomenon," which encapsulates things like earthquakes and destruction (sometimes also man-made destruction like military airstrikes, which can cause the same kind of devastation). Damage and destruction are therefore prominent, but not dominant, themes of the world's news imagery. In all, those 9,853 distinct labels were applied a total of 1,390,108,064 times across the collection, working out to an average of 6 labels per image.
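For readers curious what the per-image raw material behind these statistics looks like, below is a minimal sketch of a combined label- and face-detection request against the Cloud Vision API; the file name is hypothetical. Each response contributes a short list of labels which, aggregated across the corpus, yields the roughly six-labels-per-image average just cited.

```python
# Minimal sketch: request labels and face annotations for one image,
# assuming the google-cloud-vision client library and configured credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("news-photo.jpg", "rb") as f:  # hypothetical local file
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
for label in labels:
    print(f"{label.description}: {label.score:.2f}")

# Face detection returns bounding boxes and emotion likelihoods,
# but no identities, so counting is as far as recognition goes.
faces = client.face_detection(image=image).face_annotations
print(f"faces detected: {len(faces)}")
```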
That six-label average reflects the incredible range of imagery monitored by GDELT from across the world, from relatively simple images like a political leader standing at a podium in front of a blank wall to the aftermath of a terror attack with myriad activities and objects in the scene. It is noteworthy that beyond "person" there are few other labels that dominate the collection. This emphasizes just how diverse the world's news imagery is and how many different topics are depicted on a given day. It also reinforces why only deep learning systems with large numbers of category labels, like Google's Cloud Vision API, are sufficient for working with news imagery – a simpler system designed to recognize just a few classes of imagery would struggle to provide much utility when applied to such diversity.

The Cloud Vision API can also run Google's SafeSearch algorithms over each image to determine the likelihood that it depicts graphic violence. In all, 463,864 images were flagged as potentially violent, working out to around 0.20% of all photographs. This reflects the fact that most news media outlets around the world generally avoid printing uncensored images depicting truly graphic violence. While even American outlets will readily reproduce images of the aftermath of a terror attack, typically the angle of the photograph is selected to avoid showing recognizable bodies, or the image is pixelated or portions are blacked out. The map below colors each country by the percentage of all images monitored from that country's press that the Cloud Vision API determined had a reasonable likelihood of depicting graphic violence. Similar to the December 28, 2015 – January 11, 2016 prototype map based on just 1.4 million images, this one shows the highest density of violent imagery occurring in the media of Africa and the Middle East, as well as increased levels of violence in Mexican news photography.

Density of violence in news imagery by country monitored by GDELT over 2016 and processed using the Google Cloud Vision API

Of course, once we have a quarter billion labeled images, we can do more than simply count how many images have a given tag. By looking at how labels co-occur we can start to gain an understanding of visual context and how different visual metaphors are combined and contextualized across the world. In all, there were 3,063,913 distinct combinations of labels and a combined 5,435,780,762 total co-occurrences. The visualization below shows the top 10,000 most common label co-occurrences, revealing the natural clustering of visual topics across the collection. Nodes represent individual labels and the edges connecting them reflect labels that frequently co-occur. Modularity was used to group the labels into clusters, reflecting groups of labels that co-occur more commonly with each other than with others. Colors were assigned randomly to each cluster – all nodes of a particular color belong to the same cluster, but the specific color (green, yellow, pink, etc) does not indicate anything. Click on the image below to view the full resolution version, or access an interactive version that allows you to zoom in to see all of the node labels and to click on any node to see the nodes it most commonly co-occurs with (note that the interactive version is flipped vertically).
Co-occurrences of labels assigned by the Cloud Vision API to a quarter billion global news images (click for full resolution version or access interactive version)

Faintly visible in the lower left is the yellow "animal" cluster that includes dogs, cats, horses and so on. This is most tightly connected to the green "artistic" cluster that reflects everything from fashion photography to tight shots of an individual or small group of people. This makes sense in that photographs of animals in the news often portray them in the context of how they relate to people, such as in the role of pets or on a farm. The blue "professions/activities" cluster captures a combination of professional roles like "business executive" and "preacher" with activities like "news conference" and "protest." Orange captures sports imagery (while GDELT attempts to exclude as much sports content as possible, sporting events are often front-page news with significant societal interest and impact and thus may be processed by GDELT). Sports imagery is relatively isolated, reflecting that it represents a unique genre of news coverage distinct from general news. Pink nodes reflect situating locations such as "commercial building" or "farmhouse." Purple nodes reflect vehicles of all kinds, ranging from "jet aircraft" to "yacht" to "BMW," while a secondary extended cluster reflects the aviation industry specifically.

While semantic topical clustering is commonplace in textual analysis, images are more commonly clustered by visual similarity, grouping together images that look similar rather than images that may look entirely dissimilar but depict similar objects and activities. In short, once we've used the Cloud Vision API to catalog an image, we can use those assigned labels as traditional topical tags to semantically cluster images much as one would a collection of textual documents enriched with topical tags.

At present the clusters presented here capture a hybrid blending of visual contextualization and the taxonomy employed by the Cloud Vision API. For example, every image tagged as displaying a BMW will also be tagged as containing a "vehicle" and specifically a "luxury vehicle." This reflects that, while presented as a flat collection of tags, the Cloud Vision API's label database is actually derived from a hierarchical knowledge taxonomy in which children imply their parents. This makes it possible to query images at different levels of semantic resolution – an image of a BMW can be accessed through a specific query for BMW vehicles, through a query for luxury vehicles (which would also return many other brands of car), or through a generic query for all vehicles (which will return many different kinds of conveyances). This prototype analysis demonstrates the tremendous potential of visual semantic clustering, and future iterations will subtract the structural hierarchy from the final network to resolve the organic clusters more precisely, as well as cluster at different levels of semantic resolution, from the most specific tags like "BMW" to root-level tags like "vehicle," to explore how this affects the kinds of visual contextualization we are able to uncover. The raw output of the Cloud Vision API actually includes the unique knowledge graph identifier of each label, which can be used to understand its structural relationships, and comparing these structural and organic network organizations should reveal even more powerful glimpses into visual narration.
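The underlying mechanics of such a co-occurrence graph are straightforward; below is a small illustrative sketch (the label lists are invented) that builds a weighted co-occurrence network from per-image label lists and clusters it by modularity using networkx, standing in for the full-scale tooling used to produce the published visualization.

```python
# Illustrative sketch: build a weighted label co-occurrence graph and
# cluster it by modularity. Label lists here are invented examples; in
# practice they would be parsed from the Vision API's JSON output.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

image_labels = [
    ["person", "speech", "news conference"],
    ["vehicle", "luxury vehicle", "BMW"],
    ["person", "crowd", "protest"],
    ["vehicle", "jet aircraft"],
]

G = nx.Graph()
for labels in image_labels:
    for a, b in combinations(sorted(set(labels)), 2):
        # accumulate co-occurrence counts as edge weights
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Modularity-based community detection groups labels that co-occur
# more with each other than with the rest of the graph.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(i, sorted(community))
```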
Putting this all together, deep learning image recognition systems like Google's Cloud Vision API offer us for the first time the ability to move beyond the limits of text and tractably explore the visual narratives of the world's news imagery. I would like to thank Google for the use of Google Cloud resources, including BigQuery and the Cloud Vision API, and the Cloud Vision API team, as well as Carto for use of its mapping platform.
db27a67a9a6e7849d367c42646b74eeb
https://www.forbes.com/sites/kalevleetaru/2017/02/27/creating-a-massive-network-visualization-of-the-global-news-landscape-who-links-to-whom/
Creating A Massive Network Visualization Of The Global News Landscape: Who Links To Whom?
Creating A Massive Network Visualization Of The Global News Landscape: Who Links To Whom? Shutterstock

What would it look like to take a portion of all online news coverage published worldwide last year, extract all of the hyperlinks from all of those articles and create a massive network visualization that captures the global news landscape? What could it tell us about how language barriers, physical geography, cultural and economic ties and other characteristics affect which news outlets link to which others? What can we understand about the global flow of information from examining how news outlets themselves see each other?

News outlets have long referenced one another, both to critique and to credit. When a major news outlet breaks a big story, that story will wash like a wave over the news ecosystem, with subsequent outlets either summarizing it for their distinct audiences or expanding it through new sources and angles. Downstream outlets will typically give credit to the originating outlet by mentioning who first broke the story. In the print era this meant a simple statement like "as reported in the New York Times this morning," while in the digital era news outlets increasingly include a hyperlink back to the original story, as well as potentially linking to competing news outlets that expanded the story in ways relevant to the current one. In short, news websites are increasingly like the rest of the web, linking to other websites and contributing to the global hyperlink graph. This is a powerful development in our attempts to map the global news landscape, because such hyperlink graphs are easily extracted and processed using network analytic techniques that allow us to understand the natural communities of news outlets and the influencers that drive our global news narratives.

The open data GDELT Project monitors global print, broadcast and online news media from across the world in 65 languages and makes available a master list of more than half a billion articles it has monitored over the last two years. For online news articles, GDELT visually renders each page and performs a human-like visual assessment of the final page layout to identify the core article content and separate it from the rest of the page – a technique known as visual document extraction. Beginning last April, GDELT began using this visual document extraction to record a list of all hyperlinks it finds in the body of each online article. This list includes only links found in the article text itself, ignoring links from the rest of the page such as headers, footers and navigation bars, making it possible to study article linking practices rather than blindly looking at every hyperlink on the page. Using this data it is possible to rapidly construct a link graph that connects each news site with all of the sites its articles have linked to, as sketched below. Over the past 10 months GDELT has monitored 121 million online news articles that it identified as containing hyperlinks in the article body text. Together these articles contained 727,275,917 links and resulted in 13,380,807 distinct pairings of sites (defined as one site having one or more links to another site).
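As a toy illustration of the graph-building step (GDELT's visual document extraction is far more involved), the sketch below pulls in-body links out of pre-isolated article HTML and tallies domain-to-domain pairs; the URLs and HTML are invented.

```python
# Toy sketch: collapse in-body article hyperlinks into a domain-pair
# link graph. Assumes the article body HTML has already been isolated.
from collections import Counter
from urllib.parse import urlparse
from bs4 import BeautifulSoup

def domain(url):
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

pair_counts = Counter()

def add_article(source_url, body_html):
    src = domain(source_url)
    for anchor in BeautifulSoup(body_html, "html.parser").find_all("a", href=True):
        dst = domain(anchor["href"])
        if dst and dst != src:  # skip relative links and self-links
            pair_counts[(src, dst)] += 1

add_article(
    "https://www.nytimes.com/example-story.html",  # invented example
    '<p>As <a href="https://www.washingtonpost.com/x">the Post reported</a>…</p>',
)
print(pair_counts.most_common(10))
```

The resulting weighted pairs are exactly the kind of edge list that tools like Gephi, or a library like networkx, can turn into the clustered, PageRank-sized visualization discussed below.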
Some news outlets, especially financial publications, regularly convert every mention of a publicly traded company into a link to its latest ticker data or make heavy editorial use of cross-referencing, while others include links only sparingly. As with all automated algorithms, there will also be a certain level of error in the visual document extraction and link identification engines, and no system, no matter how extensive, can monitor the absolute entirety of all news coverage worldwide each day. This means it is impossible to examine 100% of worldwide online news, but we can look at a strong cross-section of that content, drawn from almost every country in the world.

One intriguing observation right off the bat is the prevalence of links to Amazon.com across major Western news outlets. It appears that Amazon has essentially become the de facto homepage for the world's books and consumer products. A news article about a newly elected official might mention that he or she previously wrote a book and include a link not to the publisher's website for that book, but to its Amazon entry. Similarly, a news report that cites a book as background reading or as the reference for a given factoid will again typically link to its Amazon entry rather than to its Google Books or publisher page. Indeed, mentions of books most commonly linked to their Amazon entries, as did discussions of many new products for sale.

Of course, the purpose of using a network modality to explore news is that it allows us to move beyond simple outlet-by-outlet bulleted lists towards visualizing the macro-level structure of the global landscape. However, visualizing three quarters of a billion links across 121 million articles by simply rendering them all into a single image would result in a completely incomprehensible spaghetti-like tangle of lines. In reality we're less interested in which articles cited which articles than in which outlets cited which outlets. Thus, the graph was collapsed to the resolution of domain names, recording, for example, how many times "washingtonpost.com" was linked to across all articles published on "nytimes.com" monitored by GDELT over the last 10 months.

Even this filter was not sufficient, because of the sheer number of non-news websites that are linked to by news sites. These range from global platforms like Facebook and Twitter and ecommerce sites like Amazon to myriad smaller sites like a personal homepage, a university research group's blog or a small NGO's website relevant to a given article. While capturing all of these links can reveal fascinating insights into the sourcing behavior of the world's news media, the immense number of such sites means the resulting graph would be far too dense to reveal much. Since our focus here is on the linking structure within the news community, we are primarily interested in how often news outlets link to other news outlets, rather than their links to arbitrary pages on the open web. To offer a reasonable approximation of the online news landscape, those 727 million outlinks were filtered against a version of the Google News source catalog over the same 10-month period, restricting the analysis to links to news outlets monitored by Google News (the version used removes many sports and entertainment websites, which are outside the scope of GDELT's focus on global events).
While Google News is by no means an exhaustive compilation of all online media outlets worldwide, its source catalog is extensive enough to capture at least a reasonable cross-section of the online media landscape, making it a useful filter for the outlink graph. The visualization below shows the final network visualization of the top 10,000 strongest connections in this graph. Each node represents a news outlet, and the lines between them represent pairs of outlets in which one or both linked frequently to the other. Groups of outlets that linked to each other more frequently than to other outlets are clustered together and assigned the same color (though the color itself was assigned randomly, so the color green means nothing other than that all green outlets belong to the same cluster). The PageRank of each outlet was also computed and used to size the nodes, such that news outlets with a higher PageRank (more "globally important") are drawn as larger dots. Click on the image below to view the full resolution version, or access an interactive version that allows you to zoom in to see all of the node labels and to click on any news outlet to see the outlets it is most closely connected to through interlinking.

Network visualization of the top 10,000 strongest connections between global online news websites by number of article hyperlinks as monitored by the GDELT Project April 21, 2016 to February 25, 2017 (click for full resolution version or access interactive version)

Immediately clear from the graph above is that there is a central cluster in which many high-PageRank outlets are all heavily interlinked. This represents a sort of "global media core" of outlets at the center of global news links. In the green cluster are large American outlets like the Washington Post, New York Times and CNN. In short – major news outlets tend to cite each other regularly and far more frequently than they link to small local outlets on the other side of the world, while those small outlets are more likely to link to the major outlets than to other small outlets. While this observation might seem like common sense, what is intriguing is the role each of these outlets plays as a bridge to other clusters.

CNN appears to bridge in a collection of small American local news outlets (the small cluster of light blue outlets to the right of the green cluster). A number of these links are those small outlets linking heavily to CNN, choosing to cite its coverage rather than coverage from the Times or Post. Why smaller news outlets might choose CNN over other outlets remains to be seen, and a fascinating thread of research would be why we see the sourcing patterns we do – whether paywalls play a role (CNN is not paywalled, while both the Times and Post use metering and paywalls) or perhaps writing style. NPR's main site npr.org brings together many of the local NPR outlets from across the US. Many of these local NPR stations also link heavily to the Washington Post and New York Times in addition to their connection back to the main NPR site.

The left half of the main supercluster fades from green to orange and blue, suggesting that while heavily interconnected, these outlets are distinct enough to warrant their own separate clusters. The orange cluster contains outlets like Yahoo News, the Wall Street Journal, Wired, Bloomberg, Ars Technica, Variety and Hollywood Reporter.
It appears to be a mixture of specialty outlets that cover specific topical fields like technology, finance, Hollywood and fashion, as opposed to the general outlets of the green cluster. This reflects that these kinds of topical outlets tend to be cited heavily when it comes to stories in their focal area, but not otherwise. The blue outlets are largely UK-based, like The Guardian, The Telegraph, The Economist, The Week UK and BBC's UK edition, but also include the World Bank, the UN, Quartz and Vice. Looking at the outlets that most heavily link to and from the World Bank, one finds a mixture including The Economist, New York Times, Washington Post, BBC, Foreign Policy, Bloomberg and The Guardian, reflecting a distinct cross-section of the media environment. Within this cluster are smaller clusters like Gizmodo's US, UK, India and Australian outlets, all heavily crosslinking to each other and reflecting the increasing trend among some outlets to create internationalized editions for different markets.

The dual pink and blue clusters towards the bottom of the network graph are quite interesting in that they represent the Ukrainian press (light blue) and the Russian press (pink, plus the small tight cluster of darker blue nodes within). The two nations' presses primarily link to outlets in their own respective countries, but also frequently link to outlets in the other country. On the Russian side, bridging outlets include RIA, Interfax and TASS. Connecting the Russian and Ukrainian presses to the rest of the global press are three primary bridges – BBC.com (which connects more heavily to the Ukrainian side), Reuters (which brings in the smaller Russian subcluster of dark blue outlets) and AP (which is primarily connected to that small Russian subcluster). The small yellow-green cluster to the lower right of the main cluster reflects largely Arabic outlets, with media like 24.com.eg acting as local bridges. The Chinese press also maintains its own cluster on the left side of the main cluster. Here some interesting subclustering is visible, with Xinhua anchoring a cluster of general-topic outlets, while subclusters bring together media families like gmw.cn, xinmin.cn and hebnews.cn, each of which runs a family of heavily interlinked websites.

As a general rule, the press of each non-English-speaking country tends to form an isolated, heavily internally connected cluster, with a small number of bridging outlets connecting it back to the rest of the world. Why might this be? One possibility is that an American technology outlet reporting on coverage from an Egyptian Arabic newspaper might assume that most of its American readers cannot read Arabic, and thus that it would do better to simply summarize the Arabic story rather than provide a link to it – though in general it is proper practice to cite even articles in other languages for attribution. More likely, it reflects that relatively few American reporters are fluent in Arabic and thus are largely unaware of what's happening in the Arabic-speaking world, even as they voraciously consume English-language press from around the world to stay on top of global announcements in their field. In short – language barriers mean that news media in one language are more likely to reference coverage in other countries that speak the same language than they are to link to outlets publishing in fully disjoint languages.
This has immense implications for what we see of the world around us when we turn to the news, and in turn for how strongly we are able to relate to parts of the world that we rarely hear about. However, language barriers are not the only factor at play. While the UK press is largely interlinked with the American press, Australian news outlets form their own disjoint cluster, with The Sydney Morning Herald and The Age connecting them back to the main cluster. This suggests that geographical proximity, trade and other ties also play prominent roles in news linking patterns.

The graph here looks across all 121 million articles monitored by the GDELT Project over the last 10 months that were identified as containing one or more outlinks in the article body. It is likely that if those articles were filtered to specific topics, such as finance, technology, conflict or innovation, very different graphs might emerge that more clearly capture the community structure of particular news genres. A technology outlet might not ordinarily find itself citing a fashion outlet on a daily basis, but when it comes to the increasing blend of technology and fashion, are there particular pairings that stand out? Future iterations of this analysis will focus on examining individual subgenres and zooming into the press of a single nation or language to tease apart more and more micro-level views of the global news landscape.

It is important to recognize that the graph above reflects only the top 10,000 strongest connections from a network built only from articles and links captured by the GDELT Project connecting to or from domains in a portion of the Google News source catalog. It also reflects a view of connectivity proxied through the interlinking of news outlets, while linking standards vary across the world and individual outlets may have policies that encourage or discourage crosslinks. Its use of hyperlinks further means it can only examine online news outlets, and thus ignores the print and broadcast presses that are more influential in parts of the world. In this respect, this network is by no means an exhaustive catalog of the entirety of the global news landscape from the past 10 months, but rather offers a tantalizing glimpse into the kinds of new insights we can gather when we start applying techniques like link analysis to new fields (like media landscapes) through the power of massive datasets.

Putting this all together, through the power of modern big data we were able to take 121 million articles – spanning 65 languages and nearly every country in the world, and collectively containing more than three quarters of a billion outlinks – and compile that massive dataset into a single network visualization that captures a cross-section of how the world's news outlets link to each other, gaining in the process a completely new glimpse of the global news landscape.

I would like to thank Google for the use of Google Cloud resources including BigQuery used to conduct this analysis.
1eb0e362349ed6caaadee1e1f049446b
https://www.forbes.com/sites/kalevleetaru/2017/03/17/recapping-google-next-17-the-great-cloud-shift-from-renting-hardware-to-services-and-experts/
Recapping Google Next '17: The Great Cloud Shift From Renting Hardware To Services And Experts
Recapping Google Next '17: The Great Cloud Shift From Renting Hardware To Services And Experts Google CEO Sundar Pichai speaks during Cloud Next '17 last week. (David Paul Morris/Bloomberg)

I spent most of last week at Google's Next '17 cloud conference, where I had the honor of being among a small group invited to its inaugural Community Summit the day before, which brought together "cloud insiders and developers alongside Google Cloud's engineering and executive leadership, plotting the future of cloud, un-conference style." Coupled with the myriad conversations I had on the sidelines of the main conference throughout the week – with Google, its customers and conference attendees – it was fascinating to see just how far Google's cloud offerings have come over the past decade.

If I had to sum up the single greatest theme of the conference, it was the totality of the transformation of today's cloud from simply renting hardware to offering fully managed turnkey services that complete the abstraction away from the underlying hardware. This, in turn, has been enabled by Google's own transformation from an inwardly focused technology company to an outwardly focused cloud enterprise that is increasingly externalizing its software tools and experts to help companies maximize the power of the cloud to transform how they do business. In short, after nearly 20 years of building the massive global hardware and software infrastructure needed to run one of the world's largest internet companies, Google has begun opening up the ways it uses that infrastructure to the rest of the world, packaging its lessons learned as turnkey software services and expert assistance.

I first experienced the "cloud" 22 years ago when I launched my first web startup in 8th grade, the year after Mosaic debuted. To house the web crawlers, data mining tools and other systems for my startup I turned to a professional hosting company, where I rented a remotely managed UNIX server. The physical hardware was fully managed and located in some distant data center with 24/7 system administration and support. Much like today's cloud, I had no visibility into the underlying physical hardware or data center architecture hosting my server. Nor did I need to: somehow, when a hard drive crashed or a power supply or motherboard failed, after a relatively short delay my server magically came back to life, having migrated somewhere else in the data center. How that happened was not my concern – whether it involved virtualization, a hard drive being physically walked to a new machine or some other magic, all I needed to know was that the server would shortly come back to life and that the hardware issues were someone else's problem.

Nearly a quarter century later, not much has changed in the basic cloud world. We still rent remote servers running in some distant data center, fully abstracted from the underlying hardware, where someone else handles all of the hardware management. We pay our hourly or monthly rental fee and have a login to our remote server to make it do whatever we need. Today's cloud is vastly faster, infinitely more scalable and offers powerful features like live migration, but at the end of the day a time traveler from a quarter century ago would not feel that far from home.
At Next, this came through as a common theme in the keynotes and talks – merely renting virtual machines is no longer a differentiator in today's cloud (something reinforced in the conversations I had with countless attendees over the course of the conference). Rather, it is the services that companies offer on top of those machines that have become the driving force of the modern cloud, completing the abstraction away from discrete computers to the tasks those computers can perform. In Google's case this means leveraging its unique infrastructure investments, like its Colossus storage fabric (allowing a single task to access thousands of dedicated disks at a time), Borg compute management (managing hundreds of thousands of processors) and Jupiter network (offering petabit bisection bandwidth), to create services like BigQuery.

Indeed, BigQuery was one of the stars of Next, coming up again and again in the keynotes, the customer talks and conversations with attendees throughout the week. Several large customers referred to it as a sort of gateway to Google Cloud: companies begin by migrating key workloads to BigQuery and then, over time, migrate the rest of their data infrastructure as they see the power of the commercial cloud. In many ways BigQuery is the perfect demonstration of one of the world's most powerful computing infrastructures. It brings together each of Google's supreme hardware strengths to build an analytics platform that simply could not be built anywhere else, capable of table scanning an entire petabyte in less than 4 minutes. In doing so, it offers companies not only a preview of cloud done right, but also a blueprint for how to not just forklift the corporate data center into the cloud, but leverage the uniqueness of the advanced cloud to do things the company could never have dreamed of doing itself.

Google's former CEO (and now Executive Chairman of Alphabet) Eric Schmidt announced during Next that Google had spent more than $30 billion building its global cloud infrastructure, offering, together with the other keynotes, a glimpse of what is perhaps the largest private computing system in the world, including Google's laying of its own transoceanic cable network to better connect its global data center footprint. Few non-cloud companies in the world can boast such infrastructure spending or hire the caliber of talent that builds these systems for Google.

Google's unique talent pool and the innovations it enables were front and center at Next. A common theme among many of the large customer keynotes and talks was that you are no longer just renting machines or subscribing to technical services – one of the most valuable and unique benefits customers get from partnering with Google is access to Google's world-class engineers, many of whom literally "wrote the book" on their respective fields or personally invented key cloud technologies. A number of customers talked about how, when migrating or reinventing their core applications or during high-profile scaling operations, they used Google's enterprise offerings to have Google engineers work hand-in-hand with their own staff to ensure flawless and perfectly optimized execution.
To paraphrase how one customer put it, they don’t buy their own fleet of cement trucks and have their in-house staff build their stores – they hire professional architects and construction companies who specialize in that work, or they lease their stores, leaving them free to focus on their core business. So why should they build their own data centers, purchase massive clusters of machines and try to run a captive cloud subsidiary when that’s not their business focus? They bring in outside experts to build or lease their stores; why haven’t they done the same for their data centers? Another customer noted that they simply could never afford the salaries of “Google class” engineers and that even if they could, those engineers would quickly become bored and leave. By hiring Google’s expertise, not only do they get the experience and talents of the top engineers in the field, but those engineers can draw from across the entirety of Google, rapidly forming troubleshooting or solution teams that combine expertise from a myriad of disciplines. Google appears to be approaching this on two fronts: delivering prebuilt turnkey solutions like BigQuery and Cloud Spanner and various API offerings, ranging from the Cloud Vision, Speech, Natural Language, Video and Translation APIs to traditional productivity software like G Suite, and offering services that help customers maximize their use of these tools. On the deep learning front, in the past if a company wanted to integrate neural networks into its work, it had to compete for a very limited pool of top talent and build specialized hardware clusters filled to the brim with GPUs and hardware accelerators to train and run those models. Today Google offers all of that specialized hardware via its cloud, along with workflow services like Cloud ML to handle the heavy lifting – just drop your data or model in and Google handles the rest. For companies that don’t want to hire their own deep learning teams and just want something that works right out of the box, Google is offering an ever-growing stable of complete, ready-to-go, pretrained and optimized APIs for everything from image and video recognition to speech transcription and translation. Want to stream live speech in a foreign language, transcribe it and translate it into English? Just a few lines of code are all that’s needed to connect the relevant prebuilt APIs (sketched below), and in just a few minutes you have a production solution that can instantly scale from casual use on through millions of requests per second without a single modification. Towards this end, one of the great limitations of the cloud to date has been that historically all it did was alleviate the need to manage and maintain hardware. You no longer had to worry about swapping out a failed hard drive or sending staff to replace a blown power supply at 2AM, and you could rapidly add and remove hardware to deal with fluctuations in demand. But at the end of the day all you got was a bunch of SSH logins to a pile of remote machines that you still had to administer. Virtual machines only remove the hardware side of the equation – OS patches, software and library upgrades, application management, compatibility issues, security, scaling – these are all still your problem. 
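As a rough illustration of that "few lines of code" pipeline, here is a hedged sketch chaining the prebuilt Speech and Translation APIs using Google's current Python client libraries (the bucket path and language codes are placeholders, and the 2017-era client APIs differed in detail):

```python
from google.cloud import speech
from google.cloud import translate_v2 as translate

# Step 1: transcribe foreign-language audio with the prebuilt Speech API.
speech_client = speech.SpeechClient()
audio = speech.RecognitionAudio(uri="gs://my-bucket/interview_es.flac")  # placeholder path
config = speech.RecognitionConfig(language_code="es-ES")
response = speech_client.recognize(config=config, audio=audio)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)

# Step 2: hand the transcript to the Translation API for English output.
result = translate.Client().translate(transcript, target_language="en")
print(result["translatedText"])
```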
One of the themes of Next was how Google is tackling this administration burden, both by externalizing the managed internal services it has built over two decades to solve the biggest data problems in the world and by increasingly making its engineers available to bring their expertise to others. In the past Google hired the best engineers in the world and put them to work building its own services like Google Search, Gmail and Google News and the underlying systems that made those services possible like Dremel, Colossus, Borg and Jupiter. Today those same engineers are increasingly being made available to other companies to help them leverage Google Cloud at the same scale. In the earliest days of Google Cloud, it essentially opened the doors to its data centers, renting out those underlying systems so that a company could, for example, make use of Colossus by way of Google Cloud Storage to host files with unlimited scalability. The focus at this stage was on the data center, with users having to build their own applications to leverage that hardware. As Google Cloud has evolved over the past decade, it has moved beyond the data center to start opening the doors to its engineering staff and the incredible tools they’ve built to leverage that hardware. Tools like Dremel have gone from strictly internal use to externalized offerings that have become gateways to the cloud transition. Google’s immense investment in deep learning for its own use is increasingly being packaged into public APIs powering businesses around the world. In each of these cases the underlying software stack is joining the hardware stack in being abstracted away. With BigQuery you simply plow your petabytes of data in and run queries in standard SQL. There are no machines to manage, no capacity planning or scaling optimizations – it all “just works.” Google Cloud Spanner offers just a single input box to tell it how many nodes you want to use – everything else is abstracted away. When a system is massively upgraded to a completely new software version, that all happens in the background, handled by Google’s engineers – all you know is that one day your queries suddenly run a lot faster. Putting this all together, one of the biggest takeaways from Google Next was that after years of opening up its data centers, Google is increasingly focusing on opening up its engineering centers, bringing the internal tools and the immense talent pool it has built up over two decades at the forefront of the technology revolution to bear on the grand challenges of the enterprise. Companies aren’t just renting hardware anymore, they are essentially “Googlefying” themselves – shifting to using not just the same infrastructure that powers Google itself, but the tools Google has built to perfectly leverage that infrastructure, all while blending their own IT staffs with Google’s top experts. This is the future of cloud.
b98496b07b5c096246501502fa4c4a88
https://www.forbes.com/sites/kalevleetaru/2017/07/19/the-sad-drowning-of-steve-the-robot-and-the-future-of-robotic-rights/
The Sad Drowning Of Steve The Robot And The Future Of Robotic Rights
The Sad Drowning Of Steve The Robot And The Future Of Robotic Rights The chairman and chief executive officer of Knightscope Inc. speaks at an industry conference... alongside one of the company's robots. On Monday a robotic security guard here in Washington was found drowned in a fountain at his workplace after apparently failing to see a set of stairs leading down to it and falling to his watery demise. In this case, no foul play is suspected and it is believed the Knightscope robot simply failed to see the stairs, but the incident raises fascinating questions about robotic rights and the future of our mechanized overlords. I actually met Steve, as the robot was affectionately named, for the first time on my nightly walk through Georgetown just a few hours before his death. He was sitting happily on what appeared to be a charging mat just around the corner from his ultimate resting place, all lit up in his blue LED lights, the number 42 stamped on the bottom of his case and the words “Designed and built in Silicon Valley” proudly emblazoned on his chest. A happier moment - Steve the robot just hours before his death (Credit: Kalev Leetaru) How did he end up just a brief walk away, face down in a fountain? Media reports suggest his sensors simply missed the stairs leading down to the fountain and he drove right into the water. The steps in question are quite easy to miss if one isn’t paying attention, as the main walkway dead-ends right into the fountain without so much as a railing or raised indicator that peril awaits directly ahead. Indeed, I once watched a young gentleman focused intently on his phone plunge headfirst into the fountain as he strolled down the walkway, so it’s quite conceivable that poor Steve could have failed to look down and realize the watery grave that awaited him. Today Steve is little more than a sensor array on wheels, but imagine a future in which deep learning advances to the point that security robots like Steve become self-aware. What happens if such a self-aware robot misses a step, falls into a fountain and shorts out, either becoming injured or losing his unique neural network and essentially “dying”? Could he or his surviving (robotic) relatives sue his manufacturer for failing to provide him the sensors he needed to properly navigate such a complex setting? As an employee, could he sue his employer over workplace hazards or for failing to properly protect visitors from a pedestrian hazard? On the other hand, what if his untimely demise was due not to an accident, but to a human pushing him in? What are a robot’s rights when a human causes it injury or even death? It is easy to imagine security robots increasingly finding application in public spaces, dealing with the inevitable parade of drunken individuals who emerge after the bars close and cause problems ranging from public urination to vandalism. Such robots could act as deterrents to this behavior or simply film the acts as evidence. Yet, in doing so they would likely themselves become targets of aggression. Eventually, these robotic guards would likely need to be able to defend themselves. A burglar finding themselves confronted with a human security guard knows that if they injure or kill the guard they will be facing far greater legal repercussions than if they simply surrender. 
On the other hand, a burglar whose path to freedom is blocked by a large robot may simply choose to destroy said robot, resting comfortably in the knowledge that such an act would at worst be classified as damage to private property rather than assault or murder. On a more routine basis, it is likely that a group of drunken revelers finding a robot blocking their path or filming them in illegal acts might decide to silence their witness, even if it informed them that it was live streaming their acts. What rights does a self-aware silicon intelligence have against the murderous aggressions of a carbon life form? Conversely, what happens when such robots are inevitably provided some means of defending themselves? Heavier armor and sirens might not be enough, especially as such systems are deployed to protect high-value installations like banks, critical infrastructure facilities and military installations. Imagine a taser-equipped robot who subdues someone attempting to break into the bank it is protecting and, due to a health issue, the person succumbs to cardiac arrest. Who is responsible for the person’s death? The robot? The robot’s manufacturer? The programmer(s) who trained the neural network that made the fateful decision? The bank which deployed the robot? The ethical debates around the rights of intelligent robots are playing out in academic, governmental and commercial forums all across the world, but aside from driverless cars, there are few situations today where the average person actually interacts with autonomous robots in their daily lives, and so these debates have remained largely theoretical and future-looking. Yet, Steve’s unfortunate watery demise earlier this week is a reminder that our robotic future is coming a lot faster than we sometimes realize. Intelligent robots have already begun to walk our streets, from delivery bots to security guards, and as these systems become more and more intelligent, the next time a Steve suffers a misadventure, perhaps he’ll shout “see you in court” before his waterlogged circuits short out.
a9260becad828a5b79d17f3e57f17268
https://www.forbes.com/sites/kalevleetaru/2018/02/20/government-cybersecurity-through-obscurity-and-paying-attention-to-data-lifecycles/
Government Cybersecurity Through Obscurity And Paying Attention To Data Lifecycles
Government Cybersecurity Through Obscurity And Paying Attention To Data Lifecycles Data security. (Thomas Trutschel/Photothek via Getty Images) While perhaps best known for funding academic research, the US National Science Foundation (NSF) conducts many other activities, including an annual survey of doctoral graduates called the Survey of Earned Doctorates (SED). While an important data source for understanding the societal impact of doctoral education, the way in which the NSF conducts its survey offers a case study in cybersecurity through obscurity, the importance of paying attention to the entire lifecycle of data and several useful lessons for other organizations managing sensitive data in 2018. My own experience with the SED began last month, when I received four late-night phone calls in a single month from an unknown phone number claiming to be a survey company working for NSF and wanting to ask me a series of questions. In this era of constant phishing attempts and scam calls, I initially assumed the calls were phishing efforts, since any NSF survey would surely be conducted from a listed phone number (though such numbers can be easily spoofed) and the caller would surely have sufficient identifying information to authenticate themselves and prove that they actually were working on behalf of NSF. Instead, the caller said they had no information about me other than my name, phone number and the university I graduated from, and wished me to provide them a cornucopia of sensitive information of the exact kind coveted by identity thieves. Specifically, the caller wished me to provide detailed information on my gender, race, marital status, financial status and sources of funding during my student career, date of birth, citizenship status, the locations of all schools I attended including high school, disabilities, the last four digits of my social security number, my current full mailing address and the full name and address of “a person who is likely to know where you can be reached in case your address changes in the near future,” likely to be a parent. With even a fraction of this information, a caller would have a wealth of details with which to conduct all sorts of identity theft and impersonation activities. More troubling, recent doctoral graduates are freshly out of an academic system in which they are often accustomed to strange staff or administrators routinely requesting sensitive information; the graduation process itself can involve myriad staff requesting all sorts of details in rapid succession. Graduates might therefore not be as suspicious about a call purporting to be from the NSF asking them for similar sensitive details. When asked for some kind of authenticating information that would allow me to verify that the call really was from the NSF, the caller was unable to provide any kind of verification, other than to say that I should have received a paper packet in the mail, which I had not. I was also given a URL to access the survey via the web and the contact details of an NSF official, but only after expressing my grave concern about the security of the process. 
An NSF spokesperson emphasized the importance of the survey, providing comment from John Gawalt, Director of the National Center for Science and Engineering Statistics, that “the data provided by the Survey of Earned Doctorates are invaluable for policy makers in government and for education, professional and research organizations to improve the doctoral education system.” When asked why the NSF relies on phone surveys using unknown (and easily forgeable) phone numbers, the agency responded that it uses multiple contact methods including mail, email and phone, and that phone is a last resort. In my case, the agency said it was initially unable to locate a valid email or mailing address for me, but nearly a full month after its first phone contacts it did eventually locate a mailing address and mail me a paper packet. When asked why it was unable to locate a mailing address, the agency responded that it uses the National Change of Address (NCOA) database, but was not able to explain the discrepancy. Given that myriad organizations were able to follow my transition from graduate school to Washington, DC using the NCOA database, it is unclear why NSF was unable to locate either a mailing address or email contact, especially given that the agency added that it also uses commercial personal information vendors to augment its contact databases. When asked why NSF collects information via phone survey, rather than using phone outreach solely to provide a formally recognized government phone number that the recipient could call back to conduct the survey if they prefer phone-based surveying, NSF offered only that it believed its current approach was secure. In particular, given the ease and routineness with which Caller ID numbers are forged today, at the very least one might assume NSF would use its calls only to direct users to call a US Government phone number that would offer assurance they were speaking with a government employee. When asked whether NSF was concerned that identity thieves and scammers could trivially monitor LinkedIn, university websites or a wide array of other data sources to compile lists of recent graduates and call them pretending to be NSF to ask the same or similar questions, the agency responded that the SED and its data collection process had been “reviewed by one or more Institutional review boards as well as the Office of Management and Budget” and that “these groups have determined that outbound calling is a legitimate method of survey data collection.” It is truly remarkable that in 2018 the US Government still believes that calling individuals from unknown non-governmental phone numbers and asking them a battery of questions regarding sensitive information of precisely the kind needed for identity theft is entirely ordinary and acceptable. Yet, it is the storage of the SED data that offers perhaps the biggest lesson for organizations in considering their data lifecycles. When asked about how SED data is secured from the moment of collection through the moment of destruction, NSF emphasized that the third-party contractor it hires to manage the SED data collection and storage complies with “Federal Information Processing Standards (FIPS) 199-compliant at the moderate risk level and meets all National Institute of Standards and Technology (NIST) confidentiality, integrity, and availability security standards for the moderate risk, allowing us to provide the appropriate level of security for the information, including personally identifiable information (PII). 
Following NIST guidance, a Third Party Assessor Organization (3PAO) conducts an independent assessment of the FIPS environment at least every 3 years to verify compliance with NIST SP 800-53 rev. 4 control standards.” At first glance, the security measures in place to protect the SED data appear fairly robust: HTTPS is used for data entry and SSL is used to protect connections to and from the “backend SQL server” that stores the data. Data is temporarily stored on the web server during the entry process, but is deleted upon completion. Two-factor authentication, audit logs and Citrix VDI are used to prevent data from being downloaded to external devices without being encrypted, and the physical server hardware is stored in a secured on-premises data center protected by a card access system. DOD-compliant data wiping or physical destruction is used to sanitize retired hardware. Finally, “upon termination of the project, project staff reviews each project SQL server database to identify and delete any objects not required for permanent archival.” Both researchers and institutions can request access to selections of the SED data, in which case the data is encrypted for transit and must be accessed in a secured room on a computer system that is disconnected from the network while it is accessing the data. NSF clarified that any university requesting full SED records for its graduates is bound by the same restrictions as all other users and must access the records only on a non-network-connected computer in a secured room. From this rather lengthy list of safeguards, it certainly seems that the SED data are well secured from the moment of acquisition through final end user access. However, looking more closely, three lifecycle stages stand out: the storage of responses in the backend SQL server, the temporary caching of results on the web server and the lack of specificity in the encryption algorithm used for shipping data to remote users. When asked to confirm that NSF’s contractor encrypts all data at rest, that the database is encrypted, that DOD-compliant data wiping applies to both the web server and database server drives, and to clarify what encryption algorithm is used to secure data being shipped to remote users, NSF responded that “we cannot provide further information regarding our data security posture without potentially compromising the confidentiality of our survey data.” When asked whether it could at least confirm that it encrypts all data at rest, given that the major commercial cloud vendors all offer this information (Amazon, Google and Microsoft all use AES256) and thus obviously believe that confirming the use of AES256 at rest does not threaten their ability to secure customer data, NSF did not provide further comment. It is particularly remarkable that a federal agency would respond that it was unable to confirm that its contractor encrypts data at rest, unable to confirm that it uses database encryption and unable to comment on the level of encryption it uses to secure data in transit, on the grounds that disclosing such information would “potentially compromis[e]” the safety of that data. 
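For context on what confirming "industry standard encryption" would actually reveal, here is a minimal sketch of AES-256 encryption at rest using Python's cryptography package; the key handling is deliberately simplified, and a real deployment would keep keys in a key management service:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256-GCM: the same at-rest standard the major cloud vendors
# publicly confirm using without compromising their security.
key = AESGCM.generate_key(bit_length=256)  # in production, fetch from a KMS
aesgcm = AESGCM(key)

record = b"survey response: sensitive PII"
nonce = os.urandom(12)  # must be unique per encryption under a given key
ciphertext = aesgcm.encrypt(nonce, record, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == record
```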
If the major cloud vendors can disclose that they use AES256 encryption of data at rest, and Google touts in its public documentation that it uses AES256 to secure data in physical transit for its Transfer Appliance, while the major database vendors all publicly list their available encryption algorithms, it seems safe to conclude that if NSF’s contractor used industry standard encryption to secure the SED, it could confirm this without unduly threatening the safety of that data. As the OPM breach taught us, loss of sensitive data isn’t always due to a rogue employee walking out the front door with a USB hard drive stuffed with secrets, and physical security alone doesn’t protect the contents of a data center. As is always the case in cybersecurity, no matter how strong your castle walls, a single open window can nullify all that hard work. In the case of the SED survey, NSF’s refusal to confirm whether its contractor performs standard encryption or to provide the kind of non-enabling general assurances that commercial firms provide their customers is troubling at best. Confirming that AES256 is used to secure data at rest or encrypt its database is certainly not in the realm of classified intelligence that, if disclosed, would enable the immediate compromise and exfiltration of the SED database. On the contrary, it would reassure the nation’s recent doctoral graduates that their sensitive information is being secured with the same industry standard practices that the commercial data world adheres to. The refusal of NSF’s contractor to provide even such basic information suggests an adherence to the old “security through obscurity” motto that formerly guided (and unfortunately still guides) the cybersecurity of countless companies. It also raises questions about whether other standard best practices are adhered to regarding the security of the web and database servers. As Target learned the hard way, with poor firewall practice a single breached HVAC system can access a company’s most sensitive servers. If NSF’s contractor does not encrypt data at rest, does not encrypt and secure its databases and perhaps uses a firewall configuration that whitelists the entire local network for root database access, then all the rest of its security posture, drive shredding and air gapped workstation access are for naught. A single compromised web server could happily ship a master copy of the database out the door before anyone is the wiser. Putting this all together, it is truly remarkable that in 2018 the US Government still considers it entirely acceptable to call individuals from unknown phone numbers at night and ask them a battery of questions that are an identity thief’s dream. That NSF’s contractor would refuse to adhere to standard industry practice and confirm the basic encryption and security practices it uses to secure its data, even while touting its physical security and audit compliance, is itself concerning, especially the contractor’s insistence that confirming its use of encryption would place its data at undue threat of compromise. Such security through obscurity and focus on physical rather than digital threats is a relic of a simpler and more trusting era of cybersecurity, with little place in today’s reality. In the end, NSF's survey reminds us that in 2018 the US Government still has a long way to go to modernize to the new reality of cybersecurity.
ff8664dcfe96c74c2a2e8ed2f98a2f2b
https://www.forbes.com/sites/kalevleetaru/2018/05/04/why-are-academics-upset-with-facebooks-new-privacy-rules/
Why Are Academics Upset With Facebook's New Privacy Rules?
Why Are Academics Upset With Facebook's New Privacy Rules? The silhouette of a man is seen typing on a laptop computer. (Daniel Acker/Bloomberg) Last month a group of leading academics signed an open letter condemning Facebook’s new privacy rules and API changes that greatly restrict the ability of outsiders to mass harvest data from the platform without the knowledge or consent of users. The letter reflects the view across a broad swath of the academic community that any increase in user privacy protections that prevents them from being able to harvest our personal private information without our knowledge or against our will and turn us all into digital lab rats is simply unacceptable. What does this tell us about the future of online privacy and whether the very academic community that is so quick to condemn Facebook’s own research is willing to apply the same standards to its own work? Lost in the cacophony of overhyped headlines each day touting the success stories of the “big data” revolution are the very real privacy and ethical consequences of a world in which we are all merely data to be bought, sold and studied. The academic world was once a wild west in which research ethics and the considerations of those being studied were secondary and pretty much any research activity was acceptable as long as it yielded useful results. In turn, as researchers engaged in ever more harmful conduct, societal views towards research shifted and an array of ethical protections were enacted in most countries to protect the rights of those being studied. This initial wave of medical safety protections eventually extended to non-medical fields like the social sciences, governing how the data gathered about ordinary people could be used in the course of academic research. In turn, the “big data” era has ushered in yet another transformation of research ethics, but this time the tsunami of easily accessible data has washed away decades of progress in how the academic world saw its ethical responsibilities towards those it studies. As I’ve documented over the last several years, the data era has profoundly shifted how academics see privacy rights, and universities across the US and Canada have largely abandoned the traditional protections afforded human subjects. So long as the surveillance medium is data, whether mass harvested without consent or against users’ wills and against the terms of service of the platforms being accessed, purchased from brokers or exchanged with companies, anything goes, and in the overwhelming majority of cases no ethical review is needed because the data is considered to fall under the “publicly accessible data” exemption. The private hushed conversation between two friends about a highly intimate medical issue would, in the past, have largely been off limits to academic inquiry. Few ethics boards would have approved wiring the local coffee shop or bar with hidden microphones and cameras to record the public’s private conversations and then publishing studies using that data. On the other hand, mass scraping private social media conversations is typically entirely exempt from ethical review, and that hushed medical discussion, when it occurs in a private social media chat, is fair game indeed. Even the rigorous protections afforded the more traditional human subject interview, in which a researcher puts a person through an experiment, interview or other data collection process, have not been immune to the data revolution. 
While bringing a person into a university-owned laboratory for an interview still requires ethical oversight, mass recruiting millions of people to take the same survey via Amazon Mechanical Turk or other online services is typically exempted from ethical oversight. The medium really does matter, and the translation of any traditional research process into something “digital” or “data” oriented magically waives the ethical protections that took decades to build up. When news of the most recent incarnation of the Cambridge Analytica story first broke, I noted that academics all over the world routinely harvest Facebook and other social media data without consent, that the lives of its two billion users are archived in a likely uncountable number of archives scattered across the planet, from university computing facilities to researchers’ personal laptops, and that as researchers move between campuses they typically take copies of all of this data with them, seeding countless new archives every day. Yet, it doesn’t have to be this way. Even as US and Canadian ethics boards exempt the overwhelming majority of social media research from ethical oversight and waive their most absolute ethical regulations for high profile research, not all universities have been as cavalier about user privacy in the digital era. Last month it emerged that Cambridge University’s ethics board had actually rejected a proposed research project of Aleksandr Kogan’s that was remarkably similar to what eventually became the work at the heart of the Cambridge Analytica situation. In its rejection, the university’s ethics panel focused on the lack of informed consent and on the principle that simply because it was technically feasible to mass harvest data from Facebook did not mean that doing so was ethically proper. Of particular interest, the panel homed in on the question of mass harvesting friend data, given that even though the user was granting permission for their friends’ data to be harvested, those friends themselves had no knowledge their data would be used and no opportunity to grant informed consent or reject use of their data. Despite its otherwise mundane nature, Cambridge’s rejection letter is truly remarkable for the simple fact that in the US such a research project would never have actually been reviewed, since US ethics boards nearly unanimously consider such work to fall under public data exemptions. At the same time, it is telling that Cambridge’s letter is so unique, in that its view of research ethics has not transferred to the US. In contrast, the new Facebook academic research initiative, launched last month, created by leading US academics and overseen by one of the leading social science organizations, has focused all of its efforts to date on the myriad ways academics will be able to mass exploit the private intimate data of two billion users without any consideration of informed consent or the ability to opt out. Every single decision relating to privacy, informed consent and users having control over their data? Those are all “to be determined” at a later date. Indeed, the word “consent” does not make a single appearance in the project’s academic whitepaper, while “permission” appears just once, as an acknowledgement of Facebook’s rights, not the rights of the two billion people being turned into digital lab rats. 
While Cambridge University rejected Kogan’s proposal on the grounds that informed consent is sacrosanct and overrides any potential research outcomes, back here in the US the academic community is building a privileged research program for which informed consent and ethics are secondary issues to be examined at a later date, once all of the amazing things that can be done with this private data have been exhausted. Putting this all together, there is something inherently wrong with a world in which academics condemn Facebook for conducting consent-free research on its users, only to condemn the company again when it tries to institute greater privacy protections that would prevent academics from doing the same. All the while, those very same academics are partnering with Facebook to create a new research initiative that entirely removes consent from the equation and whose ethical considerations are unilaterally TBD, to be figured out after researchers decide what they want to do with two billion people’s private information. Cambridge University’s ethics panel gives us hope that there are still some institutions that believe in the ethical protections that took decades to build, only to fall like dominoes in the digital “big data” era. In the end, it is not just the social media giants and private companies rushing to commercialize our digital selves and stave off any discussion of privacy protections – the academic community is running right alongside, helping to clear the way.
6bab9d71f17a9bf32fdc44f24e040871
https://www.forbes.com/sites/kalevleetaru/2018/11/24/television-and-geography-as-big-data-mapping-a-decade-of-television-news/
Television And Geography As Big Data: Mapping A Decade Of Television News
Television And Geography As Big Data: Mapping A Decade Of Television News Geography of BBC News 2017-2018 (blue) versus CNN, Fox News and MSNBC combined coverage 2009-2018 (yellow/orange) Kalev Leetaru What happens when we begin to think of all information as data that can be explored to yield new insights into our world? What would it look like to take nearly a decade of CNN, Fox News and MSNBC television broadcasts and two years of BBC News broadcasts, run them through sophisticated natural language processing algorithms to identify every mention of a location on earth in their coverage and then create a series of maps that visualize the places we hear about when we turn to the news? What would those maps look like and what might they tell us about what we see when we turn on our televisions each day? Half a decade ago I began working with the Internet Archive’s incredible Television News Archive to explore how powerful computer algorithms could allow us to “see” the news in entirely new ways. From simple longitudinal keyword searches to mass emotion mining to geographic mapping to the most powerful deep learning algorithms watching political ads, television has an incredible amount to teach us as we explore it through the modalities and lenses of massive data mining. Geography offers a particularly powerful and yet underutilized lens through which to understand the news. In fact, it was an animated map of 400,000 hours of television that was the very first major visualization I created from the Television News Archive. Over the years this has been followed with city and country-level aggregations and animations, each exploring a different dimension of television geography. As we approach nearly a decade of television news coverage monitored by the Internet Archive, it is worth looking back to see just what the world has looked like through the lens of television news. Mapping television news starts with the raw textual closed captioning streams of each station. These captioning streams are essentially verbatim transcriptions of the spoken audio of each broadcast, allowing us to apply textual data mining tools to this visual medium. The textual captioning streams are fed through specialized algorithms known as “fulltext geocoders” that identify all mentions of geographic locations, ranging from a country or city name on through a remote hilltop, and use the immediate context of the broadcast to disambiguate them, separating Paris, Illinois from Paris, France. The final result is a massive archive of latitude/longitude coordinates of the centroids of all of the locations mentioned in the television news broadcasts monitored by the Internet Archive over the past decade. It is important to understand that automated textual geocoding of raw closed captioning streams will necessarily incur a certain degree of error from the captioning and geocoding processes inherent in all fully automated data mining. Raw television captioning data is especially difficult to work with, filled with typographical errors and rapid-fire contextual changes and lacking capitalization, punctuation and refined grammatical structure, meaning geocoding algorithms have fewer high-confidence contextualizing features to help guide their selection and disambiguation processes. In short, mapping television news through closed caption geocoding will contain a certain level of error, but it offers a powerful glimpse into the geography of attention of television news over the past decade. 
Using this data created by my open data GDELT Project and a single line of SQL with Google’s BigQuery platform, visualized through Carto’s mapping platform, we can create a map of the geography of each television station over the entire period the Internet Archive has monitored it. The Archive’s BBC News archive spans just under two years, 2017-present. Immediately clear is its near-saturation coverage of the United Kingdom and heavy coverage of former colonies and countries of interest such as India, South Africa, Australia and Zimbabwe. Its coverage of the US is strong, but not exhaustive. Geography of BBC News coverage 2017-2018 Kalev Leetaru In contrast, CNN unsurprisingly has offered near saturation coverage of the United States over the past nine and a half years (2009-2018), with far less comprehensive, though still relatively strong, coverage of the UK. Afghanistan and Iraq also feature more prominently, likely due to the enormous US military investment in those countries over the time span. Geography of CNN coverage 2009-2018 Kalev Leetaru There appears to be little substantive difference between CNN’s geographic focus and that of Fox News and MSNBC over the period 2009-2018. Geography of Fox News coverage 2009-2018 Kalev Leetaru Geography of MSNBC coverage 2009-2018 Kalev Leetaru Overlaying all three US stations on one map (CNN in purple, Fox News in blue and MSNBC in green, with black indicating locations covered by all three), slight differences can be observed, but no obvious systematic differences are immediately apparent. Geography of CNN vs Fox News vs MSNBC coverage 2009-2018 (world) Kalev Leetaru Zooming into the United States, it is clear that all three stations cover the eastern half of the country far more extensively than the West and that Seattle, San Francisco and Los Angeles as well as Salt Lake City and Denver are the most heavily covered of the Western cities. There does not appear to be any difference in rural/urban coverage or North/South. Geography of CNN vs Fox News vs MSNBC coverage 2009-2018 (USA) Kalev Leetaru What about the difference between American and British news? The following map overlays the three American stations in yellow (orange/red indicates areas of heavy overlap among the American stations) and BBC News in blue. As suggested earlier, BBC is the clear winner in the UK and former British colonies, with CNN, Fox News and MSNBC the clear winners in the US and Middle East. The rest of the world is a fairly even mix. Surprisingly, American stations appear to have slightly better coverage in Europe, though the comparison is not quite fair given that nearly ten years of American news is being compared to just under two years of BBC News coverage. Geography of BBC News 2017-2018 versus CNN, Fox News and MSNBC combined coverage 2009-2018 Kalev Leetaru Of course, often the most interesting stories are told not through static snapshots, but rather through the patterns in how those points have moved over time. To explore this in more detail, two animations were created that show the day-by-day geographic focus of the stations. The first shows the BBC’s daily coverage 2017-2018 and the second shows the combined daily focus of CNN, Fox News and MSNBC. Each frame shows all of the locations mentioned on those stations on that day. Watch closely and you can see a wealth of world events result in bursts and waves moving across the maps. Just as quickly, watch how fast the media lose interest in each story and focus their attention elsewhere. 
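For readers curious what that "single line of SQL" expands to in practice, here is a hedged sketch of the aggregation using the BigQuery Python client. The table name and the positions of latitude and longitude within each '#'-delimited V2Locations block are assumptions drawn from GDELT's published GKG documentation:

```python
from google.cloud import bigquery

# Hedged sketch: tally location mentions per coordinate for one station.
sql = """
SELECT
  SPLIT(loc, '#')[SAFE_OFFSET(5)] AS lat,  -- latitude field (assumed position)
  SPLIT(loc, '#')[SAFE_OFFSET(6)] AS lon,  -- longitude field (assumed position)
  COUNT(*) AS mentions
FROM `gdelt-bq.gdeltv2.gkg_partitioned`,
  UNNEST(SPLIT(V2Locations, ';')) AS loc
WHERE SourceCommonName = 'CNN' AND loc != ''
GROUP BY lat, lon
ORDER BY mentions DESC
"""

for row in bigquery.Client().query(sql):
    print(row.lat, row.lon, row.mentions)
```

Each resulting latitude/longitude/count triple can then be uploaded to a mapping platform like Carto for visualization.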
Putting this all together, when we begin to think of television as data and geography as a lens through which to explore it, we are able to “see” the news in an entirely new light. Most importantly, we are able to quantify just how little of the world we actually hear about each day and the importance of editorial decisions and agenda setting in our understanding of the world around us. In the end, perhaps the most important story of all is that by bringing together powerful data mining algorithms with the analytic capabilities of BigQuery and the visualization power of platforms like Carto, and applying those to archives as incredible and unique as the Internet Archive’s Television News Archive, we now have the tools to explore our world in ways we could never before dream of and to see the world around us in a whole new light. I would like to thank Google for the use of Google Cloud resources, including BigQuery, and Carto for the use of their online mapping platform. I would also like to extend a very special thanks to the Internet Archive’s Television News Archive and the team behind it for creating such an incredible resource, without which these explorations would simply be impossible.
5459bb63ba2f4a71d7ecd3d8fd073088
https://www.forbes.com/sites/kalevleetaru/2018/12/10/the-ai-revolution-that-was-and-wasnt-in-2018/
The AI Revolution That Was And Wasn't In 2018
The AI Revolution That Was And Wasn't In 2018 Looking back on 2018, this has been a year in which AI has continued its meteoric rise over the digital landscape, infusing its magical powers into almost every corner of every industry and revolutionizing how society uses data. Or so one might be forgiven for thinking this year, as companies big and small have rushed to demonstrate how they are harnessing deep learning to upend their business processes. The reality is that while AI has truly transformed areas like audiovisual recognition, given us powerful new tools for understanding language and offered a first glimpse at algorithms that possess glimmers of intuition, the overwhelming majority of commercial AI applications to date have offered little improvement over the traditional approaches they replaced, if those systems had been built properly to begin with. We speak today of deep learning in reverent tones and ascribe to it an almost mythical aura of superhuman capability. Companies rush to sprinkle the magical AI dust on every project. Even normally austere and risk averse industries have been plunging headfirst into the AI world with reckless abandon, throwing deep learning models at every problem. The same funding agencies that once required the phrase “social media” in every successful proposal now require “deep learning” somewhere in the abstract to even consider funding a project, whether or not AI has even the slightest applicability to the problem at hand. In the public consciousness and increasingly in the C-suite, AI is described as human-like algorithms that are basically childlike versions of ourselves, improving by the day, whose limitations in accuracy can be instantly fixed by just handing them a bit more training data. The reality, of course, is that today’s deep learning algorithms are more art than science. Accuracy gains come not from blindly throwing more training data at an algorithm, but from careful hand selection of training data, intricate tuning, experimentation and often dumb luck. Successful algorithms are enigmas that even their own creators don’t fully understand and can’t automatically replicate in other domains. Even the most accurate models are frequently so brittle that the slightest change or malicious intervention can send them wildly off course. Far from primitive silicon humans with childlike minds, today’s AI systems are nothing more than basic statistical encapsulations, more powerful and capable than past approaches, but little different from what we’ve been doing since the dawn of computing. In some areas like audiovisual analysis, deep learning approaches have been genuinely transformative, allowing machines to achieve accuracy levels at understanding and generating images, speech and video that weren’t even imaginable several years ago. A neural vision system can recognize a specific make and model of vehicle, even when it is driving through the desert and covered with armor, weapons, flags and soldiers. It can understand the difference between a gun sitting on a table, a gun pointed in the air and a gun pointed at a person. It can estimate the geographic location where a photo was taken, even if it looks dramatically different from the training images it saw. It can also create new imagery or speech that is eerily humanlike. This is where the true applied AI revolution has occurred, in opening new modalities to machine understanding. 
At the same time, using AI for more mundane textual and numeric analyses has not always shown quite the same level of transformative improvement. Much like the statistical machine translation (SMT) it replaces, neural machine translation (NMT) can achieve human-like levels of fluency in good cases but fails just as miserably and comedically in others. While NMT systems can indeed achieve higher BLEU scores in academic competitions, when applied to routine day-to-day real-world content the gains are not necessarily as noticeable, as they become lost in the gibberish errors that confound fluent understanding. The problem is that NMT is still, at the end of the day, merely blindly applying the statistical patterns it has learned from seeing huge volumes of training data, just like its SMT predecessor. An NMT system can only apply learned patterns to transform one set of tokens into another, like a child mimicking an artist by putting colors and shapes in the same general positions without understanding what they are trying to draw. Unlike a human translator, the neural models of today do not actually understand the deeper meaning of the concepts and ideas they are reading; they merely recognize patterns of tokens, much like SMT approaches. NMT systems are considerably superior in their ability to recognize far more complex patterns, perform much more sophisticated reorderings and operate across a much greater window of text, but even NMT systems still primarily operate at the level of a sentence or small block of text in isolation. We are still a long way from having production NMT systems that can read an entire passage of text, distill it down to the abstract ideas and perspectives it discusses and then render it into another language entirely from that abstract idea-based representation, bringing contextual and world knowledge to bear in disambiguation, contextualization and framing. Moreover, the lack of training data for most languages means that even the most cutting edge NMT systems still fail just as comedically as SMT systems for many languages, or suffer from the same issue of fluent passages being interrupted at regular intervals by gibberish that renders their key arguments undecipherable. Neural text processing as a whole suffers from a fixation on process over outcome. Companies believe deep learning will outperform any other approach and so focus on finding a deep learning solution at all costs, rather than recognizing that not every problem is well suited to current neural approaches. I’ve seen far too many companies build deep learning solutions for the most basic of tasks, like recognizing mentions of a specific person’s or company’s full name. When asked whether the massive and expensive deep learning model outperformed a simple keyword search for the name and a few variants, the answer is all too often that they never actually tried; they just assumed neural was the way to go. Eventual benchmarking, if it is performed at all, often shows that the neural approach was actually less accurate, in that it was far too sensitive to typos and grammatical errors in the text and lacked sufficient training data to pick up most edge cases. Neural entity recognition, classification, geocoding and sentiment analysis are all areas where even the most cutting-edge algorithms frequently struggle to outperform well written classical approaches. The key is that few commercial deployments are well written. 
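To make the benchmarking point concrete, here is a minimal sketch (my example, not a system from the article) of the kind of non-neural baseline worth trying first: a regex over a name and a few hand-listed variants:

```python
import re

# Hypothetical baseline: match a specific person's name and a few
# hand-chosen variants before reaching for a deep learning model.
# Longer variants are listed first so they win the alternation.
VARIANTS = ["Sundar Pichai", "S. Pichai", "Pichai"]
pattern = re.compile("|".join(re.escape(v) for v in VARIANTS), re.IGNORECASE)

def mentions(text):
    """Return every matched name variant in the text."""
    return pattern.findall(text)

print(mentions("google ceo sundar pichai spoke thursday; pichai said..."))
# ['sundar pichai', 'pichai']
```

If the expensive neural model cannot beat this dozen-line baseline on a held-out sample, the deep learning effort is hard to justify.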
Most commercial deployments are hastily thrown together, haphazard assortments of hand crafted rules or data-starved Bayesian models. Indeed, it is the rare classical algorithm that has been built from the domain down rather than the code up. Sentiment analysis in particular has fixated on naïve, simple-to-code algorithms built by programmers, rather than stepping back and working with psychologists and linguists to understand how humans communicate emotion and building tools to capture those real world complexities and nuances. In such cases neural approaches can help standardize model creation and coerce it into more robust data practices, but the benefits often come primarily from the change of creation workflow rather than the power of the neural approach itself. Indeed, for many companies I’ve spoken with, the greatest benefits of deep learning approaches come not from the capabilities of neural networks, but rather from the standardized data-centric creation process enforced by current model construction workflows. In my own experiences over the deep learning revolution of the past half-decade, applying nearly every imaginable machine understanding task to textual and audiovisual news content in more than 100 languages, I’ve come to alpha test an incredible diversity of approaches, from neural to classical machine learning to hand crafted expert rules to every combination therein. I’ve tested everything from production commercial applications to bleeding edge research experiments, with the results always being the same: neural approaches offer massive accuracy and capability leaps for audiovisual content and select understanding and creation tasks, but their application to routine textual understanding can frequently be replicated or exceeded with well-designed non-neural solutions using far less training data and with far greater robustness. The issue is that while truly capable deep learning experts are an extremely rare commodity, the total pool of data scientists able to step back and build robust systems that reflect the data and contexts they are used in is even smaller. In short, neural approaches bring considerable benefit to many companies not because of the use of deep learning, but rather because their classical data science workflows were so poor, focused on algorithms over outcomes. Perhaps the biggest challenge today is the enormous gulf between the pioneering work of AI research groups like Alphabet’s DeepMind, which are building tools that can learn to play video games and that show the first glimmers of intuition, and the rote deep learning systems being built in the commercial sector. Enabling machines to reason about the world, communicate with and understand the world around them, learn new tasks rapidly, abstract from examples to higher order representations and even create on their own are all incredible capabilities that deep learning approaches are uniquely suited for. At the same time, these are a far cry from the rote categorization filters and entity extractors that form much of the commercial sector’s deep learning utilization. Putting this all together, the mythology of AI today is more marketing hype than reality. Companies rush to deploy AI anywhere and everywhere to lay claim to having an “AI-powered business,” but these neural deployments aren’t always any more accurate than the classical systems they replace. In many cases they are actually worse. 
Neural approaches have truly transformed audiovisual understanding, but when it comes to textual understanding they do not always represent a major leap forward. This may change as the pioneering applications of deep learning eventually graduate from the research labs of places like DeepMind into the production commercial world, but for now companies would do well to stop and ask whether deep learning is really the answer to any given problem, conduct extensive benchmarking to test that conclusion and, most importantly, rethink how they create software systems in the first place, asking what happens when the creativity and rigor put into neural approaches are brought to bear on more traditional data science workflows.
f0bea10015cd6507484a94baf06ccc6a
https://www.forbes.com/sites/kalevleetaru/2019/03/09/the-shift-from-open-source-to-commercial-data-analytics-is-placing-cost-over-accuracy/
The Shift From Open Source To Commercial Data Analytics Is Placing Cost Over Accuracy
The Shift From Open Source To Commercial Data Analytics Is Placing Cost Over Accuracy The era of “big data” has been marked by a cataclysmic break from statistics. With the loss of the denominator across much of modern data science and a growing departure from the idea that the quality of our data influences the accuracy and representativeness of our results, we seem to have entered a "post-statistics" era of big data. One of the key driving forces behind this transition has been the shift from open source tools and open data to opaque datasets processed through black box algorithms that make reproducibility and accuracy assessments impossible. In an era in which we no longer seem to care about the accuracy of our results, what does the future of data science hold in an increasingly proprietary world? The modern era of data science was once built upon an open and transparent world of open source software and open data, wielded by statistically literate technical experts who deeply understood both the tools and data they were using. Every algorithm was cited back to its complete description in the academic literature and supported by myriad public case studies. Every implementation was in the form of open source software that could be inspected and improved. A focus on mathematical accuracy over marketing hyperbole meant tools were typically upfront about their biases and limitations, backed by a large published archive of case studies across disciplines and datasets. Implementers themselves often hailed from the sciences with strong algorithmic and numerical methods backgrounds, ensuring a rigorous focus on accuracy and completeness. In contrast, as data science has become ever more commercialized, that transparency has given way to the opacity of the enterprise world. Tools are closed source, algorithms are proprietary, technical documentation is scarce and implementers are frequently enterprise developers lacking the traditional numerical methods backgrounds and relentless focus on accuracy and completeness that define the world of scientific code. The scientific codes that once defined the data science space are typically designed for expert use, filled with knobs and dials to adjust every available parameter of the underlying algorithms with absolute precision. All that complexity requires algorithmic, statistical and technical understanding that fewer and fewer data scientists possess. Moreover, our lack of understanding of the commercial datasets that increasingly define the big data era means even analysts with deep statistical backgrounds lack the necessary insights into their data to be able to appropriately tune their algorithms. The scientific world’s emphasis on correctness means the computational cost of analyses is typically secondary to ensuring the accuracy and completeness of their results. In contrast, cost rules supreme in the commercial world, creating strong incentives to adopt damaging optimizations like aggressive sampling and reduced numerical precision or dangerous implementation shortcuts that can invalidate results. In the deep learning space many of these tradeoffs are exposed to developers, but those without a background in the underlying mathematics may not fully understand the ramifications of their decisions when opting to prioritize speed over accuracy to reduce the size and execution time of their models. 
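As a small illustration of why reduced numerical precision belongs on that list of damaging optimizations, consider this sketch (my example, not one from the article) of accumulating a simple sum at float16:

```python
import numpy as np

# Summing 100,000 increments of 0.01 should give 1000. At float16 the
# running total stalls once each increment falls below half the gap
# between adjacent representable values (near 32 here), silently
# undercounting by roughly 30x.
acc16 = np.float16(0.0)
for _ in range(100_000):
    acc16 += np.float16(0.01)

print(0.01 * 100_000)  # 1000.0 (true value)
print(float(acc16))    # roughly 32: the accumulator stopped moving
```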
Data analytics is increasingly becoming a turnkey “point and click” affair, where all of the complexity and nuance of the underlying algorithms are hidden from the user. A sentiment analysis might extrapolate its results from a 1% random sample of the data without ever letting on to the analyst that the results are based on anything other than a population assessment. Results have become about “good enough” rather than “correct” and “complete.” To those entering the world of data science from a traditional HPC background, building scientific codes running on traditional supercomputers where even the underlying hardware circuitry is known and accounted for, the opaque “trust us” and “good enough” leap of faith of the commercial data science world can be jarring. Reproducibility is also far more difficult in the enterprise world. Many commercial analytics companies are constantly improving their algorithms, meaning the same analysis run a few days later may yield wildly different results. Even when using the exact same data and parameters, results may not be repeatable, making it impossible to know whether the original analysis was incorrect or whether the analytics vendor simply changed its algorithm without notice. Analytics companies with built-in datasets, like social media analysis platforms, often fail to reprocess their historical data when making algorithmic changes. A number of major commercial social analytics platforms routinely make breaking changes to their core algorithms without updating their historical data, resulting in longitudinal analyses whose findings are nearly entirely algorithmic artifacts rather than genuine patterns in the underlying data. Closed platforms analyzing closed datasets also make it far too easy for bad science to flourish when their results can never be verified or externally scrutinized for mistakes or malfeasance. It doesn’t have to be this way. Some analytics platforms, especially those of the major cloud vendors, differentiate themselves through their focus on the accuracy and completeness of traditional scientific workflows. Many of these platforms are essentially built as software interfaces to the vendor’s hardware rental business, where the focus is on providing maximally accurate tools to process the customer’s own data, generating a steady stream of hardware rental revenue. These platforms offer a hybrid between the transparency of the scientific world and the opacity of the commercial world. While their underlying source code may be proprietary, the platforms themselves typically implement well-known algorithms with detailed technical documentation of their specific implementations and restore full control over all of the algorithm’s configuration options. Some even open source the algorithmic portions of their platforms for maximal transparency or make the entire toolkit open source, with the benefit that the toolkit is optimized for their specific cloud offerings. Putting this all together, as data science matures as a field, we need to far more carefully balance the convenient opacity of turnkey analytics platforms against the more complex transparency of the scientific world. Some analytics platforms have managed to blend these two competing demands quite well, but much of the “big data” world, especially social media analytics, remains in the shadows.
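The silent sampling problem described above is just as easy to demonstrate. The sketch below uses only Python's standard library, and its "sentiment" values are synthetic stand-ins rather than output from any real platform; it simply repeats the same 1% sample analysis five times against one fixed population to show how much the "result" moves between runs.

import random

random.seed(7)
# Synthetic population: 1,000,000 documents, ~8% of them strongly negative.
population = [(-5.0 if random.random() < 0.08 else 0.4)
              for _ in range(1_000_000)]
true_mean = sum(population) / len(population)
print(f"population sentiment: {true_mean:+.4f}")

for trial in range(1, 6):
    sample = random.sample(population, k=len(population) // 100)  # 1% sample
    estimate = sum(sample) / len(sample)
    print(f"1% sample, run {trial}: {estimate:+.4f}")
# The run-to-run spread is of the same order as the quantity being measured,
# yet a turnkey dashboard would report each run as "the" sentiment score.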
23ab90fe2dccea321a22c60b5303bcfd
https://www.forbes.com/sites/kalevleetaru/2019/04/23/a-fading-twitter-changes-its-user-metrics-once-again/?sh=59b841187a31
A Fading Twitter Changes Its User Metrics Once Again
A Fading Twitter Changes Its User Metrics Once Again Twitter is fading. Fast. In the last six years it has shed 100 million daily tweeting users, plummeting from an average of 350 million posting users to just 250 million, while its total tweet volume has dropped from 500 million tweets to just 300 million tweets. Its remaining user accounts are aging steadily, retweets are up to almost 50% of all tweets and verified tweets and their retweets alone account for 10% of total volume. In short, Twitter has been mired in a six-year decline and stagnation it can’t seem to shake. The company’s solution? To pivot to yet another definition of what counts as a “user” in a desperate attempt to paint its decline in a more positive light. [Chart: Estimated total daily volume of tweets projected from Twitter’s 1% stream] In the last seven years, Twitter has experienced a slow steady collapse in the popularity of its service. It is down more than 100 million daily tweeting users and down almost 200 million daily tweets while its user base is stagnating and it is transitioning from a content platform to a behavioral platform. [Chart: Estimated distinct Twitter users sending one or more tweets per day projected from Twitter’s 1% stream] The company’s answer to this steady decline has been to rebrand its growth, redefining what it counts as a “user” to something that paints a slightly rosier picture of the company’s future prospects. Twitter’s latest user metric is to count “monetizable daily active users (mDAU),” which counts anyone who consumes a tweet in a way that lets the company show them ads. [Chart: Percent of all unique users in Twitter’s 1% stream each day that sent at least one retweet] The problem with this metric is that users come to Twitter for its content. Whether they themselves are posting content or whether they come to bask in the wisdom or vitriol of others, Twitter’s sole attraction is its tweets. If there were no tweets, there would be no reason to visit. This means that Twitter’s steadily declining post volume is an existential threat to its very future. [Chart: Average "age" in days of tweeting user accounts in Twitter's 1% stream] By convincing the public and the financial markets to view its success through the number of users it can show ads to, the company can sidestep the issue of how soon its declining producer base will begin impacting its consumer interest. Much like a television set without shows to watch, Twitter the platform is merely an empty box without all those users pouring in new tweets each day. Put another way, posts can be thought of as Twitter’s fuel and monetizable users as the number of people it can cram into its bus driving down the information superhighway. Twitter is refocusing the public on the number of passengers it can cram into that bus even as its fuel tank is running dry. Once the fuel hits empty it doesn’t matter how many users are on that bus, they are going to scatter to the winds if there is nothing to draw them in. In truth, the most important number to Twitter's stockholders is not how many users the platform can show ads to, but how much content is flowing across its digital borders from how many genuine users. If a platform is growing rapidly with hordes of new users signing on every day and those users are pouring out new content with a high degree of loyalty and regularity, there are many options for monetizing that environment.
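For readers curious about the arithmetic behind projections like those charted above, the sketch below shows the basic scaling involved in estimating a day's total tweet volume from a 1% sample stream. It is my own simplified reconstruction in Python rather than code from the original analysis, and the sample counts are invented for illustration.

import math

SAMPLE_RATE = 0.01  # a "1% stream" delivers roughly 1 in every 100 public tweets

def project_daily_total(sample_count, rate=SAMPLE_RATE):
    """Scale a day's sampled tweet count up to a population estimate,
    with a rough 95% interval treating the sample as binomial."""
    estimate = sample_count / rate
    se = math.sqrt(sample_count * (1 - rate)) / rate  # standard error of the estimate
    return estimate, estimate - 1.96 * se, estimate + 1.96 * se

# Two hypothetical days, six years apart.
for label, sampled in (("earlier day", 5_000_000), ("recent day", 3_000_000)):
    est, lo, hi = project_daily_total(sampled)
    print(f"{label}: ~{est:,.0f} tweets (95% interval {lo:,.0f} to {hi:,.0f})")

Note that distinct-user counts cannot be scaled this simply, since heavy posters are far more likely to appear in the sample than one-tweet users; estimating them properly requires modeling each user's capture probability rather than multiplying by 100.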
If a platform is shrinking rapidly, it doesn’t matter how many users it is currently able to show advertisements to, the platform is fading. This raises the question of why the company doesn’t provide greater transparency to its investors and the general public. Once upon a time Twitter actually used to provide a wealth of almost monthly statistics about its growth, but as its user base leveled off and then began to fade, the company stopped publishing most metrics. Today the company steadfastly declines to provide the kinds of metrics that would permit actual evaluation of its health, from the number of daily tweets to the number of daily tweeting users to its retweet density and average account age. In fact, last month’s look back at Twitter’s evolution over the last seven years offers one of the first detailed glimpses in years at these critical statistics. Asked last month to comment on these statistics, a spokesperson responded only with “not commenting.” Similarly, asked again earlier today why Twitter does not provide these metrics to investors to provide greater clarity on the health of its company and why it feels monetizable users are a better indicator, the company did not respond. Putting this all together, Silicon Valley has mastered the art of redefining how we think about corporate growth. The great success of its vaunted reality distortion field comes in no small part from its ability to convince the public and its investors to continually pivot to a never-ending parade of new metrics as slowing growth makes its old measures a liability. In the end, perhaps the biggest question of all is why Wall Street doesn’t demand that the Valley finally provide us with transparency about just how big or small our social platforms really are and allow us to finally objectively measure not only their financial health but their impact on society itself.
43e49883d711959140b8e3d240841144
https://www.forbes.com/sites/kalevleetaru/2019/05/24/whatsapps-massive-security-flaw-serves-to-remind-us-the-limits-of-consumer-encryption-apps/
WhatsApp's Massive Security Flaw Serves To Remind Us The Limits Of Consumer Encryption Apps
WhatsApp's Massive Security Flaw Serves To Remind Us The Limits Of Consumer Encryption Apps Facebook acknowledged last week a massive security vulnerability in its WhatsApp messaging software that allowed a commercial spyware company to install surveillance software on victims’ phones merely by calling them. The flaw, a standard buffer overflow in WhatsApp’s call answering stack, was particularly devastating, allowing arbitrary remote code execution. While the vulnerability itself was quickly fixed, its existence in Facebook’s marquee encrypted communications application reminds us that despite all of their marketing hype, consumer grade encrypted messaging apps are not necessarily as safe as the public might expect them to be. The vulnerability afflicting WhatsApp was as mundane and common as they get in the cyber world: a simple buffer overflow exploit. Its location in the software’s call answering stack, however, made it particularly devastating, meaning victims could be infected simply by having a malicious actor know their phone number, even if they didn’t actually pick up the call. Worse, after infecting the user’s device, the malware could erase all traces of the user even having received an unusual call. While confirming the attack, Facebook offered few details other than to recommend that users upgrade to the patched version of the client application immediately. Given that samples of at least some of the spyware variants known to have been installed on victims’ phones were captured, this raises the question of whether Facebook would make available a malware removal tool that would scan users’ devices for the known malware. While this would remove only the previously identified spyware tools, it would at least offer users some peace of mind. Asked whether the company would be distributing such a malware scanning tool as an option for concerned users, the company confirmed that it would not. Asked how users themselves might be able to determine whether they had been affected, especially those in high-risk communities, the company confirmed that there was no straightforward way to determine whether they had been compromised and that Facebook would not be providing any assistance to WhatsApp users to determine this. Facebook’s refusal to help its users is far from unusual. Most consumer software includes legal clauses expressly disavowing any responsibility for damage to the user's device and few companies are willing to step forward to help users recover from a cyber incident without charging substantial fees. Yet the biggest story is not that WhatsApp had a buffer overflow vulnerability or that a malicious actor actively exploited that vulnerability to install spyware on users’ devices. The real story is that this incident reminds us that consumer grade encrypted communications software is far from the hardened military-grade protection that the general public often associates with it, given the companies’ own marketing campaigns. Facebook has relentlessly touted WhatsApp as a security-first communications platform that offers “secure messaging” for “your most personal moments.” The company’s marketing literature heavily emphasizes WhatsApp’s security features, touting its “secure” design and even recommending it for use by “airlines, e-commerce sites and banks,” creating the impression of a highly secured enterprise application.
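For readers unfamiliar with the bug class, the following is a deliberately simplified simulation, in Python, of what a buffer overflow does. Every name in it is invented and it mimics none of WhatsApp's actual code or protocol; it captures only the general failure mode of attacker-supplied data copied into a fixed-size buffer without a length check, spilling into the adjacent memory that decides what the program does next.

BUFFER_SIZE = 16

# Pretend "process memory": a 16-byte packet buffer immediately followed by
# the name of the routine the program will run next, laid out contiguously
# the way a C stack frame might be.
memory = bytearray(BUFFER_SIZE) + b"RING_HANDLER" + b"\x00" * 4

def copy_packet_unchecked(packet: bytes) -> None:
    # The bug: no "len(packet) <= BUFFER_SIZE" check before copying.
    for i, byte in enumerate(packet):
        memory[i] = byte  # indices >= BUFFER_SIZE clobber the handler field

def current_handler() -> str:
    return bytes(memory[BUFFER_SIZE:]).rstrip(b"\x00").decode()

print("handler before the call:", current_handler())

# An attacker-crafted "call setup" packet: filler up to the buffer boundary,
# then bytes that overwrite what the program treats as the next routine.
malicious_packet = b"A" * BUFFER_SIZE + b"SPYWARE_LOADER" + b"\x00" * 2
copy_packet_unchecked(malicious_packet)
print("handler after the call: ", current_handler())

In a memory-safe language like Python this can only be simulated, which is part of why such bugs persist in the C and C++ codebases that underpin real-time calling and media stacks.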
Nowhere on the main pages of WhatsApp’s website is there a big bold disclaimer that it is a consumer application that should not be used for sensitive communications. In fact, quite the opposite is true, unless one wades through the lengthy legalese of its terms of service document. To the general public, WhatsApp might seem the perfect way to secure all of their communications. After all, if their encrypted web browser is safe enough to manage their bank account, an end-to-end encrypted messaging app touted as “secure messaging … [for] your most personal moments” and built by one of Silicon Valley’s biggest internet companies must surely be as secure as they get. The reality is that WhatsApp is still a consumer grade application. While any software may have vulnerabilities, the kinds of security reviews and rigorous testing that help ensure the security of military communications systems are simply not investments that companies are willing to make for free consumer software like WhatsApp. This is not to say that WhatsApp is any less secure than any other encrypted messaging app, but rather that companies like Facebook need to be more upfront with their users to help them understand that these are still only consumer grade applications. Of course, the past year’s parade of security breaches has shone a harsh light on Facebook’s relatively lax approach to the security of its products as a whole and a lack of rigor in its auditing and security review practices. Asked how Facebook would respond to concerns that perhaps it has overhyped the “secure” nature of WhatsApp to the public and that it is not sufficiently investing in the security of its products, the company emphasized that it had corrected the vulnerability in question but did not comment directly on whether it agreed that there could be a mismatch between the company’s portrayal of WhatsApp’s security and the reality of it being a consumer product. Putting this all together, last week’s WhatsApp story reminds us once again that despite the plethora of encrypted and “secure” messaging applications available today, the majority are still consumer grade products that lack the rigorous design and testing of the military-grade software that consumers could be forgiven for believing them to be, given the marketing that surrounds them. In the end, perhaps the WhatsApp breach might serve as a lesson to companies to be more forthcoming about the limitations of their software and to do more to help consumers recover from breaches. Unfortunately, that is unlikely to happen anytime soon.
92b071f3c2cdb65649802cc298f765b6
https://www.forbes.com/sites/kalevleetaru/2019/06/18/much-of-our-government-digital-surveillance-is-outsourced-to-private-companies/
Much Of Our Government Digital Surveillance Is Outsourced To Private Companies
Much Of Our Government Digital Surveillance Is Outsourced To Private Companies The recent Customs and Border Protection subcontractor breach reminds us just how much of the modern digital surveillance state is actually outsourced to private companies. From acquiring and managing the vast datasets recording our daily lives to providing analytic software and services on top of that data, much of what we think of as government surveillance is actually performed by private companies on its behalf, often with far fewer privacy safeguards or cyber protections. While Hollywood typically portrays government intelligence agencies as all-powerful entities exclusively relying on government lifers, the reality is that modern digital intelligence collection relies heavily on private companies. The datasets of greatest interest to intelligence agencies are no longer government-owned or produced. They are created and owned by private companies and must be purchased, hacked or legally compelled. Look closely at the Edward Snowden disclosures and a great deal of the NSA’s global monitoring intake originates within the data centers and telecommunications networks of the world’s private corporations. Yet rather than exclusively use federal employees to acquire this content, the government relies heavily on outsourcing its collection efforts to federal contractors. These range from quasi-employees who sit side-by-side with government employees at desks in government buildings to staffers working in modern, state-of-the-art offices far removed from their government colleagues in buildings that can often resemble prisons. Most importantly, these contractors are frequently bound by different rules than their federal colleagues when it comes to digital acquisition. For example, federal agencies that are more heavily restricted in their lawful ability to collect social media, data broker files and other open sources have historically turned to private companies to conduct that surveillance on their behalf, legally laundering the results. As social media companies have increasingly passed new policies prohibiting the use of their data feeds by the intelligence community, the use of such contractors to launder surveillance needs, including geographic profiling, has only increased. All of this collected data must be stored somewhere. While federally-owned data centers are frequently the canonical repository, those data centers may be managed by contractors working for private companies. More often, unclassified collections like social media streams are archived directly by contractors on their own systems. Once collected, the data must be analyzed. The bespoke analytic software environments used by intelligence agencies are almost exclusively built by contractors, who increasingly lease that software via subscription rather than transfer perpetual rights to the government. In order to fine tune that software, contractors are frequently given direct access to surveillance data collected by the government and other contractors to improve their algorithms or train new deep learning models.
A private US company might thus be granted access to an EU citizen’s private data collected by another contractor in order to build a deep learning model to better flag a certain kind of suspicious activity and then have the right to resell that model to other law enforcement agencies and allied governments, including that EU citizen’s own government, which might otherwise face restrictions on using its citizens’ data to build surveillance deep learning models. Increasingly, at least when it comes to digital data streams like social media, the entire process, from initial data acquisition to final analytic outputs, is overseen by private companies, with large portions of the analytic pipeline occurring within their own data centers with little oversight by the federal government. In fact, historically the government largely outsourced the collection and analysis pipelines of social media streams like Twitter entirely to commercial social analytics companies. The danger as government increasingly outsources digital surveillance to private companies is that those companies may not make the same cybersecurity investments the government enforces at its own data centers. Even if data is properly secured, these private companies are frequently granted the right to resell their software and services to others, having improved them by incorporating lessons and even data from government surveillance, such as through their machine learning models. Putting this all together, the increasing outsourcing of the nation’s digital surveillance to private companies creates newfound cyber and privacy risks. Most importantly, it increasingly commercializes the surveillance state, blending the monetization and manipulation of the digital sphere with the kinetically-enforced surveillance of the physical sphere. The future is looking ever more like 1984.
0daafd043ba88fc249915dd9efa40037
https://www.forbes.com/sites/kalevleetaru/2019/07/08/todays-deep-learning-is-like-magic-in-all-the-wrong-ways/?sh=2bbb76e28fc9
Today's Deep Learning Is Like Magic - In All The Wrong Ways
Today's Deep Learning Is Like Magic - In All The Wrong Ways At the center of all magic is illusion. From the simplest card trick to the grandest stage production, magic is at its heart about creating the circumstances under which audiences can suspend their disbelief and ascribe their imaginations and dreams to the events they are witnessing. The physical reality of magic is far more mundane, combining an assembly line of developers creating new tricks and a logistical and artistic chaining of those discrete tricks into complex shows that become more than the sum of their parts. Under perfect circumstances, these sequences of tricks combine to create “magic” but can quickly come crashing down at the slightest issue or if viewed from anything other than the perfect angle. The world of deep learning shares much in common with it. Much like the world of magic, deep learning today is largely defined by practitioners churning out a steady stream of limited one-trick algorithms that are then chained together into complex sequences by developers to solve problems. Under perfect circumstances and fed ideal input data that closely matches its original training data, the resulting solutions are nothing short of magic, allowing their users to suspend disbelief and imagine for a moment that an intelligent silicon being is behind their results. Yet the slightest change of even a single pixel can throw it all into chaos, resulting in absolute gibberish or even life-threatening outcomes. The limitations of today’s correlative deep learning mean that algorithms are typically extremely narrowly focused, designed to perform a single small task. Practical solutions are formed by grouping these algorithms together into pipelines, not that dissimilar to how code has been written since the beginning of time. The difference is that as we group this code together we stop thinking of it as code and algorithms and instead anthropomorphize it into an intelligent, though primitive, being that “understands” and “learns.” In turn, this way of thinking leads us to be less cautious in thinking about the limitations of our code, subconsciously assuming that somehow it will “learn” its way around those limitations on its own without us needing to curate its training data or tweak its algorithms ourselves. The public doesn’t see any of these limitations. Instead, they see a carefully choreographed magic show held under perfect circumstances with ideal input data nearly identical to its training examples. Maintaining this illusion is the fact that only deep learning’s successes are publicized. Its myriad failures are never seen by the public. Companies like Facebook refuse to report the false positive rates of their deep learning algorithms and steadfastly decline to permit any form of external review that might shed even the slightest light into how they actually perform in real life. The carefully chosen statistics published in their marketing materials and public relations releases offer no insights into how those algorithms perform in practice rather than under ideal circumstances. The end result is that, like a good magic show, the world of deep learning the public sees is little more than a stage-managed illusion. Collections of simplistic tricks are presented under ideal circumstances in order to convince the public of an AI revolution that exists only in their imaginations.
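The single-pixel fragility is worth making concrete. The sketch below uses NumPy, and its "model" is a toy linear scorer with made-up weights standing in for the final decision layer of a real network; it shows how an input sitting near a learned decision boundary flips its label when one value is nudged by an amount no human viewer would notice.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=784)  # one invented weight per "pixel"
weights /= np.linalg.norm(weights)

def classify(image):
    score = float(weights @ image)
    return f"{'cat' if score > 0 else 'gibberish'} (score {score:+.5f})"

image = rng.uniform(0.0, 1.0, size=784)
# Shift the image to sit just barely on the "cat" side of the boundary,
# mimicking the borderline inputs deployed models constantly encounter.
image -= (weights @ image - 5e-4) * weights
print("original image:   ", classify(image))

perturbed = image.copy()
idx = int(np.argmax(np.abs(weights)))  # the pixel the model leans on most
perturbed[idx] -= np.sign(weights[idx]) * 0.02  # a ~2% nudge to one pixel
print("one pixel nudged: ", classify(perturbed))

Real adversarial attacks on deep networks exploit the same geometry, using the model's own gradients to find the most damaging nudge; the difference is only that the arithmetic is buried under millions of parameters instead of 784 toy weights.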
Driverless car companies sell vehicles on the idea that in a few short years those cars will be autonomously outperforming the best human drivers, even while those companies recognize internally that today’s technology cannot offer such a world. We freely buy into this illusion of an AI revolution because we want to believe that the intelligent machines that decades of science fiction have foretold have finally arrived. Rather than buy into this magic show, the public should demand to peek behind the curtain to see just how bad today’s deep learning algorithms really are. As algorithms increasingly moderate our speech, decide what we see and influence everything from our court systems to who can rent what apartment, the public should have a right to see the reality of the cheap parlor tricks behind the magic show. Perhaps if the public truly understood just how bad today’s deep learning systems are, companies might face greater pressure to improve their systems and circumscribe their failure points. Putting this all together, much like a good magic show, the wonderment and success of today’s deep learning systems are based on carefully stage-managed illusion and on convincing the public to ascribe their wildest imaginations of silicon intelligences to mundane piles of code that do little more than record simple lists of statistical correlations. In the end, as deep learning systems increasingly shape our lives, the public should have the right to draw back the curtain and see the messy, non-magical reality that increasingly controls our world.
964b55079498ba23fddbbf0183e00f99
https://www.forbes.com/sites/kalevleetaru/2019/07/09/we-need-better-e-commerce-search-engines/
We Need Better E-Commerce Search Engines
We Need Better E-Commerce Search Engines The Web has fundamentally transformed how we shop. In place of the shopping mall and myriad small retail stores of yesteryear, we patronize massive digital megastores that stock almost every imaginable object, most of them available to be shipped to our doorstep in just a day or two, no matter how massive or heavy. Yet as e-commerce stores have branched out to incorporate nearly every kind of object sold today, their generic keyword-based search engines have failed to keep pace, creating a maddening search experience that limits the ability of customers to find what they need, increasing friction and decreasing sales. Today’s e-commerce sites offer pretty much anything imaginable for sale. From houses and cars to books and movies and everything in between, almost anything can be bought with a mouse click. The problem lies in actually finding the thing one wants to buy. Despite modern e-commerce sites leveraging unimaginable technology behind the scenes to learn their users’ preferences and surface recommendations to them, their search engines have evolved little from the primitive keyword queries of the Web’s earliest days. Even while Web search engines like Google can anticipate exactly what users mean with each query, most e-commerce engines perform literal keyword TF-IDF matches that are wildly wrong more often than they are right. Search for a brand name camera and get back SEO-optimized pages for t-shirts that managed to work that brand name in somehow. Try to find a bestselling book and find yourself at a page for an entirely unrelated work that once again managed to manipulate its keywords just enough to insert itself above the legitimate entry. All of the keyword SEO manipulation of the early Web has come roaring back in the world of e-commerce. Yet it isn’t just keyword search that limits online marketplaces. An even greater problem is that as e-commerce sites have broadened to include so many different products, their filter options have failed to keep up. Searching for an espresso-colored wood bookcase three feet wide and six feet tall with 5 shelves? The first page of results on one major e-commerce site is almost comical: a two-shelf two-foot-tall end table, four three-foot-tall storage racks, a five-foot-tall white plastic bookcase and a pair of tan slippers. Search for the more generic keyword “bookcase” and filter options include brand name and shipping speed, but filtering by the most important options like the number of shelves and color is available only for a handful of keywords. Height and the weight each shelf can support are missing entirely. Similarly, a search for a new sump pump yields shipping and brand filters, but not the options actually relevant to sump pumps, like power, pumping rate and whether it supports a backup battery. Despite an increasing number of e-commerce sites doubling down on fashion, clothes searches are nothing short of a disaster. The problem is that by trying to be everything to everyone, e-commerce sites have focused on providing a handful of generic search options rather than providing the specialized search and distinct filter options necessary to each genre. A bookcase search likely revolves around the dimensions, color, number of shelves and weight support. An industrial appliance search likely revolves around the specific capabilities that kind of device is designed to support. Clothes searches typically revolve around fabric, dimensions, cut and appearance.
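The gap between keyword relevance and attribute filtering is easy to see in miniature. The sketch below uses a tiny invented catalog and a bare-bones term-overlap score as a stand-in for TF-IDF ranking; no real retailer's engine works exactly this way, and the point is only the contrast between matching words and matching the attributes a shopper actually cares about.

import re

CATALOG = [
    {"title": "solid wood bookcase, espresso finish, five shelves",
     "type": "bookcase", "color": "espresso", "shelves": 5, "height_ft": 6},
    {"title": "5 shelves espresso bookcase style storage rack end table",
     "type": "storage rack", "color": "black", "shelves": 4, "height_ft": 3},
    {"title": "tan slippers",
     "type": "slippers", "color": "tan", "shelves": 0, "height_ft": 0},
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def keyword_score(query, title):
    # Bare-bones term overlap: a keyword-stuffed title can outrank the
    # genuine product, just as in the camera and book examples above.
    return len(tokens(query) & tokens(title))

query = "espresso bookcase 5 shelves"

by_keywords = sorted(CATALOG, reverse=True,
                     key=lambda p: keyword_score(query, p["title"]))
print("keyword ranking: ", [p["title"] for p in by_keywords])

by_attributes = [p for p in CATALOG
                 if p["type"] == "bookcase" and p["color"] == "espresso"
                 and p["shelves"] == 5 and p["height_ft"] >= 6]
print("attribute filter:", [p["title"] for p in by_attributes])

The keyword-stuffed storage rack tops the relevance ranking while the structured filter returns exactly the one product that satisfies the shopper's constraints; the catch is that those attribute fields must exist, per genre, for every product sold.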
In short, there is no single generic set of search or filter criteria that will work for all products in existence. The end result is a cumbersome and exhausting search experience that leads customers to depart for generalized Web search engines to find the products they need, then link from there back to the e-commerce site in order to place their purchase. Putting this all together, the expansion and product generalization of e-commerce sites has led to search interfaces unable to cope with the sheer variety of products sold and the unique characteristics used to filter them. Generic TF-IDF keyword search engines and catch-all filters like sorting by brand name just cannot compare to the kind of personalized and intelligent search interfaces users have become accustomed to on the modern Web. In the end, the modern e-commerce experience is a reminder of just how far Web search engines have come from their early days and how far the rest of the Web world still has to go to catch up.
cd70d319f4cf8fb530302f42dd3b7e0e
https://www.forbes.com/sites/kandywong/2014/05/19/to-many-chinese-cars-like-teslas-model-s-appear-to-be-unmanageable/
"To Many Chinese", Cars Like Tesla's Model S Appear To Be Unmanageable
"To Many Chinese", Cars Like Tesla's Model S Appear To Be Unmanageable Elon Musk's announcement of a deal with Hanergy to build supercharging stations in China may have temporarily drawn attention away from Tesla's lack of progress in setting up an adequate network of charging stations. But it appears that he may be overlooking another important issue:  What is the attitude of ordinary Chinese car owners toward pure electric cars? In order to drive up the popularity of the Model S, Musk and his sales team have thus far been targeting the rich to be Tesla's first batch of customers. And selling their cars to these early adopters with deep pockets who want to be the first to show off their hi-tech toys looks like a smart move. But eventually the mass market should be the key objective, and practical concerns appear to weigh heavily on the minds of many Chinese drivers. It seems that a great deal of work still needs to be done before many Chinese families would commit to buying a new technology car. Putting aside Musk’s aspiration to put new technology into practice, perhaps some sort of marketing research to test the trend could be a useful indicator. Last Friday, the China Association of Automobile Manufacturers issued a press release announcing the launch of a new energy car promotion center in Beijing. The mission of the center is to exhibit the history of pure electric cars and explain the country's preferential policies, as well as the distribution of recharging stations. Curbing Cars, The New eBook From Forbes Curbing Cars: America’s Independence From The Auto Industry investigates why a growing number of Americans are giving up their cars. This illuminating account of our changing automotive habits is available now for download. Although the announcement itself was nothing exciting, it hints at something more interesting that comes up if you talk to ordinary drivers. By chance, I chatted with a friend who works in the automobile industry in Guangzhou, surnamed Liu, about the general acceptability of pure electric cars in China now. The first word he threw to me is "trouble." He added, "Many Chinese think like that." Regardless of the price of these cars, there are still a lot of pragmatic questions that have yet to be answered, he said. From recharging to durability to safety, carmakers have not yet provided much information on how to drive a pure electric car (be it a model from BYD or Tesla) like driving Toyota’s Corolla, Ford’s Focus or General Motor’s Buick. No one knows how long the transformation from petrol-engine cars to pure electric wheelers will eventually take place, but if the transition from pickup trucks to small family cars is any indication – we could be looking at about 20 years for the same dynamics to unfold. Detroit Electric car charging (Photo credit: Wikipedia) Liu, whose family just welcomed a baby girl, is a holiday driver. Driving his own car for work would only consume most of his commute time to sit in traffic. He said buying a pure electric car is not even a topic among his circle of friends. For ordinary car-owners, it boils down to practicality. Liu explained that whenever he or his friends think of the new technology that might expose them to some unresolved issues or hang-ups, they will immediately drop the idea out of hand no matter how appealing the underlying technology may be. 
Since Musk and Tesla started to garner so much attention from the Chinese media, many Chinese are like bystanders waiting to see if this American car firm can eventually succeed. But before that happens, innovators like Musk and his team may find that overcoming ordinary consumers' resistance will be a lot harder than building a few more charging stations. Follow me on Twitter @WongKandy and on Forbes.
bfb7929a1dbf5be594adc05c67dff2bc
https://www.forbes.com/sites/karagoldin/2018/03/22/why-entrepreneurs-should-never-stop-being-curious/
Why Entrepreneurs Should Never Stop Being Curious
Why Entrepreneurs Should Never Stop Being Curious Over the past year, “fake news” has become one of the most urgent issues we face. The deluge of lies, manipulations and political spin clogging up our social feeds has impacted election results, weakened our institutions and created deep rifts in society. Yet the struggle to distinguish between truth and “alternative facts” is not new. During the 1960s, the Sugar Research Foundation sponsored a report that cast doubt on multiple studies linking sugar to heart disease. It also cut funding for its own research that was yielding unfavorable results. These days we are more aware of sugar’s role in a range of health problems. But vested interests from large corporations continue to seek out alternative facts. It is our responsibility as consumers to look behind the headlines and the hype and determine what truly is in our best interest. How I learned to be more curious When I was struggling to lose the weight that I had gained after having my third child, I decided to stop drinking the diet sodas I consumed daily. Instead, I started making my own flavored water drinks with just fruit and the impact on my weight and overall health was almost immediate. This experience taught me to question a lot of the assumptions I had about the products my family was consuming. Just because something says “diet,” does that mean it’s good for me? If a product touts added nutritional benefits or less fat, what are the other ingredients? I started turning around all my labels and really looking at what was inside the package, not just what the package claimed it would do for me. Continually questioning the status quo is a common trait of successful entrepreneurs. Startups are often the result of a founder’s curiosity and willingness to look past accepted wisdom to find a fresh solution to an existing problem. Paying a late fee on movie returns was standard practice until Netflix turned that industry upside-down. Uber began after its founders wondered why it was so hard to get a taxi. After I suffered from a series of skin complaints, I discovered that oxybenzone – a common ingredient in sunscreens – could be the problem. I did some research and found that, despite a number of studies linking the chemical to allergic reactions and other harmful side-effects, oxybenzone remains FDA-approved thanks to industry lobbying. As a result, my company developed an oxybenzone-free sunscreen that was also enjoyable to wear. This could be seen as an unusual move for a business that started out making beverages. But my mission is to make America healthier. That means always asking whether the things we put in and on our bodies are actually good for us. And if not, what are we going to do about it? Turning curiosity into action If you’re a budding entrepreneur, being curious and asking tough questions is a good start. But the key trait shared by all successful founders is a willingness to disrupt established industries. When Anita Roddick founded The Body Shop, her goal was to bring an ethical approach to the cosmetics industry. She only used natural ingredients in her products and recycled all packaging. The Body Shop was one of the first cosmetic companies to stop using ingredients that had been tested on animals.
Roddick also questioned the false claims made by other products and the idealized version of beauty they promoted, saying in her book, Business as Unusual: “We sell cosmetics with the minimum of hype and packaging and promote health rather than glamour, reality rather than the dubious promise of instant rejuvenation.” The Body Shop tapped into growing ethical concerns among consumers to become a global success and the brand has influenced an entire generation of natural cosmetic products. When I started a company in the beverage industry with no prior experience, industry veterans were quick to tell me how things have always been done and that I wouldn’t succeed doing things my way. When I asked the simple question, “why?” I was often met with blank stares. Accept advice, but always ask why and find your own answers. After fake news stops being a trending topic, I hope genuinely curious people will continue to question what they are told. You need this kind of curiosity to be an entrepreneur. But you have to combine it with a refusal to accept that things “have always been that way” and the drive to create new products or services that solve real problems.
d672d8a8b4312d338c1b08acc5629811
https://www.forbes.com/sites/karaladd/2019/02/08/human-design-erin-claire-jones/?sh=46417270586e
Inside The Art Of Human Design With Erin Claire Jones
Inside The Art Of Human Design With Erin Claire Jones The prevailing mindfulness movement is much more than meets the eye. Rather, it is the self-discovery of “I”: your path, your purpose and everything in between. Sure, you can do yoga, meditate and drink medicinal mushroom tonics as much as the next wellness junkie, but when it comes to introspection, the art of human design is the modality of the moment. I sat down with expert Erin Claire, acclaimed Human Design Guide and Leadership Coach based out of New York City’s high-vibe coworking enclave, The Assemblage, to learn more about what this artistic diagram and self-development medium is all about. What is human design? Consider human design your manual to... well, you. Founded by media executive and art enthusiast Ra Uru Hu, the ancient-meets-modern science is a synthesis of spiritual studies including, but not limited to: astrology, the I Ching, the Chakras, and the Kabbalah. Hu first found the system in 1987 in the heart of Ibiza, but it is studied worldwide today. “Human design gives us information about ourselves that we can’t access anywhere else...It’s about how we are each uniquely wired,” says Claire. She further defines it on her website as, “A system that sheds light on your emotional, psychological and energetic makeup, giving you the self-awareness and tools to align with your nature and step into your highest potential in every area of your life.” How it works. Similar to your astrological birth chart, every person has an individualized human design chart that illustrates their distinct blueprint. Together, a myriad of shapes, lines, symbols, and rainbow colors create an artistic canvas of your human makeup. There are four traditional human types in human design — generators (70% of the population), projectors (20% of the population), manifestors (9% of the population) and reflectors (1% of the population). Every human being falls into one of these groups. Find out your human design type and read at length about what each type encompasses here. How it applies to your intrinsic personality. To clarify, your human design doesn’t definitively lay out your life path like a psychic reading. Rather, the art educates you on your aligned behaviors, patterns, and personality traits so that you can leverage them to achieve self-actualization. Most of us get stuck confined to a mold that society and our surroundings have likely conditioned us to create. “Other systems like Myers Briggs and Strengths Finder are based on answering questions on who we think we are or aspire to be...rather, human design looks at both the conscious and unconscious ways we operate,” explains Claire. It’s both an affirmational and transformational tool that allows us to understand the relationship between our intrinsic motivations and external actions. We can all relate: we live in a world of fleeting hyper-connection, juggling multiple screens at once and multitasking at the speed of light. Yet where is the self-introspection? Human design is a tool that enables us to slow down, reconnect, and study ourselves — screens aside. How it applies to your extrinsic life — personally and professionally.
As you dive into your human design chart, you will discover what makes you tick (and not) in every facet of your life — from your friends, family, and love life to your career, hobbies, and self-care routine. “Many clients expect the session to be esoteric and are blown away at how relevant the session is to their current lifestyle,” says Claire. The medium can help bring greater intention, fulfillment, and sustainability to every experience and relationship. “I’m most excited about bringing human design to leaders and business,” says Claire. “It bridges the (spiritual and non-spiritual) worlds because it’s so grounded and applicable,” she says. Erin Claire has positioned herself as a career-catalyzing innovator in the working world, framing the artistic modality in a digestible way for employees to leverage both individually and as a team. It is an initiative that many companies today lack, even as industries constantly adapt to new strategies to survive. Claire’s professional approach to human design seems to fill a clear human resource gap that the modern-day working world has been blind to all along. “I expected a lot of skepticism from companies, but most people are just eager to learn about themselves more so now than ever.” Who would've thunk that this human art diagram would spur so many ground-breaking ideas and techniques that are paving the way towards a more positive, aligned, and accomplished world? It's time to stop typing on your device, and start tapping in with human design. To learn more about human design, Erin Claire Jones and her offerings, visit erinclairejones.com.
7d28ff68c56e629fb828bec17b981f71
https://www.forbes.com/sites/karanmehandru/2020/03/16/coronavirus-new-normal-for-startups/
Coronavirus Changes Everything: Five Strategies To Help Your Startup Find Its 'New Normal'
Coronavirus Changes Everything: Five Strategies To Help Your Startup Find Its 'New Normal' Startup life is always frenzied, but over the last few days the frenzy has escalated to epic proportions. Not since the horrific events of 9/11 have I seen social and professional conversations so universally and completely monopolized by a single topic. Coronavirus is an economic and social sledgehammer that has descended upon all of us. I’ve spent the last few days on countless calls with portfolio and non-portfolio CEOs offering whatever help I can as they navigate what my peers at Sequoia termed a “black swan” event. Here are five strategies I’ve been suggesting to CEOs. Accept the problem. There’s no time for denial. When a disaster takes place, there’s always some percentage of the population reluctant to accept it. In the last 72 hours the founders I’ve spoken with have varied greatly in their level of acceptance. Some initially felt confident that their businesses would be insulated or might even see an uptick. Others started seeing negative ramifications to their businesses right away and have been adjusting appropriately. Over the span of a couple of days, they’ve all come to recognize coronavirus for what it is: an unexpected seismic force that will upend every element of society. While it’s true that some businesses will suffer more direct, severe and immediate impacts than others, none of us will emerge from this pandemic unscathed. The sooner startup leaders move past denial and start mitigating the scope of the impact, the better off they and their many stakeholders will be. Reassess business as unusual. I serve on 10 company boards. Each one had its 2020 operating plan approved in January or February. Today each of those plans must be re-evaluated. By nature, most startups (like humans) have a tendency to overestimate what they can accomplish in a year and underestimate how much they will accomplish in five. Right now, many startup operating plans overestimate top line growth while underestimating the pandemic’s impact. This double whammy must get baked into revised operating plans for 2020 and beyond. Remember that the risks of being underprepared far outweigh any downsides of overpreparation. Make your revised operating plans more conservative than you think is necessary. In other words, plan for the worst and hope for the best. Sales cycles will lengthen, and average deal sizes will shrink as both consumers and businesses become more reluctant to spend. If you sell to SMBs you will be forced to offer more lenient terms or risk losing them as customers altogether as they struggle to remain solvent. Companies selling physical products will see their supply chains disrupted, resumed and disrupted again as new outbreaks emerge. Startups relying heavily on in-person sales efforts will also take a hit, given travel restrictions and the likely adoption of remote work among their customers. These companies must master the art of digital selling. Calls and video conferences are imperfect replacements, so entirely new sales processes must be established. Even software companies that sell bottom-up through user-driven adoption will be impacted. We all will be. Plan accordingly. Spend as much time discussing the tactics of tomorrow and next week with your team and boards as you did discussing your strategy and plans for next quarter and year. Be the leader your team is craving right now.
Winston Churchill said, “Things are not always right because they are hard, but if they are right one must not mind if they are also hard.” It’s in moments of crisis that leaders reveal their true mettle. Nothing matters more than people. Show it. Start by mandating telecommuting. Now. Safety must always come first. If you can afford to keep paying contract workers whose jobs can’t be done remotely, do so. If you can afford to offer paid sick leave to contractors, do that, too. VCs often advise founders to raise more money than they anticipate needing just in case the unimaginable occurs. Well, it’s occurring. These are exactly the sorts of unexpected expenses intended for cash reserves. Rest assured that no decent investor will ever punish you for incurring costs from doing the right thing. Right now there’s systemic empathy built into the ecosystem because this crisis is happening everywhere in every country and across every business sector. The decisions you make now will always be judged in context. Do the right thing. Overcommunicate. Showcase understanding and empathy. Your employees are afraid right now. So are your customers, partners and vendors. Even if you are physically distant, be visibly and emotionally present--no easy task among suddenly distributed teams. Communicate and then communicate some more. Conduct company- and department-wide video conferences; aim for as many as four times your normal rate of stand-ups and all-hands. Replace assurances dependent upon your physical presence with frequent words of affirmation. Use emphatic language to communicate your support and enthusiasm over calls, texts and email. And be sure to show empathy and flexibility. That guy whose kids are now tele-schooling may not be as productive as before, but he’s trying his best. Same with the woman who was already caring for an elderly parent; if it was tough for her before, it’s exponentially more stressful now. Actively verbalize your understanding, and let them know you support them during this trying time--and always. Tighten belts humanely. Many startups won’t survive this recession, and many families across the world will face unprecedented economic and social pressures. Yet as the CEO of your company, it’s imperative for you to keep the moral high ground while fighting the pressures of slower sales or even insolvency. Make the tough decisions now to give yourself the runway to survive. If you’re leading a company in the enviable position of having enough cash to weather this storm, this is the time you dip into the balance sheet to help support the hundreds or thousands of families who continue to write code, support your customers and put out product releases while operating within makeshift “offices” set up in their garages or bedrooms. If you’re running a company in which cash is already a scarce asset, start by reducing hiring or at least time-shifting new hires to the second half of 2020 for now. Plan to resume hiring whenever the economy starts to stabilize. Next, move to cut discretionary costs that encourage only incremental growth. A lot of marketing and sales campaigns and events fall into this category. Engage every part of your team to find creative tactical responses to the pressures you’re facing as an organization. Finally, depending on your cash situation, you may have to consider layoffs. If you have to take that route, always be generous with severance and outplacement packages, and consider voluntary layoffs which enable people to leave on their own terms.
Don’t just survive. Thrive. The months ahead are going to be really freaking tough. But you’re going to be amazed at the innovation and creativity that come from scarcity and reframed perspectives. Since in-person meetings with sales people are no longer an option, more teams are going to hunker down and create products customers truly love. Workflow automation will take center stage for almost every company that’s selling products to enterprises or consumers. There will be spikes in usage across a lot of products and services, many of which will return to pre-COVID-19 levels once we’re on the other side. This period will also establish a “new normal” across a plethora of products and services and across brand new personas, which will pave the way for non-linear growth and the expansion of new markets. If you have the funding runway to do so, a recession is also a great time to make significant investments in core technology infrastructure and R&D, which can be hard to justify when times are good and that money can be efficiently spent on top-line growth. Many great companies have emerged from downturns stronger than ever. So have great ideas. In 1665 the University of Cambridge closed due to the Bubonic Plague, and its student Isaac Newton was forced to work at home for over a year. He used that time to work on mathematical problems which led to early calculus, and there, sometimes sitting under a now-famous apple tree, he developed his seminal theories on optics, gravity and motion. Maybe your company’s own “apple falling” moment lies in the months ahead. Coronavirus isn’t something we’d have wished on anyone, but now that we’re here, let’s use this as an opportunity to hunker down, emphasize and rely on our own humanity, focus on what truly matters and see what magic emerges. But first, wash your hands with soap!
9ad575e62f662fc55fbc1ca01b5b7b1f
https://www.forbes.com/sites/karastiles/2017/10/11/three-tech-companies-share-their-flexible-work-perks/
Three Tech Companies Share Their Flexible Work Perks
Three Tech Companies Share Their Flexible Work Perks Many companies offer out-of-office perks to attract top talent and enable staff to recharge. According to the Society for Human Resource Management’s 2017 Employee Benefits Survey, 5% of US companies offer their employees paid sabbatical programs, 12% offer unpaid sabbatical programs and 22% offer paid time off for employees to volunteer. Here, leaders from three different companies share how time off keeps their teams engaged. 1. The company: Bionic The work: Bionic aims to help large companies trigger a “startup ecosystem” and introduces tools to help them grow and innovate. The perk: The “August from Anywhere” program encourages staff members to work remotely for the month. What leadership says: “This was something that was intrinsic to the fabric of the founders, the culture of Bionic and what works for our cycle. So if you're another CEO, you have to figure out what's organic to your company. Maybe it's four-day weekends every month. Maybe it's summer Fridays. Figure out what that thing is that works for your business, but is also natural and valued by the people that you're with.” —Christina Wallace, VP of Growth 2. The company: VMware The work: VMware builds virtualization software and cloud computing technology. The perk: After five years of service, employees can use the company’s “Take 3” benefit and opt for three months outside their current role to pursue a self-designed professional project. What leadership says: “Keeping people connected and at the same time allowing them time to refresh and revive themselves is super important, especially in the technology world. It's so fiercely competitive and people work hard, so we are looking for ways that our employees would want to stay.” —Richard Lang, Senior VP of HR 3. The company: Zillow Group The work: Zillow Group runs an online real estate marketplace that includes Zillow, Trulia, StreetEasy and other property databases. The perk: The “Recharge & Reboot” program allows staff members to opt for a six-week sabbatical—three weeks paid, three weeks unpaid—after six years with the company. What leadership says: “Zillow Group believes the best employees are engaged employees. There are a lot of different ways that you can engage them, but in thinking about their connection to the world that's important to them—their families, their communities—you can have a significant impact on employees’ morale. If you say your employees are your most valuable assets, really make sure that you have policies that support them in their most valuable pursuit, which is managing their own personal life." —Dan Spaulding, Chief People Officer
82f2b9d53fe62ef71f58885278cb373a
https://www.forbes.com/sites/kareanderson/2012/08/31/the-secret-to-staying-sought-after/
The Secret to Staying Sought-After
The Secret to Staying Sought-After Do some people stop listening before you stop talking? Consider this: one symptom of stress is that it is literally harder to hear other people. And anyone who says they don’t feel fearful sometimes in the face of this wildly uncertain economy is in deep denial. That’s a signal to savvy, caring people who want to stay sought-after. Learn exactly how to listen sooner, deeper and longer instead of talking at others, as some research shows we are increasingly doing. Stand out by clearly listening in these specific ways. 1. Practice Connective Listening Only then can we possibly discover which problem keeps our customers, colleagues and friends awake nights. And solving one of their hottest concerns or serving one of their biggest dreams is the most thoughtful and promising way to deepen their loyalty and trust, a two-way street. After all, why should they make you a priority if you, with haphazard attention to them, seem to make them simply an option? And, in this increasingly complex yet connected world, you can’t be an expert at everything. You need diverse friends and allies who care enough to provide insight and contacts not only when you ask, but also before you even know you need help or recognize that they can help you. Proactive noticing and listening and pertinent support provide your surest path to earning that kind of trust and loyalty. That’s why this brief primer on connective listening may help you stay relevant and sought-after. 2. Sequence Your Suggestions When someone is telling you something it often reminds you of a similar story, and you prepare yourself to respond. Don’t. Keep listening. And when they are done with their point, follow up on it. Don’t revert the conversation back to you. Just listen further, deeper. Then you will have a better idea of the right bits of relevant information to offer, in the order they most want to hear them. Thus you collaborate with them toward buying. 3. Are You a Deep Listener, Out-Talker or Somewhere in Between? One way to recognize whether your approach to swaying others is that of a thoughtful listener or a conversation monopolizer is to debrief with yourself right after your next interaction in which you wanted to influence someone. Hint: Who did most of the talking? As in fishing, until you find the hook that most grabs their attention so they want to know more, it is highly likely that you won’t connect and they will get away. This is true, by the way, with someone you love, dislike or just met. 4. Ask What May Be the Most Valuable Question You Can Ever Ask When you want to get closer to someone, probe deeper into that individual’s underlying interest or concern, and demonstrate that you care, ask this deceptively simple follow-up question: “Tell me more about that.” 5. In Some Ways, Mimic a Popular Child As infants most of us were rewarded with wide smiles and warm voices when we talked. Later we enjoyed more reinforcement for talking as we learned to read. Beginning in kindergarten, we were rewarded for sitting still and being quiet. Yet even when we do, we aren’t trained to listen; we are simply expected to know how. As we grow older we may hunger to be heard and understood yet never learn to listen. We talk until others go on a mental vacation and then physically leave. “It is the province of knowledge to speak and it is the privilege of wisdom to listen.” ~ Oliver Wendell Holmes In this increasingly connected yet complex economy, competition can hit faster and from more places.
That’s all the more reason to listen closely to diverse people. You’ll be better able to serve your customers and to identify valuable allies with whom you can generate standout value in your mutual market – perhaps becoming the top-of-mind choice.

6. Triangle Your Way to Bonding Forge the surest, deepest connection by speaking to the strongest sweet spot of mutual benefit. Now that you’ve listened deeply and have a sense of something that’s important to them, take three steps to triangle closer to them:
Step 1 (You – addressing the other person): Speak specifically to their strong interest.
Step 2 (Us): Indicate that you share that interest.
Step 3 (Referring to yourself): Describe why it matters to you.
See more about the power of Triangling in a book I wrote years ago, called Getting What You Want, which is, ironically, not the title I wanted. Then you both are primed to continue a conversation about how you can best connect and go farther together around that shared interest. It may be a matter of selling, cross-referring, co-creating, cross-consulting, mutual mentoring or another form of collaboration. In business that can lead to profitable partnerships with complementary companies that serve the same kind of customers as you. Collaborating with other businesses in this way is often the most credible and cost-effective way to stand out from your competition – a priceless possibility in this bad economy. “If speaking is silver, then listening is gold.” ~ Turkish Proverb

7. 15 More Ways to Increase the Impact of Your Listening-to-Connect To increase your chances of strengthening connections in more interactions, consider these pointers:
1. Control outside interruptions and distractions.
2. Where possible, meet in a place that is not noisy, where seats are comfortable and where you can sit at a right angle to them, “sidling,” rather than across from them.
3. Avoid patterned shirts, blouses or other distracting clothing, especially on the upper half of your body.
4. Get your whole body involved in listening and show that you are paying attention. Look the person squarely in the eye most of the time, using facial expressions and other non-verbal cues to show that you hear and understand what she is saying.
5. Open your eyes, mind and ears to be truly receptive to the messages the other person presents – both by what they say and what they avoid saying. Begin listening from the very first word and give the person your undivided attention.
6. Lean slightly towards them, look them directly in the eye, nod sometimes and do not fidget. Avoid frequent rapid movements of your arms or legs. You are demonstrating your attention – making the other person the center of attention.
7. Focus on what the person is saying right now. Avoid trying to figure out what she is going to say; you may miss what she actually means.
8. Don’t interrupt. It sends the message that your views are more important than theirs.
9. Confirm your understanding of what they said, using their words. Don’t paraphrase.
10. Ask follow-up questions to clarify and to glean the specific benefits they seek or the problems they want to solve.
11. Take notes. It demonstrates interest and respect and enables you to recall exactly what was said. When you take notes you triple the amount you remember – even if you do not look at them later.
12. Be direct in answering questions. First answer. Then elaborate – not the reverse, which is considerably more common. Don’t give qualifiers and background before answering.
That’s underbrush they must wade through. You will seem evasive or thoughtless or both.
13. Remain genial and receptive. Do not react negatively – even and especially to highly charged words and tones. Hear the person out, then respond. Don’t change the topic. Most people will cool down and begin to talk calmly once they vent their anger and frustrations and feel heard.
14. When the other person gets more intense – negatively or positively – she is discussing what most matters to her. That’s your hook. Offer the specific benefit – the solution to that point – to move her closer to you or the action you want her to take. “Every moment counts, and that moment is lost if you’re not in that moment 100 percent.” ~ Tachi Yamada, M.D.
15. Look for connections between apparently isolated remarks. What’s the underlying theme, the hottest thing that most concerns them? “To truly listen is to risk being changed forever.” ~ Sakej Henderson

The bonus? The more strongly that person connects with you, the more likely they will emulate your behavior, tell others and extend your presence to their friends and the friends of their friends. “Listening is a magnetic and strange thing, a creative force. The friends who listen to us are the ones we move toward. When we are listened to, it creates us, makes us unfold and expand.” ~ Karl Menninger

If you’d like to discover more ways to bring out others’ better side so they see and support yours, consider reading: Quiet: The Power of Introverts in a World That Can’t Stop Talking; Moving From Me to We: Turn Your Life Into the Adventure Story You Were Meant to Live With Others; Crucial Conversations: Tools for Talking When the Stakes Are High; Being Wrong: Adventures in the Margin of Error; and Influencer: The Power to Change Anything.

Coming up in my future columns: The End of Men and the Rise of Women, Socialized, Get Satisfaction, Participant Media and Fremantle Media: Stories of Social Entertainment Convergence, The Wisdom of Psychopaths, Jeffrey Harmon’s video-as-key-biz-growth strategy, Timeless Truths About Lying, Talk Inc., Daring Greatly, Well Said!, Paid to Think, Weird Ideas That Work, Be the First All Online Business in Your Niche, and Jimmy Soni and Cato’s Lesson for a Divided America. You may have noticed an underlying theme of quotability and connectivity… or I hope that you have, as this is the ostensible theme of my column. I’d be delighted if you joined in the conversation with your examples and ideas by commenting here and via Twitter: @KareAnderson
https://www.forbes.com/sites/kareanderson/2012/09/12/insight-from-massive-social-experiment-could-sway-voting-spending-and-other-behavior/
Insight From Massive Social Experiment Could Sway Voting, Spending and Other Behavior
Insight From Massive Social Experiment Could Sway Voting, Spending and Other Behavior Within minutes of taking questions, Karl Rove and Howard Dean had a largely well-informed audience deep in the weeds of their conflicting, complex facts last Monday night at our Marin Speakers Series. We watched enthralled, attempting to keep up with their swift verbal combat. This was a meatier discussion than we usually get via media coverage and the truthiness-packed saturation TV ads that are blanketing the battleground states.

Our Biases Bond Us, for Good and for Bad The topics they covered are innately complex, from health care to the fiscal cliff. Two women popped up early with a banner calling Rove a war criminal and were promptly escorted out. Two other women vigorously nodded whenever Rove spoke and others clapped when Dean talked. As many social science studies show, we look for ideas that reinforce our biases and usually hang out with people who share our values and views. When we don’t have a strong view on something, we turn to people we trust to help us decide. Those notions are supported by the results, announced today, of a massive social experiment that will have far-reaching effects. The study showed that about one third of a million more Americans voted in 2010 because of one Facebook message on Election Day.

My Friend, Who Are You Voting For? The experiment involved 61 million Facebookers, and led to the creation of an algorithm that, if used by either political party, might tip the scales of this presidential election. Yes, that’s a staggering conclusion. The discovery may also alter target marketing, word-of-mouth and cause campaigns. It may even boost Facebook’s stock price. It will certainly affirm the inordinate value of our close friends, especially those we know both online and in the real world. Consequently our collective and individual behavior in all these spheres, online and in real life, may morph more swiftly as people tinker with the results of this study. Connected co-author and UC San Diego social scientist James Fowler and his colleagues, including Facebook research scientist Cameron Marlow, used the 2010 Congressional election for their massive “get out the vote” social engagement experiment, dividing participants into three groups:

Group One More than 60 million Facebook users saw a non-partisan “Today is Election Day” message at the top of their news feeds on Nov. 2, 2010, reminding them to vote. It included a:
• Clickable “I Voted” button
• Link to local polling places
• Counter displaying how many Facebook users had already reported voting
• (Most importantly) Up to six profile photos of users’ own Facebook friends who had clicked on that “I Voted” button.

Group Two About 600,000 people, or one percent of the group, were randomly assigned to see the same message but without the pictures of their Facebook friends.

Group Three Another 600,000 individuals received no Election Day message from Facebook. They were the control group.

What Social Action Actually Motivates Us to Act? Yet here’s where it gets really interesting:
• Users who received the message without photos of their friends voted at the same rates as those who saw no message at all. In other words, in this study, a social call to action, alone, had no effect.
• Those who saw the photos of friends in the message were more likely to vote. Even self-descriptions as liberal or conservative made no difference. Simply put, friends’ photos made all the difference.
Fowler concluded, “It’s not the ‘I Voted’ buttons, nor the lapel stickers that gets out the vote. It’s the person attached to it. Social influence made all the difference in political mobilization.” Consider the capacity to scale political, business or cause campaigns by asking your customers, fans or cause backers to reach out to their close friends, suggesting they also contact their friends. This 2010 experiment directly spurred an additional 60,000 people to vote, according to the researchers. Then contagion kicked in. The social contagion among friends, they say, yielded another 280,000 voters, for a total of 340,000. According to Fowler, “the social network yielded an additional four voters for every one voter that was directly mobilized.” Of course, a dire warning for Facebook and other social networks is the possibility that a social network effect can work in reverse.

Close Friends (and Their Friends) Most Move Us to Act Even more valuable than the fabled Kevin Bacon notion that we are all eventually connected through six degrees of separation is this discovery of the disproportionate power of the first two degrees of connection. Of course, like our tendency to go into a busy restaurant rather than a vacant one, we are swayed by the visible crowd, the number of people who have already done something. As Robert Cialdini found, that’s the power of social proof. Yet for attracting votes, customers or cause backers, Fowler’s experiment demonstrates how swiftly and certainly we can scale action through the involvement of “just” two degrees of separation. Imagine if either presidential campaign chose to reach out, via Facebook, to those who have already declared themselves avid backers, and used the algorithm (or created their own). While the message-per-friend amplification is small, scaling is huge. Multiply a small effect across millions of users and billions of online social network friendships and you can reach huge numbers of people.

We Respond Most to Those We Meet Online and in the Real World Now this is where it gets really interesting. The researchers also learned how to confirm close friendship links. They asked some users who their closest friends were. Then they measured how often they interacted on Facebook. In so doing they developed an algorithm by which they could predict with 80 percent accuracy which Facebook friends were also close friends “in real life.” Those close relationships accounted for virtually all the difference in voting. Imagine if either Obama or Romney’s team adapted this approach, beginning by reaching out to their known supporters on Facebook. Supporters could be invited to ask their ten closest friends to join them in clicking on a “We’ll Vote for (candidate’s name)” button, which would trigger images of their closest friends appearing with them on their Facebook Timeline, connected to that button. Or, even more wildly, what if both campaign teams hopped on what I’ll dub the “Two Degrees to Win” approach? That would be fascinating to follow. Since it’s estimated that a whopping $6 billion will be spent on this campaign battle, the extra amount to risk on such an experiment would probably be a drop in the bucket. Of course if some big corporate backers financed it, they would learn, first hand, how to hone the process. With that scale and speed of practice, they could then jumpstart their own appeals to customers to attract more customers through their friends and friends of their friends.
That could be a historic first, with all kinds of competitive, cultural and relationship implications for us as humans, consumers and business leaders. The power of our two-degree close relationships provides us with a fresh opportunity to accomplish greater things together than we can on our own.

An Audacious Update to Margaret Mead’s Famous Proclamation She famously wrote, “A small group of thoughtful people could change the world. Indeed, it's the only thing that ever has.” In fact, in our increasingly complex yet connected world, it gets ever better. Small groups of closely connected people can scale their world-changing idea faster and farther than ever before.

What’s Next from Fowler and His Colleagues? “The main driver of behavior change is not the message – it’s the vast social network,” concluded Fowler. “Whether we want to get out the vote or improve public health, we should not only focus on the direct effect of an intervention, but also on the indirect effect as it spreads from person to person to person.” Consequently Fowler is excited about the extraordinary Big Data gathering opportunity inherent in Facebook’s Timeline. With it he and other social scientists hope to understand more kinds of behavior, using that data for the greater good. His research covers topics as disparate as happiness, health, ideology and job hunting. Discover more about Fowler’s grand experiment in the cover story at Nature magazine. Learn more about social networks’ contagious capacity to move us to take action: How Much Will Social Media Really Affect the U.S. Election? by Frank Strong; How Facebook ‘Contagion’ Spreads by Sarah C.P. Williams; Social Networks and Happiness by Nicholas A. Christakis and James H. Fowler; INFOGRAPHIC: Breaking Down Romney’s And Obama’s Facebook Fans by Justin Lafferty
https://www.forbes.com/sites/kareanderson/2013/01/02/bring-out-their-best-side-to-be-happier-together/
Bring Out Their Best Side to be Happier Together
Bring Out Their Best Side to be Happier Together #One Tap the Little-Known Secret to First Impressions for Building Likeability Do people stop listening before you stop talking? Vala Afshar intuitively practices a little-known secret for attracting talented people as friends and colleagues. It’s an obvious truth, once stated. I saw it vividly demonstrated at Pivotcon. Via Twitter, I noticed how he specifically cited others’ insights and accomplishments. Yet it was only in seeing understated Afshar in the packed reception that I saw how people were drawn into his warm orbit. In the midst of this active crowd, with fast-paced conversations, he was able to bring out two essential parts of each person with whom he spoke. In his presence they exhibited their:
Strongest skill, tied to a passionate interest (Talent)
Most becoming side (Temperament)
How? Vala consistently made authentic, concrete references to the traits others most liked about themselves. Further, he asked the questions and follow-up questions that enabled them to display their remarkable knowledge and favorite ideas. Of course, they wanted to meet up with him again. Here’s the counter-intuitive secret that connective Vala was practicing: Our first instinct to like you (and want to be around you and help you) happens not from how we feel about you, but rather from how you make us feel when around you. From that good feeling about ourselves, in your presence, we project onto you the qualities that we most like and admire in others, even if you have not demonstrated that you have those admirable traits. The dangerous flip side is also true: If we don’t like the way we act when around you, we will see in you the traits we most dislike and fear in others. That Dislike Response happens quicker, is felt more intensely and lasts longer than the Like Response.
2 Hints:
1. Pivot to supporting their happiness right now: Whenever you notice that someone looks even slightly uncomfortable or otherwise unhappy when around you, make it your top task to turn the conversation to something they like in themselves, excel at or have a passion for doing.
2. Be smart about how you “help” others get better: People become more talented and happier when they have the opportunity to hone their skills rather than feeling forced to fix their weaknesses, according to research cited by Tom Rath, Donald O. Clifton and Marcus Buckingham.
This specific approach to meeting and re-meeting others encompasses the four traits that Tim Sanders recommended in The Likeability Factor – friendliness, relevance, empathy and realness – yet with the focus on bringing out those qualities in them first so they are more likely to see those traits in you.

#Two Avoid the Dangerous Relationship Downfall Ratio Every time you start to engage with a stranger, loved one or anyone in between, consider the magic ratio for making relationships strong. For marriages to thrive, that “magic ratio” is five positive couple interactions to every negative one, according to John Gottman’s famous research, and a marriage is bound for divorce when the ratio sinks to 1:1. Less stringently, Positivity author and researcher Barbara Fredrickson recommends a three-to-one ratio of positive to negative experiences. When your ratio with someone is sliding the wrong way, think of opportunities to boost it, perhaps using #One.
#Three Shift the Role You Play in Your Life Story and in Others’ You have plenty of opportunities to positively alter others' perceptions of themselves and of you, with this "bring out the best in them first" approach to your daily involvement with others. That’s because we experience some 20,000 individual moments in a waking day, some of them life-changing, even if most last just a few seconds, according to Nobel Prize-winning scientist and author of Thinking, Fast and Slow, Daniel Kahneman. In fact, one of the most toxic effects on our well-being is our belief in the inordinate importance of our successes and failures to the possibility of happiness in the future, according to Sonja Lyubomirsky in her book coming out tomorrow, The Myths of Happiness. Perhaps you can shake that belief, for yourself and for others, by creating situations in which you all get to use your best talents, working on a project that reflects a strong sweet spot of shared interest. In so doing you may play a different character role in the story that unfolds for you both and make the storyline more adventuresome and satisfying. As Lyubomirsky writes, “…Human beings are remarkably resilient, with the capacity to turn traumas into assets and bad experiences into growth experiences…” Like Stumbling on Happiness author Daniel Gilbert, Lyubomirsky believes we are not adept at foreseeing how happy we will be in the future. Yet we can become more adept at genuinely supporting others’ best side so they are more likely to see and support ours. In so doing we strengthen relationships and increase opportunities for shared happiness, accomplishments and a meaningful life. Uneven as our attempts will inevitably be, these seem like bountiful rewards, because we all yearn for them. As Gilbert wrote, “Our brain accepts what the eyes see and our eye looks for whatever our brain wants.” Why not reach out to one another to grow this resilience together, and perhaps grow a new, true friendship? "A true friend knows your weaknesses but shows you your strengths; feels your fears but fortifies your faith; sees your anxieties but frees your spirit; recognizes your disabilities but emphasizes your possibilities." ~ William Arthur Ward
https://www.forbes.com/sites/kareanderson/2013/01/13/how-lance-helps-us-avoid-our-temptation-to-lie/
How Lance Helps Us Avoid Our Temptation to Lie
How Lance Helps Us Avoid Our Temptation to Lie After nearly 15 years of vehement denials, Armstrong may own up, it is rumored. He promises he’ll answer Oprah’s interview questions “directly, honestly and candidly.” Yet the January 17th interview is already a very public, “social” and even global event that includes a newspaper ad suggesting the questions that Oprah should ask. He has a lot at risk. Like watching a kid actually pee in the pool rather than imagining how many people have, the stark reality of seeing Lance Armstrong admit to Oprah that he was doping, if he does, will hit hard. That’s what Dan Ariely’s research indicates. He’s the author of The (Honest) Truth About Dishonesty. When what was long rumored to be true becomes real, especially when we see someone admit it in real time, our feelings are more intensely felt and contagious. Here are some very human lessons we can learn, from Lance’s situation, about the slippery slope of deceit, alleged and otherwise.

1. Soon after you do some small thing wrong, beware of the stories you start telling yourself about it Sometimes not knowing something for sure, or rationalizing our behavior, is emotionally easier at first, yet may become a more destructive habit later, suggests Ariely. Like actually seeing a kid pee in the pool you are about to dive into, or believing the five-second rule about eating something you just dropped on the floor, like that warm chocolate chip cookie. Who knows what stories those JP Morgan Chase managers told themselves when the deception started at the bank? Did they feel safer when banks’ reputations were tanking and CEO Jamie Dimon actually got the best title a banker could get at the time, “the least-hated banker in America”? Notes Ariely, “Now we have about three billion dollars to prove the contrary.”

2. Our delusion deepens as our cheating does “We all want explanations for why we behave as we do and for the ways the world around us functions. Even when our feeble explanations have little to do with reality. We’re storytelling creatures by nature, and we tell ourselves story after story until we come up with an explanation that we like and that sounds reasonable enough to believe. And when the story portrays us in a more glowing and positive light, so much the better,” discovered Ariely via his experiments. Warning: Peter Guber, in Tell to Win, advises us to create “purposeful narratives” that inspire others to play a role in our story and, in so doing, reshape and share it. Yet that advice has a dark side when the storyteller has been successfully deceiving others with it and many have succumbed to the allure of playing an unwitting or unsavory part.

3. Fight the fudge factor Armstrong is charged with involving teammates and others in collectively organizing dope delivery and use, not with taking actual bribes. We are more tempted to be dishonest in situations where we can distance ourselves from the act. Writes Ariely, “the psychological distance between a dishonest act and its consequences creates a fudge factor of rationalization.” Thus we are more likely to take computer paper home from work than money from a petty cash box. In an experiment, more MIT dorm students stole food from the dorm refrigerator than cash. Ariely worries that this fudge factor will become a more widespread rationalization as we increasingly move towards a cashless culture. “As long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings,” writes Ariely.
He calls this “balancing act” the capacity to rationalize the Fudge Factor. In fact, “Most people, when directly confronted with proof that they are wrong, do not change their point of view or course of action but justify it even more tenaciously,” suggest Carol Tavris and Elliot Aronson in Mistakes Were Made (but Not by Me). “Self-justification has costs and benefits. By itself it's not necessarily a bad thing. It lets us sleep at night. Without it we would prolong the awful pangs of embarrassment. We would torture ourselves with regret over the road not taken or over how badly we navigated the road we did take. We would agonize in the aftermath of almost every decision... Yet mindless self-justification, like quicksand, can draw us deeper into disaster. It blocks our ability to even see our errors, let alone correct them. It distorts reality, keeping us from getting all the information we need and assessing issues clearly.”

4. When forced to give up a deception, become the (former) sinner who saves others Maybe Lance can pull off a Frank Abagnale pivot into a fresh chapter of his public life story. Abagnale turned his adventures as a clever con man into a Catch Me If You Can book that became a movie and Broadway play. He transformed himself into a highly paid speaker and consultant who “reveals how he learned to live on the right side of the law.” Of course he had to spend five years in prison first. Armstrong already started down that redemptive path, founding the popular foundation for cancer patients, even if he recently had to leave it behind.

5. A wild idea for Armstrong’s personal and brand redemption Perhaps Armstrong could join forces with Barry Bonds and Roger Clemens, who just got blackballed from the Hall of Fame. They could start down the path of re-branding themselves by ardently advocating measures to reduce the temptation for budding athletes to take performance-enhancing drugs – or otherwise cheat. That’s taking two lessons from Chris Christie’s unlikely-allies playbook: boldly, at the risk of losing some allies, putting your core constituency first, in part by forging problem-addressing, power-leveraging, perhaps temporary alliances with President Obama and Andrew Cuomo. Also, I’ll bet the much-respected co-authors of Help the Helper might find such a cause in keeping with the core message of their book: how to spur selfless behavior in support of a tight-knit pro athlete team and actually boost team performance in so doing. Along the way they could hone a new facet of their personality.

6. In a connected world it’s less likely you will get away with lying… for long It may take a long time yet, ultimately, the deception will catch up with you. The bigger the lie, the more viral the story, especially if you are a public person – and, increasingly, we all are. Here are two recent, powerful examples of what I’ll dub the Boomerang of Public Betrayal:

One: Dangerous Cuban missile crisis lie According to Benjamin Schwartz’s chilling story in The Atlantic, more than anyone else, President Kennedy brought us the closest to nuclear war, yet managed to make his role look heroic in setting up what was falsely dubbed “The Cuban missile crisis.” While leaders in many other countries were horrified at the depth of deception and the danger Kennedy and his brother sparked with that situation, most Americans thought he was protecting our country, until now. Our deception often causes collateral damage to others who willingly participate or feel forced to.
There’s often only a blurry line of difference, as they experience the cover-up choice. It must have been emotionally devastating for some of Kennedy’s closest confidantes to carry the cover-up to their graves. Their need to rationalize their roles must have metastasized, damaging other parts of their lives, rippling out into others’ lives.

Two: Story revived by a documentary Over 20 years after Anita Hill was subpoenaed to testify in a packed Senate Judiciary Committee hearing about then-Supreme Court nominee Clarence Thomas’s sexual harassment of her, the Sundance Film Festival has four sold-out screenings of “Anita,” a documentary about that time. That means a whole new generation will hear the story for the first time, as older generations and the sitting justice experience it again. A warning: the long tails of good and bad reputations are likely to get amplified over time with the inevitable cross-pollination between traditional media like films and new digital media. “The big secret in life is that there is no big secret. Whatever your goal, you can get there if you're willing to work.” ~ Oprah Winfrey

From Oprah’s affectionate interview of Lance in 2004: Oprah: Don't you hate to lose? Lance: Yes, but if you get beat but you didn't make any mistakes and your preparation was perfect, then you realize that someone else was just better. I think I can live with that.

Increasingly we live in glass houses in this ever more connected world, so throwing lifelines to others will get you farther than throwing stones at them. Most of all, remember, it is never too late to tell the truth to yourself and others. We always have the opportunity to:
• Recognize sooner that we are on a slippery slope towards greater dishonesty and alter our behavior
• Fess up to a secret to start on a redemptive path where some people, especially those most precious to us, may grow to trust us again
• Support others in saving face and self-correcting, especially as they continue to self-correct. It will probably help us self-correct sooner too.
https://www.forbes.com/sites/kareanderson/2013/06/16/find-a-festival-for-your-real-life-adventure-story-with-others/
Find A Festival For A Real-Life Adventure Story With Others
Find A Festival For A Real-Life Adventure Story With Others From the hypnotic Whirling Dervish to Holi, where India welcomes Spring, and the art of Burning Man, colorful festivals can be where we literally move together, learning about each other’s culture through ritualized dance and other play. That’s where Joie de Vivre Hotels founder, Emotional Equations author and avid world traveler, Chip Conley, is placing his next bet anyway. "The more virtual we get, the more ritual we need." 
~ Chip Conley

New Movement to Melt Boundaries Between People Around the World In fact he’s dedicating the next decade of his life to creating a community where we participate in at least one festival a year, and share our experiences. Pledge: “I want to become more culturally curious and will attend at least one festival this year.” His Fest300 launches on July 10th, and he’s recruited kindred spirits who share his belief in “profound travel.” Art Gimbel, his editor-at-large, for example, “stumbled upon a mountain village celebration in Guatemala near Lake Atitlan that changed his life,” writes Conley. The magic happened when Gimbel moved from observing locals to joining in their dancing. He feels that “these festivals are living museums of cultural history.” While, at age 52, Conley has already visited 55 countries, Gimbel, now 36, has been to more than 72. Already, in planning for Fest300, Conley became AFAR magazine’s festival correspondent. “The world is a book and those who do not travel read only one page.” ~ Augustine of Hippo

Where Will You Go Play? When Fest300 launches you can peruse a growing registry of festival descriptions and fill out a short profile to be matched to the festivals that may best match your interests. Membership is free. Each week, new festivals will be added until 270 have been covered; then Fest300 will crowdsource the last 30, inviting members to submit their favorites for consideration. Conley hopes to generate for others what sociologist Emile Durkheim called “collective effervescence”: “the positive experience of losing oneself in a group ecstatic moment.” From food to adventure and medical procedures, traveling to pursue a special interest has been an increasingly popular way to move past the observer role of “just” taking tours by bus or ship and to actually connect with a culture in a more personal, fun and meaningful way. I’m sure that those who flock to the Sturgis Road Rally in South Dakota are somewhat different than the 100 million who gather for Kumbh Mela, or those coming to China’s Harbin Ice & Snow Festival, yet you will probably find kindred spirits as you participate. I, for one, would like to see one of the 10 ligers that remain in the world. They are a cross between a lion and a tiger. Adventurers like Matt Harding and Chris Guillebeau have popularized the notion of travel for the pure joy of exploration and adventure. Conley wants us to have the opportunity to share adventures with those we meet at festivals. Perhaps you will discover a new interest or, like Matt, share one of yours with others, in a place very different from where you live. You may find that some of your most life-changing adventures come from experiencing different cultures first-hand, where people are celebrating together at unusual, perhaps even seemingly bizarre festivals in parts of the world where people don't look or act remotely like you, yet you discover universal yearnings and feelings of joy and camaraderie from the time together. “Travel is fatal to prejudice, bigotry, and narrow-mindedness.” ~ Mark Twain
https://www.forbes.com/sites/kareanderson/2013/07/28/how-shared-experiences-are-spurring-spending-loyalty-and-learning/
How Shared Experiences Are Spurring Spending, Loyalty And Learning
How Shared Experiences Are Spurring Spending, Loyalty And Learning What does the New England Patriots’ “rabid” fans’ active sharing of mid-game comments via immersive Wi-Fi have in common with Peabody hotels’ guests’ avid videoing and picture-snapping of the daily duck walk? Or parents standing in front of a large store wall of bewildering choices in children’s car seats, looking down at their free Car Seat Helper app from Phoenix Children’s Hospital, comforted in knowing they’ll be able to make a wise choice about their child’s safety?

Spur Sharing to Solidify Connections, Centered Around What You Offer Each of those situations spurs positive, shared experiences, shaped by organizations that seek to serve people better at the right time and in helpful ways. From the Gillette Stadium and sports team managers to the hotelier and hospital leaders, they all recognize that these multi-step moments boost involvement, loyalty and the bragging rights that bring other customers and stakeholders closer too. Instilling bragging rights is dubbed The Ultimate Moment of Truth by Brian Solis in his newest book, What’s the Future of Business: “It represents the experience that people share after using your product and engaging with your company over time. Blog posts, YouTube videos, reviews, each in their own way direct people to take their next steps accordingly.” “Because I helped to wind the clock, I come to hear it strike.”
~ William Butler Yeats

Solis sees this approach as especially vital now because, “The connected consumer can become a formidable foe or ally for any organization. As such, the proactive investment in positive experiences now represents a modern and potentially influential form of consumer marketing and service.” I heartily agree. I also believe that the most impactful, change-evoking experiences still happen in person, as experienced by the duck-watching hotel guests or the sports fans, sitting side by side, sharing with others near and far.

“High Touch” Is Still Needed in an Increasingly High-Tech World Way back in 1999 John Naisbitt and Douglas Philips described the “fabulous innovations and devastating consequences of technology’s saturation of American society” in their book, High Tech/High Touch, a topic Naisbitt first raised even earlier in his prescient 1982 book Megatrends. We have an unalterable need to “connect with the physical world,” they wrote, and with each other, I would add. As Solis says in several places, “technology is just a tool.” Our desire as social animals, regardless of temperament – to feel known, appreciated and valuable – provides the motivation for social sharing, selling – and learning. Dr. Atul Gawande would concur. He discovered that multiple, convivial in-person interactions are the most successful way to spur behavioral change. When writing about the slow adoption of life-saving medical measures, from anesthesia to hand-washing, the renowned doctor writes in The New Yorker, “In our era of electronic communications, we’ve come to expect that important innovations will spread quickly. Plenty do: think of in-vitro fertilization, genomics, and communications technologies themselves. But there’s an equally long list of vital innovations that have failed to catch on.” For example, he writes, “Every year, three hundred thousand mothers and more than six million children die around the time of birth, largely in poorer countries… Simple, lifesaving solutions have been known for decades. They just haven’t spread.” He goes on to describe how teaching and/or distributing how-to medical pamphlets rarely works, nor do attempts to provide financial or other incentives. The only approach that spurs behavior change, he found, is what Everett Rogers advocated: “Diffusion is essentially a social process through which people talking to people, spread an innovation.” Gawande cites how a pharmaceutical rep uses “the rule of seven touches,” face-to-face interactions, so a doctor comes to know and trust the rep. Despite the remarkable discovery that a simple solution of water, salt and sugar could save many of the victims of the deadly diarrheal disease cholera, it remained difficult, for over a decade, to get doctors and parents to use this cheap and easy-to-fix option. Yet, as Gawande recounts, a Bangladeshi non-profit, BRAC, succeeded in spurring its use in villages by hiring semi-literate individuals to go door-to-door and discuss it, using a “distilled” script with “seven easy-to-remember messages.” The number of villagers they got to adopt this practice went up further when the workers were paid, not by time spent, but by the amount of adoption. Then the workers began revising how they taught, just as Silicon Valley-based start-up experts would advocate, making what Peter Sims dubs “little bets” – experimenting along the way, in what Steve Blank calls a lean approach.
For example, actually “coaxing villagers to make the solution with their own hands and explain the message in their own words, while a trainer observed and guided them, achieved far more than any public-service ad or instructional video could have done,” writes Gawande. “In the era of the iPhone, Facebook, and Twitter,” writes Gawande, “we’ve become enamored of ideas that spread as effortlessly as ether. We want frictionless, ‘turnkey’ solutions to the major difficulties of the world – hunger, disease, and poverty. We prefer instructional videos to teachers, drones to troops, incentives to institutions.” I think that is too sweeping an indictment, as many of those deeply involved in advocating ways to change behavior, including to sell, are emphatic that technology is a tool that is only helpful if the people you serve think it’s helpful, as Solis suggests. It would be fascinating to see if even greater adoption could happen by blending the insights from Solis and others as connectivity via ever cheaper digital devices grows, even in more remote areas. Trainers could train other trainers, in person, using them. As well, trainers could use the devices in conversations with villagers. They might show images or short videos of peers and villagers using the methods and sharing their success stories. In so doing, they could provide authentic, credible bragging rights for those who train the workers, and for the workers and villagers. They too, as Solis would say, might have their Ultimate Moment of Truth that could motivate more people to share these life-saving procedures with others.

Storyboard A Sequence of Scenes That Others Can Experience And Want To Share Solis described to Tnooz reporter Sean O’Neil the customer-attracting power of building into the experience opportunities for gaining bragging rights that spur people to share it. He had just stayed in a James Bond-themed room in Seven, an Elegancia Hotel in Paris, that had “decor from ’70s era 007 films that was so amazingly convincing that it made me think that Roger Moore would greet me at the door. I shared pictures on all of my social media platforms—Instagram, Facebook, Twitter, Tumblr—and the response was stunning. It was as if I had posted a topless pic. It got more than 50 reactions.” Hint: Using apt analytics and direct observation, recognize how to serve your prime kind of customer better than the competition by building in ways they can share their experience with others, from scenic backdrops they want to be photographed in front of, to rituals they can share and tout, to handy souvenirs they can customize, share and take with them. Moment by moment, pull them closer. In his book Solis describes how businesses can benefit when they design experiences around what he dubs the four moments of truth: inception, tribulation, transformation, and realization. Most anything a company must do can be turned into a shared – and shareable – experience. For example, as Vocus’ Jessica Ann observes, the stars of a Delta safety video are the flight attendant – and the passengers… seeing the flight attendant not read from a brochure and instead tell a story. Passengers become part of what Peter Guber calls a “purposeful narrative” in which they can play a role. To complement this storytelling approach, like a movie director, storyboard the sequence of scenes they experience, from the opening moment to the climactic high point to the closing scene.
Ironically, the “scene” most often neglected by companies is the last one; called the “peak end,” it has the most impact on how we remember experiences. A five-day family vacation that starts with three days at an unexpectedly noisy, “under-remodeling” hotel, yet ends with a sunny day at the beach, is remembered positively.

Two Hints:
1. Let them see your genial face and give a memento just before they head out the door or click to something else online.
2. Reduce or eliminate the boring or negative moments and boost the number and impact of the positive touch points.

The Rise of Shareable Organizations Multiplies Shared Experiences And Start-Up Opportunities Going beyond the increased need for firms to stand out by crafting shared experiences, some foresaw years ago that organizations were springing up based on sharing. Some are local, community-based not-for-profits like Quantified Self, scaling globally in participation and innovation, with each Meetup chapter getting smarter as they learn from each other. Their success has fostered several kinds of ecosystems, from clothing to APIs and device-making companies and collaborations like Nike and TechStars’ Nike+ Accelerator. At the other end of the sharing continuum are disruptive, scalable for-profit companies, most famously the on-demand car service Uber, which has expanded to include boating, morphing into a lifestyle brand; and Airbnb, already sparking start-up variations, perhaps next by you. Tom Friedman officially declared the sharing economy a trend. Looking at the trends from the perspective of “crowd-powered institutions” and corporations, the Altimeter Group and others dub the trend the Collaborative Economy, with Jeremiah Owyang citing the threats and opportunities for them. Yet years ago, several individuals began covering the mostly grassroots, sharing-based organizations that were sprouting up. They described how some new modes of sharing strengthened neighborliness and provided fresh opportunities for individuals to collaborate, help each other and/or save or make money. The most touching, for me, are the ways many are surviving economically, in part, by renting out a room in their home, doing tasks or providing rides for others. Neal Gorenflo, for example, has been crowdsourcing coverage of such organizations at Shareable.net. In 2001, On the Commons magazine launched to cover collaborative ways of working. Recently they published the book All That We Share. Roo Rogers and Rachel Botsman, co-authors of What’s Mine is Yours, continue curating examples of Collaborative Consumption, including a directory of them. In her book, Mesh, and her TED talk, Lisa Gansky describes how businesses can be built to “provide people with goods and services at the exact moment they need them, without the burden and expense of owning them outright.” Her directory grows, and this year she launched her first global gathering. Sharing-based living and working methods, organizations and experiences will continue to morph, scale and be adopted in more countries and adapted to more situations, causing rippling disruptions to traditional economies as they do. Several are ripe for you to adapt to your own interest, situation, market, profession, or industry. Are you interested?
https://www.forbes.com/sites/kareanderson/2013/08/11/3000/
The Priceless Power Of Socially Empowered Employees
The Priceless Power Of Socially Empowered Employees Companies could capture a big missed opportunity to optimize their employees’ talent to burnish their brand – and boost esprit de corps. How? By facilitating tighter, smarter teamwork via apt use of social tools. In light of the unsettling Gallup report that “70% of Americans are unhappy and uninspired at work,” this approach should be a wake-up call for top management, suggest Cheryl and Mark Burgess in their new book, The Social Employee, a notion that Dan Pontefract has famously spearheaded at Telus. Yet, as Cheryl Burgess told Michael Brenner, Vice President of Global Marketing for SAP, “The current challenge facing businesses today is this: you can’t communicate externally unless you communicate internally… unfortunately, business culture over the last 30 years (or even longer) has tended to prize cutthroat competitiveness and information hoarding as workers attempted to climb over each other in order to get to the top.” From Burgess I gleaned these lessons on how other companies successfully supported employees in pulling together more swiftly and strongly using social software, then added some insights that occurred to me:

1. Tap the Wisdom of the Right Crowd First When the top executives at IBM decided to make a major move into social business in 2005, they recognized two things. First, they needed an inside-out approach so employees first learned to engage with each other in coordinated, congruent ways. Second, that a top-down declaration of how that should be accomplished was inherently opposite of how social works best. So Ethan McCarty, Director of Enterprise Social Strategy, set up a wiki to seek all employees’ input on apt guidelines for how they should engage with each other and with clients and others outside the firm. Such widespread, apt involvement of employees optimized that brand-building opportunity, rather than stalling it. When making a sweeping change in organizational culture, upfront:
• Ask employees for input
• Make it easy for them to participate
• Enable them to see and respond to each others' ideas
Seeing each others' ideas often spurs:
• Refinement of suggestions
• A cascade of better ideas
• Greater participation, sooner and more often
• Faster, just-in-time learning
• Greater awareness of each other’s talents and opinions
• Deeper commitment to a new initiative
• Greater esprit de corps
Burgess recounts how many employees began to see the resulting guidelines as their co-created Magna Carta Moment. The bonus benefit for IBM? As news spread of their initiative, she notes, “When other firms began exploring social computing guidelines for their employees, some turned to IBM.”

2. Optimize Your Employees’ Congruent, Collective Social Participation When Adobe’s leadership chose to establish consistent, company-wide use of social tools to speed growth, they recognized they needed employee buy-in. Many workers were already active, adept users of social tools in their work and personal lives. Others, not so much.
And, Burgess told me, “There was no consistency—different work teams used different tools, leading to silos and duplication of effort.” Three things enabled Adobe to harness the full power of their workers in becoming cross-functionally social employees:
• The firm adopted a Hub-and-Spoke model which, according to Burgess, “centralized resources, tools, and policies to ensure that internal and external branding efforts were congruent and fully leveraged across the organization.”
• Key stakeholders in every department were recruited to explain how this new model would help, not hamper, employees’ impact, using social tools.
• These stakeholders, according to Burgess, were also trained to describe the companywide change by citing how it would benefit employees in the specific situations in which they worked.
That crucial part was led by Maria Poveromo, Senior Director of Social Media and Public Relations. Burgess calls this approach “building a center of excellence around all stakeholders.”

3. Involve Employees in Spurring Social Sharing of Happy Customer Moments When your firm is already famous for fun, often funny and helpful employees, you have a leg up in sharing your brand popularity via social channels. Southwest Airlines’ leadership realized that they could radically increase the public’s awareness of many happy employee/customer moments if they supported their employees and customers in spreading the word about them in the social channels they already use. For example, Burgess told me, “When a flight attendant shared some Taylor Swift guitar picks with a traveling couple—picks she had received from Swift’s father on a previous flight—the couple sent an in-flight message to the airline, recognizing her for being the ‘most remarkably kind flight attendant’ and politely requesting that she be recognized for her efforts. At the gate after the flight, Southwest employees met Holly and the customer with a giant cookie and a sash, and the customers were glad to sign the impromptu contract promising to fly Southwest Airlines exclusively as long as Holly remained employed.” That story still has legs. The icing on the cake is that such stories are candy to traditional and new media. As we now all know, social employees can spur the spread of stories faster, more credibly and less expensively than paid advertising can, yet few firms fully harness that opportunity.

Questions to jump-start your firm’s brand burnishing by supporting employees in becoming more adeptly social on behalf of the firm:
• How many ways do you support your employees and other stakeholders in shining a light on positive moments and helpful tips related to your organization?
• How obvious and easy do you make it for others to pass along that good news, looking good as they do so? How many ways do you recognize and reward those who do?
• Does your company train your employees in becoming ambassadors of the company brand, and thus their personal brand, and build in ways they can become more adept, learning from each other? Does that training include ways to communicate-to-connect, build bonds and be more frequently quoted?
• Do you encourage employees to learn fresh ways to capture and share special moments via video, photos, Instagram, Pinterest and other tools as they emerge?
Discover More About The Priceless Power of Socially Empowered Employees For other great customer-centric insights into how to use social engagement to grow your company, consider reading Likeable Social Media by Dave Kerpen, Smart Business, Social Business by Michael Brito, and Social Business by Design by Dion Hinchcliffe. Plus peruse a free, idea-packed eBook, Defending and Enhancing Your Brand on Social Media, by Todd Wilms, Head of Social Business Strategy for SAP, and Bryan Kramer, President/CEO at PureMatter.
https://www.forbes.com/sites/kareanderson/2014/03/17/want-to-crack-the-humor-code/
Want To Crack The Humor Code?
Want To Crack The Humor Code? Want people to laugh with you, or look blankly at you and then look away? Dark humor, done right, may be key according to The Humor Code co-authors Peter McGraw and Joel Warner, who travelled the world in search of the answer. Can you suggest an unexpected, silly side of a familiar, embarrassing or even tragic situation? Then you’re evoking the “benign violation” theory of humor, the central premise of their book. They suggest that “humor arises when something seems wrong or threatening, but is simultaneously playful, safe or otherwise benign.” We are likely to laugh at a surprising conclusion. That unexpected twist at the end is also often true of self-deprecating humor. See these three examples I found:
1. Emblazoned on the T-shirt of a rotund man coming out of a San Diego beach shop: “The problem with the gene pool is that there is no lifeguard.”
2. After telling an audience that she’d watched “dog whisperer” Cesar Milan give advice, comedian Paula Poundstone said she learned that “when a dog is sniffing you, he’s gathering information.” She concluded that, “My dog is collecting an extensive dossier on me.”
3. “The time for action is past. Now is the time for senseless bickering,” Ashleigh Brilliant drolly concluded.

The Right Kind of Humor Bonds Us in Odd Situations The co-authors of The Humor Code barely knew each other when they decided to travel the world together to discover what made people laugh. Warner is a freelance writer and McGraw directs the Humor Research Lab at the University of Colorado Boulder. It could have felt embarrassing, winding up in a hotel room in Palestine’s West Bank where a transparent glass wall separated the bathroom and bedroom. Instead, Warner told me, “It was easy for us to make cracks about playing ‘guess the body part.’”

Adopt a Secret of Successful Stand-Up Comedians Be seen as an intriguing outsider. “There’s a reason why minorities—Jewish-Americans, African-Americans, Muslim-Americans—have long flourished in the stand-up scene. Many of the best comics are outsiders, by circumstance or by choice,” observed Warner. He added, “Chris Rock, for example, grew up in a working-class section of Brooklyn, but was bussed to predominantly white schools. That made him an outsider in both places, a painful situation for a young kid but a great state of affairs for a future stand-up icon.”

Take Your Humor To The Edge Yet Not Over The Top When the co-authors asked people in Sweden and Denmark about the horrific international fall-out from the unattractive, ostensibly funny cartoons of Mohammad published there in 2005 and 2007, they discovered that many still felt the trauma from the life threats and trade boycotts that ensued. As Nihad Hodzic, deputy head of the Danish organization Muslims in Dialogue, told the co-authors, for most Muslims the problem wasn’t Muslim prohibitions against depicting Mohammad, it was how it was done: “It would have had a totally different outcome if this had been a nice painting of Mohammad. I would not have been angry. But this was something that was clearly made to mock.” With my Danish ancestry I am especially saddened to see that some Danes are still tone deaf as to what messages would offend. Not surprisingly, Warner told me he learned that, “Humor can be dangerous stuff.
Cracking jokes has all sorts of beneficial effects, but when those jokes fail, they can have far-reaching consequences – especially today, when a newspaper cartoon can go viral, a quip in an e-mail can be forwarded around the office, a tweet can be heard around the world. Think hard about who the audience is, and most importantly, who’s the butt of the joke. Humor, after all, can be a form of attack – so who’s the target, and do they deserve it? Are you cracking wise to build bonds, lighten the mood, shed light on sensitive topics – or to just be mean?” As you undoubtedly know by now, there are many benefits to getting others to laugh with you, including likability and the capacity to dissolve tension or unify a group. Also, “women want funny guys,” and here’s why, according to research from the Stanford School of Medicine, of all places. You may laugh at the decision to launch their book on April Fool’s Day.
220c8478e0a79570b77f42f967d21add
https://www.forbes.com/sites/kareanderson/2014/10/22/set-the-context-that-fosters-conviviality-connection-and-collaboration/
Set the Context That Fosters Conviviality, Connection and Collaboration
Set the Context That Fosters Conviviality, Connection and Collaboration Want to become a sought-after, connective leader? Then become the glue that bonds others together around their most talented sides. Consider this approach. In experiments, psychologist David Trafimow and colleagues asked half of the study participants to think about how they were different from their friends and family, and asked the other half to consider how they were similar to their loved ones. They then asked participants to describe themselves. Those who were asked to think of similarities gave descriptions that included more relationships and roles than those who had thought about their differences, and that spurred them to feel closer to the people in their lives. To boost bonding among others so they are more apt to work (or play) well together, ask them, when together, to do two powerfully simple things that can be done rather quickly: 1. Write down the ways they are like each other. Hint: This creates a level playing field of communication. Writing rather than immediately sharing helps slow thinkers keep up with fast thinkers. Fast thinkers aren’t smarter, just different in their thinking processes, and each kind has advantages and pitfalls, so they can accomplish more together than when a majority in a group think and speak at the same speed. Hint: Salespeople are often fast thinkers. 2. Share with each other what they wrote, going around the circle, one by one. Bonus Benefit: Other studies show that when you reflect on how you are similar to those with whom you are talking, you pay more attention to them. You care about them more. That spurs the other person to listen more closely to you. What successful ghostwriter Bruce Kasanoff advises as a path to self-promotion “without being a jerk” is also central to bringing out someone’s better side: “Be generous and expert, trustworthy and clear, open-minded and adaptable, persistent and present.” Tip: Look past what’s “wrong” with others, and instead see what’s special about them in very pragmatic and actionable terms, suggests Kasanoff. “A true leader is not one you look up to because they are the best. A true leader is one that draws the best out in you.” ~ Anne Warfield
c3e739bbfda873cabb3ab3ec5bbf2124
https://www.forbes.com/sites/kareanderson/2015/05/20/saving-corporations-from-de-humanization/
Saving Corporations From De-Humanization
Saving Corporations From De-Humanization This post was co-written by Michael Lee Stallard, author of Connection Culture. Five of the world’s largest banks have pled guilty to an array of antitrust and fraud charges. Is anyone surprised by this news? Cutting legal and ethical corners in order to meet earnings expectations has become all too common among companies. Sadly, we have come to distrust most large corporations. Contributing to these negative attitudes are the unfavorable experiences many people have had working in large organizations. Gallup research shows that over the last decade around 70 percent of employees report they were not engaged at work. Culture is at the root of the problem. When we use the term “culture,” we are referring to the predominant attitudes, language and behavior of individuals in a group. In most companies today, the focus of attitudes, language and behavior is on task excellence and achieving results. This is necessary, but insufficient. Unless an organization’s culture also focuses on people and developing relationship excellence, the organization will experience managerial failure and decline. This is a major reason why Fortune 500 companies have an average lifespan of fewer than 50 years. It’s also why there is a backlash against globalization. People fear task- and results-driven organizations that are indifferent to the human costs of their actions. These fears are not misplaced. A world composed of such corporations stands on feet of clay that will one day crumble. Don’t get us wrong. The leaders who are moving the world in this direction are not necessarily bad people. The problem is one of focus. Leadership Myopia When leaders focus on task excellence and results alone, de-humanization naturally occurs. We stop thinking about serving the customer and focus instead on hitting our numbers in order to get our bonuses. We stop caring about the people who work in our companies unless they help us in return. Relationships become quid pro quo, and people become means to an end rather than human beings worthy of respect and basic human rights. It becomes easier to prioritize profits over people. Corners are cut here and there to meet quarterly earnings expectations. When corners are cut, people pay the price. When interest rates or securities prices are rigged in order to hit revenue and profit goals, investors’ savings are short-changed. When drug research is compromised to get regulatory approval, patients suffer. When medical benefits are cut to reduce costs, the health of employees and their family members declines. When business is shifted to suppliers who employ inhumane labor practices, human rights are violated. What’s out of sight is out of mind. Cultures of Human Connection We need leaders who create a “connection culture” in their companies. In a connection culture, people care about others and they care about their work because it benefits people. They invest time to develop healthy relationships within the organization and with customers and suppliers, and they reach out to help others in need rather than being indifferent to them. In this type of culture there is a sense of connection, community and unity that makes people feel included and energized, and that spurs productivity and innovation. Cultures that are low on human connection come in two types: “cultures of control” and “cultures of indifference.” (Take the nine-question Culture Quiz to see which culture you work in.)
In a culture of control, people with power, influence and status rule over others. As a result, others feel left out and fear making mistakes or taking risks. A culture of control is stifling. It kills innovation because most people are afraid to speak up. Working in this type of culture, you may feel micromanaged, unsafe, hyper-criticized and/or helpless. In a culture of indifference, people are so busy they fail to invest the time necessary to develop healthy, supportive relationships. With the push for results, cultures of indifference are predominant today, especially if leaders don’t see value in the relational nature of work. As a result, many people struggle with loneliness. Working in this type of culture, you may feel like a cog in a machine, unimportant, uncertain and/or invisible. Feeling consistently unsupported, left out or lonely takes its toll. People then lack the psychological resources to cope with the normal stress of modern organizational life and turn to unhealthy attitudes and behaviors, many of which are addictive and destructive. Both cultures of control and cultures of indifference sabotage individual and organizational performance. Three Core Elements of a Connection Culture In addition to focusing on task excellence and results, organizations need three core elements in a connection culture: One: Vision What we describe as “inspiring identity” exists in a culture when everyone is motivated by the mission, united by the values, and proud of the reputation. When people share a purpose or set of beliefs, it unites and motivates them. Vision produces shared identity. Two: Value “Human value” means that people are truly valued as individuals, not merely for what they produce. Value exists in a culture when everyone in an organization understands universal human needs, appreciates the unique contribution of each person, and helps them achieve their potential. Value produces shared empathy. Three: Voice This is “knowledge flow,” which happens when everyone in an organization is encouraged to seek the ideas and opinions of others, share their opinions honestly and safeguard relational connections. In a culture where Voice exists, decision-makers have the humility to know that they don’t have a monopoly on good ideas and that they need to seek and consider the opinions and ideas of others in order to make the best decisions. When people’s ideas and opinions are sought and considered, it helps meet the human needs for respect, recognition and belonging. “Being in the loop,” so to speak, makes people feel connected to their colleagues; being “out of the loop” makes people feel disconnected. Voice produces shared understanding. Diverse leaders have fostered connection cultures, including Bono of the rock band U2, Ed Catmull of Pixar and Disney Animation, Coach K of the Duke Men’s Basketball Team, Frances Hesselbein of the Girl Scouts, Ratan Tata of Tata, and CNO Admiral Vern Clark of the U.S. Navy, as Stallard discovered when writing Connection Culture. Connection Cultures Enable Companies to Become the Most Trusted Option Michael Stallard first met Chuck Schwab over lunch in Schwab’s office, shortly after becoming the chief marketing officer of a New York City-based company Schwab acquired. Schwab told Stallard that the Charles Schwab Corporation had a culture that was different from what he had experienced on Wall Street. Chuck wasn’t kidding.
During the meeting they discussed an advertising campaign created by Stallard’s predecessor, and Chuck asked if employees were proud of it; Stallard replied that they were not. Schwab said they needed to replace the agency that created the campaign. It was the first conversation Stallard had about advertising in which a CEO asked what employees thought. Later Stallard learned that The Charles Schwab Corporation’s mission was “to create the most useful and ethical financial products in the world.” Many large financial services firms focus on their own reputation, desiring to be known as the “most respected financial services firm in the world.” Schwab focuses on serving people, not achieving greater status. Chuck Schwab thus demonstrated that he valued his people. After the market downturn following September 11, Schwab had to reduce its number of employees to stem the red ink. This was gut-wrenching for Chuck. Employees who lost their jobs received generous severance pay, including access to funds for education, as well as stock options. Has any other company granted employees stock options on their way out? Chuck and his wife personally contributed $10 million to help cover the cost. Chuck gave people a voice, too. He was approachable. He traveled to meet Schwab employees, provide updates about the company, and hear and consider their opinions and ideas. He was generous about sharing information with all employees. He trusted them. (Most Wall Street leaders don’t trust employees enough to share strategic information; they want them to just focus on the work in front of them.) After interviewing and surveying more than four thousand people in diverse sectors, Stallard came to believe that, to restore trust in corporate America, we need more leaders like Chuck Schwab who are willing to create a connection culture that boosts integrity and trust inside and outside their organizations.
9fb28ac2bb675b2696a64d8336cc33b8
https://www.forbes.com/sites/karenclarkcole/2016/05/31/how-user-experience-is-revolutionizing-business/
How User Experience Is Revolutionizing Business
How User Experience Is Revolutionizing Business Kristina loves the user experience of her Starbucks coffee app and watches with delight as gold stars fill her cup. (Photo by Karen Clark Cole) Picked up wherever I am, dropped off wherever I want to go. And when I arrive, I bounce out of the car and wave goodbye. My ride feels like it was free because no money was exchanged and no credit card was pulled out. The whole transaction was handled by a slick, pretty pink app on my phone. What am I talking about? Lyft, right? Yes, but more importantly, I’m talking about UX, the user experience. UX in fact made this line of business possible by differentiating these ride-share startups from traditional taxi cab and radio car companies. But it’s not just new, disruptive companies for which UX is fundamental; UX is revolutionizing pretty much every business today. In fact, UX is fast becoming the defining strategy of every company. UX got its humble start in the late 1990s, when companies gunning for market share realized that a friendlier web interface gave them an edge over the competition. The bar was pretty low at the start; people would regularly send me websites accompanied by a note stating, ‘You need to fix this.’ But today, I can say that UX is a business requirement with the same certainty that I can say that climate change is real. No company can become or remain a leader if it doesn’t prioritize and put money behind creating a user experience that meets its customers’ needs and, more importantly, customer expectations. Creating a great website back in the ’90s was more a stroke of luck by the new class of web designers than the elegant science it has become today, which melds business and customer needs in an intuitive way. Take ordering and paying for my double tall Frappuccino with light caramel drizzle and non-fat milk on my phone at Starbucks (I actually drink drip, but I like this idea). It’s not a stroke of luck that Starbucks made one of the world’s leading apps. They started from the vision of extending their brand of friendly ‘inspiration-dispensing’ baristas. The design team spent months conducting user research, and then iterated on prototypes with extensive usability testing before launching. They created an app that does exactly what a well-designed user experience should – stay out of my way. But make no mistake. UX isn’t just the latest fad – it’s revolutionizing business. All business, including government, healthcare, and even the finance industry. In the last year, two major banks made the commitment to not only focus on UX to make their digital products better, but to transform the entire financial industry by prioritizing UX end-to-end. When the Spanish bank BBVA acquired a San Francisco UX firm, it said ‘Human-centered design is key to disruption in the finance industry and BBVA is developing the best end-to-end user experience across all channels.’ And it’s even changing manufacturing. ‘Digital offers a fundamental transformation of our business, as we create an industry-leading digital ecosystem that flows through all that we do.’ You’ve heard clichés like this before, right? Probably from some business whose stock in trade is information or media or some other virtual product. But what if I told you this line was written by a shoe company?
And not just any shoe company, but Nike, the powerhouse of athletic wear, which used the line to announce the hiring of its first-ever Chief Digital Officer. A shoe company that has realized the future of its business means more than just letting people order an eleven-and-a-half EEE free-running shoe with cobalt blue stripes. It means providing customers with a seamless, efficient and compelling experience, from designing to buying to sharing the experience of their shoes with other fans. UX. From start to finish. Even big data organizations like NASA have opportunities to innovate with UX. Scientists today download terabytes of free data weekly from NASA satellites. The problem is, these scientists spend upwards of 80% of their time parsing and segmenting the raw data, essentially turning it into information, before they can use it for analysis. In theory, if NASA could give these scientists the data in a more usable and useful format, advances in science and our understanding of Earth and space could potentially happen faster. The opportunities for great digital experiences are endless, but still today technology is too often a source of anxiety and frustration, even fear. What it should be doing is enriching our lives the way that Lyft and Starbucks do, by putting money into great design and meeting us where we are, getting us where we want to go and making technology human. Now that’s great UX. Uniting families by making technology human: my 90-year-old Grandma connects daily with her 7-year-old great-granddaughter living in another country, thanks to the great UX that Apple and Skype worked hard to create. (Photo by Karen Clark Cole) Please send me topic ideas and business or UX questions on Twitter @karenclarkcole.
2c50a0bd4376bd8dbd6515a917eee992
https://www.forbes.com/sites/karenhigginbottom/2014/06/24/how-to-support-transgender-employees-in-the-workplace/
How To Support Transgender Employees In The Workplace
How To Support Transgender Employees In The Workplace Many banks such as Goldman Sachs and Barclays have put a lot of effort into developing a culture that supports gay and lesbian employees. But one section of the LGBT (lesbian, gay, bisexual and transgender) community that is often overlooked is transgender employees. What are the barriers facing transgender employees in the workplace, and what are financial services firms doing to support them? Karen Higginbottom investigates… A transgender or trans employee is an employee whose gender identity or gender expression differs from the gender assumptions made about them when they were born, according to a Stonewall Scotland guide, “Changing for the Better: How to include trans people in your workplace”. Some transgender employees will have just started to undergo gender re-assignment (transition) to change the gender role in which they live to better reflect their gender identity; others will already have completed their gender re-assignment (transition) and are simply men or women who have trans histories. Trans employees are still reluctant to come ‘out’ in the workplace, according to findings from a report, ‘LGBT Diversity: Show me the business case’, to be published in full in September 2014 by Out Now, an LGBT marketing and research agency. The report found that 53% of trans individuals are not ‘out’ to anyone at work, while 35% were ‘out’ to everyone at work. There are many challenges facing transgender employees in the workplace, commented Ian Johnson, founder and chief executive of Out Now. “The first and perhaps most obvious is whether to reveal one’s trans status at work.” The 2012 sample for the Out Now Global LGBT 2020 study revealed that trans people feared that coming ‘out’ would hurt their career prospects, reported Johnson. “Almost one in two (46%) trans respondents feel that coming out will definitely or probably hurt their career prospects. Our data also shows that trans people prefer, where they can, to look for employers that have a well-promoted and implemented LGBT policy. There is often a disconnect between policies and practice on the shop floor, and only the latter can truly improve working conditions for LGBT people.” Indeed, the 2012 study revealed that 51% of trans people wouldn’t work for an employer without an LGBT staff policy in place. Unlike many transgender employees, Leslie* has found coming out as a transgender female in the workplace to be a positive experience. She is a senior partner at an accountancy firm in London and started living full-time as a female in May 2013. “I went to the managing partner in August 2012 and I met with each of the equity partners on a one-to-one basis to tell them,” she recalled. “I felt they needed to understand the background and the timetable, and it also gave me an opportunity to relocate if they felt uncomfortable.” The firm has been incredibly supportive and open-minded, added Leslie. “I told senior management less than two years ago that I was ‘trans’ and they have accepted me in a non-judgmental way.” Belonging to a networking group like Outstanding in Business has also been very helpful, remarked Leslie. “It’s allowed me to meet like-minded people in similar positions.” So what can employers, and specifically the financial services sector, do to support transgender employees in the workplace?
One of the most important things that can be done is to make LGBT policies visible, and visibly supported, to all staff, advised Johnson. “The financial services sector has in many ways traditionally had a laddish culture, which can be a tough environment for LGBT and especially trans people to do well in.” Management sponsorship is one very effective way of showing staff that a bank genuinely values the importance of LGBT staff feeling properly supported at work, commented Johnson. “The higher the manager that is engaged in championing the issue, the more effective can be the results of instigating real corporate change within the firms in the financial services industry,” he said. (See the Bank of America case study below.) Regular corporate benchmark audits are also important to effect change in an organization’s culture, added Johnson. “We’re working with a number of clients to compare their internal corporate results on the LGBT 2020 metrics with the national averages for the broad range of areas tested. It’s important to measure, benchmark and then to set improvement as a stated corporate objective if real progress is to be achieved on these issues.” Bank of America Merrill Lynch (BofAML) case study: For the financial services giant, management sponsorship of transgender issues is critical, explained Lauren Saunders, head of diversity and inclusion, EMEA, for BofAML. “Management’s role in supporting transgender employees through their transition is critical, as is senior leaders’ engagement in ensuring an organization has an open and inclusive environment where employees feel able to be themselves at work. Without the sponsorship of our employee networks and engagement on initiatives that support our LGBT colleagues, such as our Global Ally program, we face a huge risk of losing or not obtaining our industry’s top LGBT talent.” Supporting and educating managers in transgender issues is done through online ‘Trans 101’ training, where managers can hear from some of the bank’s transgender employees about their personal experience of being transgender. This is offered on a quarterly basis, explained Saunders. “They also share first-hand how they would like to discuss their transition, the questions they would be happy to answer and what experience they would like to have.” When an individual comes ‘out’ as transgender, they can go to HR or their line manager, commented Saunders. “Some individuals approach HR to understand the practicalities of their transition, their legal rights and the policies in place. Others choose to use our Employee Assistance Program, which offers specific counselling and a forum for confidential discussions. For example, a counsellor can advise them on how to approach their line manager. There is no fixed route – the process is very personal for each employee.” When an employee is undergoing gender re-assignment surgery, HR and the employee’s manager, together with the employee, will discuss the best approach. “The process and surgery can take a lengthy time, and HR supports employees through discussions about time needed off work and time off for surgery, and then supports the transgender employee’s return to work and integration back into the workplace in their new gender,” said Saunders. “It’s been our experience that most individuals are transitioning into their new gender prior to gender re-assignment surgery.” *Names have been changed to protect identities.
1696f362e04087f292a0dc410b9232d6
https://www.forbes.com/sites/karenhigginbottom/2014/09/24/ageism-and-sexism-still-rampant-in-the-city-and-wall-street/
Ageism And Sexism Still Rampant In The City And Wall Street
Ageism And Sexism Still Rampant In The City And Wall Street Age discrimination in financial services is nearly as big an issue as gender discrimination, according to a survey of finance professionals in both the US and UK by eFinancialCareers. Of the nearly 900 US-based respondents, 82% said they worked in a diverse environment, yet half of those respondents had experienced or witnessed some form of discrimination. The eFinancialCareers diversity survey reported that the most common types of discrimination were age and gender, followed by race. Perceived age discrimination was particularly prevalent on Wall Street. Of those aged 50 years and older, 60% have felt discouraged from applying to a position on the grounds of their age, and seven in 10 would be ‘very concerned’ their age would be a barrier in finding a new job, compared with just one in 10 of their colleagues aged 40 and younger. The 40-year mark proved significant, according to the report. Fear for job security more than triples, from 9% for those aged under 40 to 32% for those aged 41 to 50. Gender discrimination on Wall Street is still omnipresent. Only five in 10 bankers surveyed believe that women are equally represented at senior levels at their current employer. On a discouraging note, more than half the respondents believe the situation at senior levels will remain the same. Nearly a third of women surveyed reported being discriminated against because of their gender, compared with just 8% of men. The survey revealed that gender discrimination was felt in both pay and career progression on Wall Street: 61% of all respondents said that men were paid more than women in equivalent financial services roles. Yet nearly two-thirds of women believed that men were better at negotiating a raise or promotion. When the US survey respondents were asked what companies could do to correct these figures, they cited practices such as more flexible working arrangements and mentoring programs, followed by cultural change. Meanwhile, in the UK, age discrimination is nearly as common as gender discrimination in the financial services industry. The eFinancialCareers survey found that a quarter of all respondents report they have experienced or witnessed age discrimination at their current employer. This compares to 29% for gender discrimination. Of those aged between 41 and 50 years, 33% have felt discouraged from applying to a position on the grounds of their age. This rises to 55% among those aged over 51, and nearly six in 10 would be ‘very concerned’ their age would be an obstacle in finding a new job. Overall, more than two-thirds of finance professionals agree that older employees should be protected from age discrimination just as they are protected from discrimination on the basis of ethnic background, gender or religion. Gender discrimination is still ever-present, but the gender income gap appears to have decreased. Nearly three-quarters of UK-based finance professionals believe gender discrimination exists in financial services, but this increased to 86% among female respondents, of whom nearly a quarter said they had experienced it personally. Only 43% of respondents believe that women are equally represented at senior levels at their current employer. Even more disheartening is that 56% believe the situation will remain the same. The lack of optimism is not surprising considering that 76% of respondents say their company doesn’t have explicit gender diversity targets.
“Financial services need to shed the image of being a bastion of male dominance, so it’s essential for firms to have clear, established gender diversity policies in place to become a more attractive proposition for women to work in,” said James Bennett, managing director for eFinancialCareers.
d0552ac2a9e0c4b0559793a256c2f7e1
https://www.forbes.com/sites/karenhigginbottom/2014/10/02/more-women-on-the-board-means-higher-returns-for-firms/
More Women On The Board Means Higher Returns For Firms
More Women On The Board Means Higher Returns For Firms Companies with higher female participation at board level exhibit higher returns and payout ratios, according to a report by the Credit Suisse Research Institute. The CS Gender 3000: Women in Senior Management report tracked the gender mix among key senior management roles across company, industry and country lines, drawn from more than 3,000 companies and 28,000 senior managers across 40 countries and all major sectors. The findings revealed that female participation in top management roles, such as chief executive (CEO) and directors reporting to the CEO, stood at 12.9% at the end of 2013. Just 4% of CEOs in the report are female. Since the start of 2012, there has been a 5% outperformance on a sector-neutral basis by those companies with at least one woman on the board. A longer trend analysis showed a compound annual excess return since 2005 of 3.7%. The report evaluated the average financial metrics and found: Higher return on equity (ROE): Since 2005, the average sector-adjusted ROE of companies with at least one female board member has been 14.1%, compared to 11.2% for those with zero representation. When looking at top management and adjusting for any industry bias, companies with more than 15% of women in top management had a 2013 ROE of 14.7%, compared to 9.7% for those where women represent less than 10% of top management. Higher payout ratio: Companies that have at least one woman on the board of directors have seen an average payout ratio of 38% since 2005, versus 32% at companies with no female directors. There was less convincing evidence that women run more conservative business models: companies with less than 10% of women in top management showed a net debt-to-equity ratio of 35% at the end of 2013, versus 57% for companies with more than 15% of women in top management. There was no difference, though, when the report looked at board representation: average data since 2005 showed that net debt/equity where there is female board representation has been 47.7%, compared to 47.5% for male-only boards. On the surface, the news appears good for gender diversity, according to the CS Gender 3000 survey. Board diversity has increased in almost every country and every sector, progressing from 9.6% to nearly 12.7% at the end of 2013. However, more detailed analysis of the data revealed disappointing figures for women in top management positions. The research looked at companies comprising the FTSE 100 and Standard & Poor’s (S&P) 500 indices and found that male CEOs outnumber females by 20 to one, and UK male executive directors outnumber female executive directors by 10 to one. Although the overall representation of women in senior management positions is comparable with that of the board data, there was a notable contrast in the nature of the responsibilities held by the women. The report revealed that, worryingly, women in management were concentrated in roles not associated with influence and power, such as shared services and staff functions. In all the regions, women had significantly greater representation in shared services than in CEO or operational roles. These positions carry less influence and typically have less profit and loss responsibility.
The report cited Bloomberg data that 94% of S&P 500 CEOs held top operations positions immediately before ascending to the top job and warned that the relative scarcity of women overseeing product lines or entire business units risked slowing their advance to the very top.
bb0ca0d69f9cd4ee3bf8f23af66155b3
https://www.forbes.com/sites/karenhigginbottom/2018/10/08/a-lack-of-connection-in-a-digital-world/
A Lack Of Connection In A Digital World
A Lack Of Connection In A Digital World Despite the avalanche of digital connections, we’re not making the connections that matter, according to research by O.C. Tanner. Its 2018 Global Culture report shows that 42% of respondents don’t have a close friend at work. Not surprisingly, then, 46% of respondents reported feeling lonely. Gary Beckstrand, vice-president at O.C. Tanner, argues that when workplaces are built with intentionality and enable peer interaction, employees are 84% more likely to have a close friend at work, 92% more likely to feel optimistic about the future and 42% more likely to trust their team members. It also has an impact on their well-being: there is a 28% increase in an employee’s sense of well-being. The level of isolation experienced varies by generation, according to O.C. Tanner research: iGen (45%) and Millennials (35%) feel considerably more isolated than Baby Boomers (24%) and Gen X (25%). Beckstrand remarked that connection is only authentic if people know the real person. According to O.C. Tanner’s research, only 58% of employees say their team knows the “real me” and 58% have a close friend at work. “This one was a bit startling – we have asked this before in a slightly different way, as ‘best friend’ at work. I anticipated we would see a much higher percentage. I guess this is not as surprising as one would think, given that only 62% report that they take time to get to know their colleagues personally and only 53% report that colleagues take time to get to know them personally,” Beckstrand said. A culture where people feel disconnected at work doesn’t just hurt employees. The Center for Prevention and Health estimates mental illness and substance abuse caused by poor emotional well-being cost employers $79-$105 billion every year through reduced productivity, absenteeism and increased healthcare costs. Work is a social event, reflects Stuart Duff, partner at Pearn Kandola. “It’s a social coming together, and we neglect to think about this. We did a lot of research with Cisco where we looked at what happened when the organization becomes dispersed and fragmented. Who are the people who are more successful at working in a dispersed environment? We found that the people who are better at working away from the organization are much more extrovert. This is because extroverts seek much more social stimulation and talk to people, whereas introverts struggle when working away. When you’re more reserved, you can find yourself going days on end without talking to people.” In its 2018 Global Culture report, O.C. Tanner interviewed focus groups to get their thoughts on the importance of connection in the workplace. Three main themes emerged: 1. People want to connect with their “second family.” Employees are working harder and spending more time in the workplace. Most employees spend more time at work than they do at home with their families on a given day. Connections and relationships at work create a more positive work experience. That’s important when people work 8-12 hours a day, five days a week. 2. Connections promote a sense of belonging. Humans are social creatures. We innately want to interact with other people. Employees want to fit in and belong in their organization. They want to feel part of a team. A lack of connection leads to loneliness and uncertainty about whether or not the employee belongs at the organization. 3. Connections create a feeling of making a difference.
We hear over and over again that employees aren’t there just for the job. They want to feel a part of something bigger, contribute to a meaningful purpose, make a difference in the world. They want to strive for a common goal together, tackle challenges together, and find success.
315068123f7defcc2059c46553134452
https://www.forbes.com/sites/karenhua/2015/08/13/where-millennials-make-friends-and-mobilize-for-change/?sh=555e3a5a4a8f
Where Millennials Make Friends And Mobilize For Change
Where Millennials Make Friends And Mobilize For Change Add “making friends” to the growing roster of reasons why teens can’t seem to take their eyes off their screens. A new study, Teens, Technology and Friendships, released last week by Pew Research Center, shows that 57% of teens (ages 13-17) have met at least one new friend online, with nearly 30% making five or more pals. Girls meet people primarily through social media, while boys tend to make acquaintances through gaming or eSports. While the words “cyberbullying” and “online predator” are still chilling, this study reveals that today’s teenagers are forming deep, personal connections and relationships online. “Adults in our society and parents have this idea that a lot of things teens are doing and video games are frivolous, a waste of time,” says Amanda Lenhart, author of the study and associate director for research at Pew. “But these digital platforms are now incredibly important parts of how teens form meaningful relationships.” According to her findings, 72% of all teens spend time with friends on social media regularly. This behavior continues on college campuses across the country, and many online interactions have mobilized millennials into real-life communities. In fact, one in five teens have met an online friend in person; what would have been seen as a dangerous act years ago is now increasingly commonplace. From online to ‘IRL’ The Harry Potter Alliance has done just that. What started as a collective, fervent passion for J.K. Rowling’s franchise has transitioned from online to IRL (in real life), spawning dozens of college chapters across the country. Founded by Andrew Slack in 2005, the non-profit organization advocates for literacy and social awareness. Through the annual Accio Books campaign, chapters have donated almost 200,000 books to global disaster areas and local underprivileged communities. HPA’s Odds in Our Favor campaign uses the popular Hunger Games franchise to promote awareness of economic inequality. Janae Phillips stepped up in 2014 from her role as chapter organizer at the University of Arizona to become the national chapter director. She studied online communities as an ed-tech master’s student. Phillips describes HPA as a creative approach to activism, where members anywhere can reach out online for leadership advice and camaraderie, then mirror that back into their real-life chapters. “The reason why we’re able to develop (HPA’s) community is because we have this shared interest and passion and fandom,” she says. “It’s a symbiotic relationship.” “It’s unfortunate that the Internet has been around so long, yet is seen as so distinct,” Phillips continues. “The online space to (millennials) is just part of their daily lives. The connections to real life are offline, too.” “The more prevalent something is online, the more prevalent it will be in real life,” says Jessica Kotnour, a Kenyon College-bound student and HPA regional liaison. “We use Twitter to bring smaller causes to attention, and once you’re aware of something online, you can make that change in real life.” By using mainstream pop culture coupled with the megaphone of social media, the organization has a much wider reach to garner more support. Melissa Anelli, HPA board president and co-owner of Mischief Management, an organization that bridges online fandom to real-life connections, emphasizes how her community sprouted from the liberal and activist morals of the Harry Potter books and other cult franchises.
“The quality of a fandom with a shared interest goes back to the source material,” she says. “The HPA is ‘Dumbledore’s Army’ in the real world.” Similarly, John and Hank Green’s Nerdfighter community has spread from online fan groups (for John Green books and all things nerdy) to hundreds of global campuses that unite for community service. At the University of Michigan, avid online players of the League of Legends and Super Smash Bros. games have formed real-life groups as well. More than just a competitive team against other online players, Michigan’s League of Legends club emphasizes community, says club president and rising computer science senior Patrick Huang. He defines his club simply as a way to meet people with a common interest in eSports – and the online component only strengthens their bond. Huang met his first online friend at age 13, and after a series of video calls to verify authenticity and security, they’ve now met multiple times and are still friends to this day. He also met his current college roommate as an online gaming competitor before they met and clicked in person at school. “People nowadays spend a lot of time on Netflix,” Huang explains as an example. “But playing (video) games with friends is given a negative connotation, when it’s actually even more social.”
a23d8b5b56a7d180f96cce90ddb5a8ad
https://www.forbes.com/sites/karenhua/2016/07/13/top-earning-model-sean-opry-nyfw-taylor-swift-modeling/
Top-Earning Model Sean O'Pry on Fashion Week and Life After Modeling
Top-Earning Model Sean O'Pry on Fashion Week and Life After Modeling For a decade now, Sean O’Pry’s chiseled features and striking blue eyes have stared back from billboards, subways, and almost every notable magazine. In 2013, FORBES named him the world’s highest-paid model. He may be best known as the handsome, ill-fated suitor from Taylor Swift’s Blank Space music video, but he aims to create a name for himself that doesn’t need a celebrity face alongside for recognition. After 10 years in the modeling industry, O’Pry has experienced his fair share of international fashion weeks, including opening for Versace, Givenchy, and Balmain – and once even walking in 18 shows in a single season in Milan. Now he prefers brand campaigns over runways, his resume boasting Calvin Klein, Ralph Lauren, Giorgio Armani, Dolce & Gabbana, and Marc Jacobs, among others. O’Pry, who celebrated his 27th birthday last week, is considered a runway veteran at his age. In fact, he laughs about feeling too old for his industry: “Have you seen some of these new guys?” Quite fittingly, this season at New York Fashion Week: Men’s, O’Pry walked exclusively for a designer who relies on maturity to uphold the cosmopolitan aesthetic of his clothes. Joseph Abboud’s Spring/Summer ’17 show was his second in 15 years (after reclaiming his namesake label in 2013). The collection, inspired by the adventurous spirit of Ernest Hemingway, also featured older male models between their late 20s and mid-40s. “My whole career has been trying to make boys look like men. We want them to look sexy and appealing, but we don’t want them to be too boyish,” Abboud said. On the unconventional runway, set in a garden café with tropical trees and string lights, O’Pry entered in a pristine white three-piece linen suit that gave him an air of sophistication. A beige and ivory striped linen scarf lined the inside of his vest, and khaki print loafers neatly tied the look together. “The idea really is to show men how to use summer fabrics,” Abboud explained of his linen-centric collection. “As guys get older and mature, they have more character.” Even with a decade of industry wisdom, O’Pry admitted that he maintains the same pre-show routine: “I just try to pee before because I get nervous.” Over 10 years, he believes social media has shaped the modeling business the most. “Before, everyone saw your book and just requested to book you. Now they just go on your Instagram.” O’Pry himself has 570,000 Instagram followers and 94,000 on Twitter. The numbers aren’t insignificant, but they’re nowhere near the 2.4 million followers decade-younger male models like Lucky Blue Smith attract. O’Pry, who earned $1.5 million in 2013, believes that there may be an age “too old” for the business. For the sizeable salary O’Pry has earned over his career, he offered three key words of advice: “Invest it well.” “I collect cars and watches. I’ve invested in properties and businesses. I’ve been smart with my money since I was young,” he elaborated. O’Pry’s car collection, which includes a 1970 Chevelle, is housed back in Georgia. One of his luxurious hobbies is restoring cars, a fascination he developed eight years ago. “I’d always wanted an old car, so I looked on Craigslist, found one, drove all the way to South Carolina – it broke down four times. ’73 Bronco,” he laughed. In a May 2016 Instagram post, he wrote: “Thank you immensely @ferrariusa for having me a part of your rally to Carmel. @thefacinator and I won the race that didn't exist, and the Trophy is gorgeous.
(It doesn't exist as well) but we still arrived first, so there's that. @visitcalifornia #MyCaliforniaT #DreamBig” – a photo posted by Sean O'Pry (@seanopry55) on May 17, 2016. For the future, he confessed that he would love to be on a lip-sync show, but he refused to sing while in a dressing room of other men. Besides modeling, O’Pry has been keeping up his acting classes as a potential next step. He’s been a pretty face alongside Madonna and Katy Perry, but after Taylor Swift’s Blank Space, he jokes that his life ambition is for people to “stop asking me fucking Taylor Swift questions.” “I just want to be well-respected as an actor, not to just be ‘that guy,’” O’Pry said. O’Pry will soon appear in a small role in the season finale of Veep and in the upcoming film XOXO.