fe6d4640819a2d2c353e433acc2e65ed
https://www.forbes.com/sites/jvchamary/2020/05/29/dementia-gene-coronavirus-risk/?sh=477db240d6ad
A ‘Dementia Gene’ Doubles Your Risk Of Severe Covid-19
Elderly ill woman and healthy caregiver wearing face masks to protect against coronavirus. Getty
People aren't equally susceptible to coronavirus disease. We know relatively little about which factors influence the severity of symptoms you would experience if infected by the SARS-CoV-2 virus. The elderly are especially vulnerable, for example, but the reasons why are a lot more complex than a weaker immune system. Medical researchers have now identified one factor that influences your risk of severe disease: a variant of the gene 'APOE', which is associated with dementia. You normally inherit two copies of this 'dementia gene' (one from each parent), which comes in variants that include e3 and e4. Among individuals of European ancestry, the most common APOE genotype is two copies of the e3 variant, e3/e3, which isn't linked to dementia. But in around 1 in 36 'Europeans', both copies are e4 (you have the letter 'C' instead of 'T' in your DNA) and the gene produces a faulty APOE protein. This variant is a common genetic factor associated with late-onset Alzheimer's disease. According to the new study, people who carry APOE e4/e4 have twice the risk of developing severe Covid-19 (when compared to those with e3/e3), as discovered after scanning the UK Biobank, a database containing DNA sequences from half a million volunteers. I understand more than most about APOE after having my whole genome sequenced. Reading every letter in my DNA revealed that I have one copy of each APOE variant (my immediate ancestors originated from Asia via Africa). Note that if, like me, you carry the e3/e4 combination of APOE variants, it's unlikely you'll be at higher risk of severe Covid-19 because a working gene can often make enough protein to compensate for the other, faulty variant. By analogy, missing one functional gene isn't like a car you can't drive because it's missing a wheel, but like a broken headlamp — the working light will still let you see the road in the dark. The new analysis was published in the Journal of Gerontology: Medical Sciences and carried out by researchers at the University of Exeter in the UK and the University of Connecticut in the US. The two teams previously showed that those with dementia are three times more likely to suffer from severe Covid-19. That result could potentially be explained by older people having an increased chance of exposure to the virus in care homes, however. The new research suggests a genetic factor is also involved. Of the 382,188 participants in the Biobank with European ancestors, 9,022 (2.4%) carry the faulty e4/e4 genotype. And of the 721 individuals who tested positive for coronavirus, 37 (5.1%) also carry those genetic variants of APOE. APOE stands for 'apolipoprotein E' and, as the lipid part of its name implies, the gene produces a protein that enables our bodies to transport cholesterol and other kinds of fat through the bloodstream — which is why variants of this so-called 'dementia gene' are also associated with cardiovascular conditions. This highlights how faulty APOE genes can have knock-on effects beyond the blood and brain — including the body's response to Covid-19.
The study was led by epidemiologist David Melzer, who emphasised that the greater vulnerability to severe disease isn't just down to ageing — genetic predisposition also plays a part. "This high risk may not simply be due to the effects of dementia, advancing age or frailty, or exposure to the virus in care homes. The effect could be partly due to this underlying genetic change, which puts them at risk for both Covid-19 and dementia."
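As a rough check on the numbers quoted above, here is a minimal sketch (in Python) that recomputes the two APOE e4/e4 carrier frequencies and their ratio from the counts reported in the article. The crude enrichment it prints is only an illustration; it is not the study's adjusted estimate, which controls for age, sex and other covariates.

```python
# Counts quoted above: 382,188 participants of European ancestry, 9,022 e4/e4
# carriers, and 721 positive tests of which 37 were e4/e4 carriers.
carriers_total, participants_total = 9_022, 382_188
carriers_positive, positives_total = 37, 721

background_rate = carriers_total / participants_total       # ~2.4% of the cohort
rate_in_positives = carriers_positive / positives_total     # ~5.1% of positive tests

print(f"e4/e4 frequency in cohort:           {background_rate:.1%}")
print(f"e4/e4 frequency among positives:     {rate_in_positives:.1%}")
print(f"crude enrichment (ratio of the two): {rate_in_positives / background_rate:.1f}x")
```

Running this gives roughly a two-fold enrichment, in line with the doubled risk reported by the study.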
9e3b12814a74a2e94d141100edacd41b
https://www.forbes.com/sites/jvchamary/2020/06/29/light-coronavirus/
Light Kills Coronavirus. Here’s How It Could Help Beat Covid-19
Young surfer wearing sunglasses and mask on a beach. Getty
Sunshine is vital to your health. Most famously, skin cells use the energy in solar radiation to make Vitamin D, which enables the body to absorb calcium for healthy bones, helps fight depression and lowers your risk of cancer and cardiovascular conditions. Light also provides protection against disease, including Covid-19. Virologists at the National Institute of Allergy and Infectious Diseases found that, inside the laboratory, the SARS-CoV-2 coronavirus remains stable on everyday materials like metals, plastic and cardboard for several days. But it wouldn't last anywhere near as long outside. When researchers at the National Biodefense Analysis and Countermeasures Center exposed SARS-CoV-2 in simulated saliva to artificial sunlight (equivalent to a sunny day), 90% of viruses were inactivated within seven minutes. This result suggests that Coronavirus is less able to survive under the Sun's rays and that your risk of exposure is significantly lower in outdoor environments. As the researchers conclude, "these data indicate that natural sunlight may be effective as a disinfectant for contaminated nonporous materials." Sunlight could not only serve to sterilize surfaces and prevent infection, it's also a potential treatment via 'phototherapy', an approach that helped reduce the impact of the 1918 flu pandemic caused by H1N1 influenza A virus. Wavelengths of light beyond the blue end of the visible spectrum — between 200 and 300 nanometers, in the ultraviolet — are potent against many bacteria and some viruses. Ultraviolet light at 254nm is especially effective. Although known as UV-C or 'germicidal UV', it doesn't always kill germs — around that wavelength, the UV radiation is absorbed by genetic material, preventing genes from directing cells to produce proteins. In the case of a virus, damaged DNA or RNA might stop the viral parasite from replicating in its host cell. A computer model estimated that, during summer, inactivation of SARS-CoV-2 by UV-C in sunlight should be even faster than inactivation of influenza A.
Reduction of viral spread through UV-C light. García de Abajo et al (2020) ACS Nano.
Some scientists advocate using artificial UV-C in indoor spaces such as on public transportation, in elevators, workplaces and schools, and possibly for services like restaurants, which have a high customer turnover. This would help limit the spread of Coronavirus while allowing everyday life to get back to normal following lockdown. In principle it's a good idea, but what about the practical limitations? Direct germicidal UV-C light at 254nm is not safe for people, and can damage eyes and skin. Far-UV-C light (207–222nm), on the other hand, would kill germs without causing harm to human tissue. Physicists at the Center for Radiological Research at Columbia University recently showed that far-UV-C is effective against two relatives of SARS-CoV-2 that cause the common cold. Even at low doses, far-UV-C inactivates 99.9% of the viruses in 25 minutes. Whether it's artificial UV or natural sunshine then, the future looks bright for using light to help beat Covid-19.
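To make the inactivation figures above concrete, the sketch below assumes simple first-order (exponential) decay — an assumption of mine rather than a claim made by the studies — and extrapolates from the '90% in seven minutes' result to deeper levels of inactivation.

```python
import math

def time_to_reduction(fraction_remaining: float, t90_minutes: float) -> float:
    """Time to reach a given surviving fraction, given the time for a 90% reduction,
    assuming first-order (exponential) inactivation."""
    k = math.log(10) / t90_minutes             # decay rate implied by one log10 reduction
    return -math.log(fraction_remaining) / k

t90 = 7.0  # minutes for 90% inactivation in simulated sunlight (figure quoted above)
for surviving in (0.10, 0.01, 0.001):
    print(f"{1 - surviving:.1%} inactivated after ~{time_to_reduction(surviving, t90):.0f} min")
```

Under that assumption each extra 'log' of inactivation takes another seven minutes, so 99.9% would take roughly 21 minutes; real kinetics need not be exactly first-order.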
30c77c137585effed368603a8566d94d
https://www.forbes.com/sites/jvchamary/2020/06/30/face-mask-outdoors-coronavirus/?sh=56ef994a1822
Do I Really Need To Wear A Mask Outdoors? Here’s The Science
Young woman removing a face mask. Getty
It's arguably the most divisive personal choice of the pandemic era: Should you wear a face mask while outdoors to help stop Coronavirus from spreading? Although the main way SARS-CoV-2 coronavirus spreads from person to person wasn't initially certain, scientists now say the world should face the reality that the virus is transmitted through the air. Many believe that airborne transmission is actually the dominant route for the spread of Covid-19. Coronavirus is carried by droplets or aerosols that leave the body when a person coughs, sneezes, breathes or talks. As a consequence, the chances of catching or spreading the virus will obviously increase when you're close to other people. The odds should be higher in a confined area, where virus-laden particles will be circulating in the air. Following this reasoning, if there's no significant risk of transmitting the virus in an unconfined space, you might not need a mask. But is it really true that you're relatively safe from worrying about Covid-19 while out in the open? Almost all studies (that I'm aware of) have looked at the inside of buildings (detecting viruses in hospital rooms, for instance) and little research has compared indoor transmission to what happens outdoors. Some work is currently being peer-reviewed, however. An unpublished study of 110 Covid-19 cases in Japan is one exception. It tested individuals then used contact-tracing to follow up on secondary cases. The results showed that people are much more likely to catch Coronavirus indoors: the odds that a primary case would transmit the disease in a closed environment were 18.7 times greater than in the open air. In an unpublished review article on how the virus spreads in different settings, epidemiologists at the London School of Hygiene & Tropical Medicine summarized what we know: "The difference in transmission risk between households and larger communal settings is unclear, as is the difference between indoor and outdoor transmission." One factor that affects the relative risk of catching a virus is how far it flies. Some scientists distinguish between 'droplets' and 'aerosols' (using a cut-off size such as 0.005mm) partly because smaller particles should fly a greater distance and hang around for longer. One laboratory experiment found that airborne SARS-CoV-2 survived for three hours. But the artificial conditions of an indoor lab might not apply to outdoor locations. It's difficult to test how far the virus travels outside because experiments are ideally carried out in a closed system (effectively a box) so that scientists are able to control for potentially confounding factors that might affect the results — and there are plenty of those in an open environment. Do air currents help or hinder the virus? Wind is one environmental influence that could potentially make an airborne virus travel further. If you believe wearing a mask is a good idea — even when nobody is nearby — it may be because you worry about being 'downwind' of someone who's infected, yet out of sight, as air currents enable the virus to be spread over a wider range.
While the possibility that Coronavirus travels farther than your field of view isn't supported by evidence, a mathematical model does suggest that wind might drive cough droplets to fly over longer distances. The model, which used fluid dynamics (accounting for effects like condensation and evaporation) and consisted of two virtual humans, predicted that the influence of wind might allow airborne transmission over at least three metres (10 feet). That's beyond the 'six-foot rule' used for physical distancing, but it's still not that far. By contrast, some observations suggest that wind might help stop the spread. A study of weather in Singapore found some relationships between meteorological conditions and the number of Covid-19 cases and deaths: greater SARS-CoV-2 transmission was associated with higher temperatures and humidity, whereas wind speed and ventilation were associated with fewer cases and deaths. Although those associations might seem to suggest that windy weather helps limit the spread of disease, note that correlation does not imply causation. A negative correlation could also be caused by more people staying indoors when it's windy, for instance, increasing crowding and helping the virus to spread. The main reason why you're safer outside is ventilation. Inside closed environments like a building or vehicle, contaminated air will circulate. This is what happens in poorly-ventilated spaces and might explain why Covid-19 cases are so common on public transport and in nursing homes. Researchers in the Netherlands used lasers to track aerosols and measured their distribution, travel distance and velocity. By analyzing the droplets produced by coughs or speech from healthy volunteers, the scientists found that better ventilation substantially reduced the airborne time of respiratory droplets that can carry SARS-CoV-2, and concluded that "improving ventilation of public spaces will dilute and clear out potentially infectious aerosols." Ventilation in open environments allows virus particles to be diluted as currents continuously bring fresh air into a space. As Julian Tang, consultant virologist at the Leicester Royal Infirmary in the UK, has explained, "If the wind is blowing the virus towards you, there may be an increased risk of infection. But there will also be a massive dilution factor which will generally act to reduce the exposure even if the wind is blowing it in the right direction." More space means more time. Without a host to infect, viruses in the air or on surfaces are gradually destroyed by sunshine through a double-whammy of heat and light. Heat from solar radiation will cause a coronavirus to dry out, meaning it will lose its outer shield and the spike proteins that enable it to invade cells. A virus is like a burglar who uses a lock-pick to break into a building through a window: removing its shield and spikes is akin to stripping the burglar naked and taking away their tools. Light kills viruses by damaging genetic material, which is like leaving the burglar trapped in a factory that can manufacture clones that could then escape, but with only an incomplete instruction manual for how to operate the cell's machinery. Taken together, the science suggests that you probably don't need a face mask when going outdoors. But that statement comes with an important caveat: your decision should also depend on whether you're likely to be in close proximity to other people.
If you're walking through a rural area, a mask isn't necessary, but you might consider wearing one if you're in an urban park full of picnics and barbecues, where it's harder to avoid contact with potential asymptomatic cases of Covid-19. With knowledge of your local environment, you can then calculate the risk and make an informed decision on whether to wear a mask outdoors.
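For readers who want a feel for why ventilation matters so much, here is a deliberately simplified 'well-mixed box' sketch. Both the model and the air-change rates are illustrative assumptions of mine, not figures from the studies discussed above.

```python
import math

def remaining_fraction(ach: float, minutes: float) -> float:
    """Fraction of aerosol left in a well-mixed space after a given time,
    for a given air-change rate (ACH, air changes per hour)."""
    return math.exp(-ach * minutes / 60.0)

# Assumed air-change rates: a stuffy room, a ventilated room, and an
# outdoors-like exchange. Real rooms and outdoor air are far messier.
for ach in (0.5, 3, 10):
    print(f"ACH={ach:>4}: {remaining_fraction(ach, 30):.0%} of aerosol left after 30 minutes")
```

Even this crude model shows the effect the virologists describe: faster air exchange dilutes airborne particles from most of the starting concentration down to a few percent within half an hour.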
f7e4054e583526207aad73df7d1a8be4
https://www.forbes.com/sites/jvchamary/2020/07/29/coronavirus-hand-hygiene/
Which Works Best Against Covid-19: Clean Hands Or Face Masks?
Schoolgirl wearing a face mask while washing her hands. Getty
To stop the spread of Coronavirus, the public needs to carry out several physical interventions at the same time. And while the media focuses on the culture war over wearing face masks, we must not forget another intervention that science suggests may be even more important than a mask: clean hands. Hand hygiene is central to stopping Covid-19 from spreading by contact transmission, which occurs via routes such as touching a contaminated surface and then your face. Since around 1850, when microbiologists began developing the modern germ theory of disease and doctors started washing their hands, we've known that practicing proper hygiene helps prevent microbes from transmitting infectious diseases from one person to another. But while there's plenty of research on how good hygiene blocks the spread of respiratory viruses generally, there's relatively little knowledge of how well it works against the SARS-CoV-2 coronavirus specifically. As a consequence, recommendations from authorities like the World Health Organization and Centers for Disease Control are mainly based on extrapolating from other viruses with a similar structure, especially the fatty envelope that surrounds certain viruses. That envelope is studded with the proteins used to break into cells, and the logic goes that if an intervention is effective against another 'enveloped virus' — influenza, say — then the same should apply to novel coronaviruses. There are hundreds of studies on interventions that might interrupt or reduce the spread of respiratory viruses, but their results sometimes contradict each other. And when there's no agreement, scientists will perform a systematic review and collect all the available research in order to analyse the quality of the work, then reach a consensus. That's what was done in the 2010 Cochrane review, led by the Centre for Evidence-Based Medicine at Oxford University. Based on 67 studies, the reviewers found that hand hygiene helps stop the spread of viruses, particularly around young children — probably because kids are less hygienic. The 2010 review wasn't conclusive, however, as it didn't identify enough studies that compared the intervention with a control. Such experiments allow reviewers to perform a 'meta-analysis' that combines data from multiple trials then offer a conclusion. An as-yet unpublished update to the Cochrane review achieved just that, combining 15 trials involving both adults and children. Those trials weren't carried out in a lab, but took place in homes, offices and classrooms — real-world settings where infections are commonly transmitted. According to the new review, hand hygiene led to a 16% drop in the number of participants with an acute respiratory illness (ARI) and a 36% relative reduction in an associated outcome: people being absent from work or school. The reviewers concluded that "the modest evidence for reducing the burden of ARIs, and related absenteeism, justifies reinforcing the standard recommendation for hand hygiene measures to reduce the spread of respiratory viruses."
Although the 2020 review confirms the intervention's efficacy in limiting viral transmission, it's not specific to Coronavirus. A direct link to SARS-CoV-2 is supported by one study from a Covid-19 hospital in Wuhan, China, however: from a statistical analysis of several risk factors associated with transmitting the virus, researchers found that poor hand hygiene was a major factor, raising the relative risk of infection by around 3%. The Chinese study also revealed that the higher Covid-19 risk remained even when healthcare workers wore full personal protective equipment (PPE), which suggests that hand hygiene is more important than wearing a face mask. The 2020 review also didn't find much added benefit to wearing a mask along with good hygiene. Anti-maskers might interpret such findings to mean that masks are worthless, but that would be wrong because the variation in results among studies was too large to draw any strong conclusions. Masks probably do help block viral transmission, but we won't know exactly how effective they are until we have more data. Employing only a single intervention — such as masks or handwashing — allows an infectious disease to spread because not everyone will follow the recommended guidelines and so infected people slip through the 'holes' in that intervention. When multiple interventions are used simultaneously, however, it's like stacking several slices of Swiss cheese: the more slices you add, the less likely it is that the holes will line up and let the disease pass through every intervention. While this 'Swiss cheese model' has traditionally been used in medical error reduction, it's relevant to reducing Covid-19 transmission. Regardless of the relative importance of various interventions, we should employ several strategies to stop the spread of Coronavirus.
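The Swiss cheese idea can be made concrete with a little arithmetic. In the sketch below, only the 16% hand-hygiene figure comes from the review; the mask and distancing percentages are placeholder assumptions, and the layers are treated as independent, which real interventions are not.

```python
# Each 'layer' blocks a fraction of exposures; stacking layers multiplies the
# fraction that slips through all of them (assuming independence).
interventions = {
    "hand hygiene": 0.16,   # figure from the 2020 review quoted above
    "face mask": 0.20,      # placeholder assumption
    "distancing": 0.30,     # placeholder assumption
}

passthrough = 1.0
for name, blocked in interventions.items():
    passthrough *= (1.0 - blocked)
    print(f"after adding {name:<13}: {passthrough:.0%} of exposures still get through")
```

With these made-up numbers, no single layer is decisive, yet together they cut the exposures that slip through roughly in half, which is the point of the model.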
c4edf52c4709e76241da052cad90f178
https://www.forbes.com/sites/jvchamary/2020/10/07/crispr-genome-editing-nobel-prize/?sh=1c5d43612d32
These Scientists Deserved A Nobel Prize, But Didn’t Discover Crispr
Jennifer Doudna and Emmanuelle Charpentier, winners of the 2020 Nobel Prize for Chemistry. dpa/picture alliance via Getty Images
Jennifer Doudna and Emmanuelle Charpentier have won the 2020 Nobel Prize in Chemistry "for the development of a method for genome editing." The method, often simply called 'Crispr', has revolutionized molecular biology. For example, it lets scientists target DNA to make genetically-modified organisms that serve as more realistic animal models for studying diseases like cancer. Crispr also has potential applications in improving human health through gene therapy, to correct mutations that cause inherited conditions like muscular dystrophy. In the press release, the Royal Swedish Academy of Sciences claims that Doudna and Charpentier "discovered the Crispr/Cas9 genetic scissors." Besides being a clumsy phrase, that statement isn't strictly true. No, the pair did not "discover" Crispr. So why do the two female scientists deserve the Nobel Prize? What is Crispr exactly? Crispr itself is technically a sequence of chemical letters in DNA. The acronym 'CRISPR' stands for 'Clustered Regularly Interspaced Short Palindromic Repeats'. In nature, Crispr is half of a defence system that enables bacteria and archaea to remember and adapt to invading viruses. That adaptive immune system will cut up the genetic material a virus injects into a microbial cell, which then pastes bits of material into its own DNA, like a memory or 'Wanted poster' that allows the microbe to rapidly recognize and destroy similar viral invaders.
A natural Crispr-Cas9 system in bacteria. CC BY-SA 4.0 Guido Hegasy
The other half of the system consists of the enzymes that do the cutting and pasting. One of the most important is 'Crispr-associated protein 9' or 'Cas9' — which, together with the Crispr sequences saved in DNA, forms the Crispr-Cas9 system. How does gene editing work? Scientists modified the natural system to create an artificial tool that can harness Crispr-Cas9's power to recognize, cut and edit DNA. In the lab, a Crispr system is used to target and cut a specific sequence of DNA letters then delete or insert genetic material — even an entire gene — into that exact location, which results in precise genome editing. The cutting process is key as it prompts repair machinery to fix the break in DNA. Like photos used in reconstructive surgery, the cell's machinery needs a template to repair what's missing, and a short piece of genetic material carried by the Cas9 protein, a 'guide RNA', tells the molecular scissors exactly where to cut. Why did the two scientists win? Two men can be credited with the discovery of Crispr: Yoshizumi Ishino detected "unusual DNA" inside bacteria in 1987 and in 1993 Francisco Mojica described it as "repetitive". Researchers only realized it was an immune system in the mid-2000s. Structural biologist Jennifer Doudna and microbiologist Emmanuelle Charpentier were instrumental in turning Crispr into a tool. In 2012, biochemist Virginijus Šikšnys had shown that 'crRNA' — an RNA copy of a Crispr DNA sequence — could be customized to direct Cas9 to cut new DNA targets. The guide RNA that directs Cas9 to its target consists of 'crRNA' and another component: 'tracrRNA'.
An artificial Crispr tool for genome editing. CC BY-SA 4.0 Guido Hegasy
After Charpentier uncovered tracrRNA in the bacterium Streptococcus pyogenes (Yes, here you could say "discovered") in 2011, she and Doudna fused crRNA with tracrRNA to create a single guide RNA, or 'sgRNA'. That 2012 breakthrough meant that researchers only need to redesign one component — sgRNA — to create molecular scissors that can be programmed to target a specific DNA sequence, leading to a simple yet powerful gene-editing tool. Who deserved the Prize? Two other men are often mentioned as 'inventors' of Crispr, partly due to an ongoing patent battle over the technology. In 2013, bioengineer Feng Zhang and geneticist George Church independently showed that the Crispr-Cas9 system could be used to edit DNA in mammalian cells, arguably the most challenging technical hurdle. Like many recent discoveries, however, the history of Crispr research is biased toward senior scientists and (as Church pointed out) it ignores the contribution of junior researchers who conceive and carry out experiments, the unsung heroes. Almost every Nobel Prize comes with controversy. This year's award marks the first time two women have shared a prize without a male recipient, but you could argue that Šikšnys deserved to share the glory. Relative to past awards, however, the 2020 Chemistry prize is one of the least controversial. Making it easy to design an RNA guide so Crispr can target specific DNA means that, compared to rival genome editors, unintended 'off-target' effects are relatively rare. For their role in developing programmable molecular scissors, and because prizes are only awarded to individuals, Doudna and Charpentier are worthy winners.
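To give a flavour of what 'programming' the molecular scissors involves, here is a toy sketch that scans a made-up DNA sequence for 20-letter targets sitting next to the 'NGG' signal (the PAM) that Cas9 from Streptococcus pyogenes requires. Real guide design involves much more, including checking the opposite strand, genome-wide off-target matches and other sequence features.

```python
import re

def candidate_guides(dna: str) -> list[tuple[int, str, str]]:
    """Find 20-letter protospacers that sit immediately 5' of an NGG PAM."""
    dna = dna.upper()
    hits = []
    # Zero-width lookahead so overlapping candidates are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
        hits.append((m.start(), m.group(1), m.group(2)))   # position, protospacer, PAM
    return hits

toy_sequence = "ATGCGTACCGTTAGCTAGCTAGGCTTACGGATCGATCGTAGCTAGGCTAACGTTAGG"  # made up
for pos, protospacer, pam in candidate_guides(toy_sequence):
    print(f"pos {pos:>3}: guide {protospacer}  PAM {pam}")
```

Each printed protospacer is a sequence an sgRNA could be designed to match, which is the 'one component to redesign' the article describes.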
e3deb29b2b9b2abf1a7d93c91e247493
https://www.forbes.com/sites/jvchamary/2020/11/30/coronavirus-mrna-vaccines/
RNA Vaccines Are Effective Against Covid. What’s The Major Flaw?
COVID-19 vaccine vial and syringe. Getty
Among 200-odd competitors in the race to create an effective vaccine against the SARS-CoV-2 coronavirus, mRNA vaccines have emerged as the surprise frontrunners. Two candidates, from tiny American firm Moderna and pharmaceutical giant Pfizer (teamed up with German partner BioNTech), report around 95% effectiveness in protecting people from catching Covid-19. The active agent in both candidate vaccines is 'messenger RNA' or mRNA. That raises the question: What seems to make RNA so good at promoting immunity? What makes mRNA vaccines different? The impressive figures from mRNA vaccines were surprising because initial media coverage suggested that a more high-profile candidate like the so-called 'Oxford vaccine' ('ChAdOx1 nCoV-19' or officially 'AZD1222') — developed by AstraZeneca and Oxford University — would be first to cross the finish line. Instead it was Pfizer and Moderna who announced that their drugs had almost completed Phase III clinical trials, which test safety and efficacy in thousands of participants. Vaccine candidates are grouped into six main types based on the technology (or 'platform') used to expose your immune system to viral molecules, enabling it to recognize and target the virus if and when your body's defences encounter it. Five of those six technologies are focused on exposing the body to proteins — either presented on the surface of a virus or 'vector' (a delivery vehicle), or as protein subunits. The viral proteins serve as 'antigens' — molecules that prompt your immune system to generate antibodies. How do vaccines deliver antigens? After exposure to an antigen, the adaptive immune system learns to recognize — and then target — that specific antigen. For SARS-CoV-2, a vaccine's target is the spike protein that enables the virus to invade a cell. Here's what makes an mRNA vaccine different. Rather than exposing your defences to a protein directly, the antigen is delivered indirectly via a very roundabout route: the mRNA molecule carries the genetic instructions for producing a spike protein and, once inside a cell, that spike gene is read by cellular machinery to make spike proteins. It's a weird way to expose an antigen, but it works. Essentially, the body receives a target antigen — such as the coronavirus spike protein — despite the fact that an mRNA vaccine doesn't itself contain the antigen. How does your body detect antigens? Most cells continuously collect and present the molecules within themselves on their outer membrane. They do that to prove to the immune system's sentinels — such as T cells — that they aren't infected by a virus, as the proteins that a virus produces while replicating should appear on an infected cell's membrane. Sentinels use surface receptors to check the proteins on a potentially-infected cell. That 'antigen-presenting cell' (APC) isn't invaded by a real virus if you receive an mRNA vaccine — RNA molecules are transferred — so it's technically not 'infected' but 'transfected' with a gene that encodes the spike antigen.
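As a toy illustration of the idea that cellular machinery reads the mRNA's instructions to build a protein, the sketch below translates a made-up message three letters at a time using a deliberately truncated codon table; it is not the actual spike sequence.

```python
# Minimal, truncated codon table for illustration only.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UCU": "Ser", "UGG": "Trp",
    "GGU": "Gly", "GCA": "Ala", "AAA": "Lys", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string one codon (three letters) at a time, as a ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

toy_mrna = "AUGUUUGGUGCAUGGAAAUAA"   # made-up message: start codon ... stop codon
print("-".join(translate(toy_mrna)))  # Met-Phe-Gly-Ala-Trp-Lys
```

The same reading process, scaled up to a message thousands of letters long, is how a transfected cell ends up displaying spike protein without ever meeting the virus.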
The various cells interact so that the adaptive immune system can recognize the antigen — encoded by the spike gene on the mRNA — presented by a transfected cell (labelled 'mRNA-transfected APC' in the following figure from a recent review in the journal Current Opinion in Immunology). That leads to immune responses once the antigen is seen, including the release of matching antibodies by B cells.
Representation of unknown factors in mRNA vaccine design. Elsevier / Pardi et al (2020) Current Opinion in Immunology 65: 14-20
Why are RNA vaccines effective? Why are mRNA-based vaccines so effective? The answer to that question requires some speculation. One possibility is that having a cell read the spike gene to make a protein — instead of delivering the fully-formed spike protein — more closely mimics what happens when Coronavirus replicates during a natural infection. Pfizer has developed two drug candidates, BNT162b1 and BNT162b2: BNT162b1 only includes instructions for producing the spike's tip, while BNT162b2 — the one with 95% efficacy — produces a full-length protein. That might suggest that the immune system responds better when exposed to the entire spike protein. Another possible explanation is that extra modifications to the mRNA molecules — affecting how the message is read, like adding notes to the instructions — influence effectiveness. Indeed, early RNA vaccines didn't deliver the molecule efficiently and caused excessive inflammation. But thanks to recent advances in mRNA vaccine technology, particularly in engineering and manufacturing, they're now effective against several types of cancer and infectious pathogens. Is RNA better than DNA? In principle, the spike gene could be made of DNA or RNA. So why do the mRNA vaccines developed by Pfizer and Moderna use one nucleic acid over the other? DNA consists of two complementary strands that form the famous double helix, while RNA is a mostly single-stranded molecule. Roughly speaking then, DNA is twice as thick as RNA, which makes it much harder to squeeze DNA through the small pores in a cell's membrane. As a consequence, vaccination with DNA requires a special tool that applies an electric field to widen pores (an electroporator), which is obviously less practical than using common equipment like a syringe. So yes, RNA is better than DNA — at least when administering a vaccine. But RNA molecules from an injected vaccine don't have an easy path through the body either, and must overcome several obstacles to reach a cell. First, mRNA is one long — exposed — strand, which makes it vulnerable to being chopped up by enzymes in the blood. The solution is to enclose the RNA molecule in protective packaging: a fatty bubble called a 'lipid nanoparticle' or LNP. The bubble not only protects its contents from degradation, the lipid also helps the nanoparticle fuse with a cell's fatty outer membrane — like a small oil droplet merging with a larger drop — and deposit the RNA inside the cell. An 'mRNA-LNP' was merely a proof-of-concept when it was demonstrated in 2015, but the technology is now the basis for both Pfizer and Moderna's vaccines. What's the major flaw? The biggest challenge for mRNA vaccines is that RNA molecules are inherently unstable. Whereas the DNA double helix is a stable structure, an RNA molecule will soon fall apart at ambient temperatures — it has low thermal stability.
Conventional vaccines are stored in refrigerators at between +2 and +8°C (36 to 46°F) but the two mRNA drugs need to be frozen: Moderna's mRNA-1273 must be kept in a regular freezer at -20°C for six months (or a refrigerator at around 5°C for 30 days) while Pfizer's BNT162b2 requires an ultra-low temperature (ULT) -70°C freezer. If a vaccine isn't kept cold, it won't be effective — or it might not work at all. So it's no surprise that researchers are already developing candidates that can be stored at room temperature (one liquid formulation lasts over a week). If successful, thermostable mRNA vaccines would overcome the molecule's major flaw.
14b6e7bb16a989fc5118c27fa2c418a3
https://www.forbes.com/sites/jvchamary/2020/11/30/crispr-hiv-monkey-dna/
Scientists Used Crispr To Edit HIV-Like Viruses In Monkey DNA
A macaque monkey. AFP via Getty Images
Scientists have used Crispr gene-editing to remove an HIV-like virus from monkey DNA, a major step towards a cure for HIV infection in humans. In a study led by neurovirologist Kamel Khalili of Temple University in Philadelphia, researchers constructed a modified adeno-associated virus containing a Crispr-Cas9 gene-editing system. That 'construct' (called 'AAV9-CRISPR-Cas9') was then injected into rhesus macaque monkeys to deliver the Crispr system into cells. The monkey cells were infected with SIV (Simian Immunodeficiency Virus), a close relative of HIV (Human Immunodeficiency Virus). Both are retroviruses — viral parasites that paste their genetic material into a host's DNA. SIV infects macaques and other non-human primates in the same way that HIV infects people, making it a good model for studying retroviral infection — and testing how to remove those viruses from the human genome. The gene-editing construct was designed to target specific sites where the retrovirus was integrated into the macaque genome. It was able to reach tissues where viruses like SIV and HIV can hide for years without being detected, known as reservoirs, such as bone marrow, lymph nodes, T cells of the immune system and the brain. According to the study, the construct was precise and has a low risk of cutting the wrong places in DNA ('off-target' sites). The research has obvious implications for preventing or treating AIDS (Acquired Immunodeficiency Syndrome) in humans by curing a patient of HIV infection.
b74faca325be6ba83dbf14fbc9fe5921
https://www.forbes.com/sites/jvchamary/2020/12/22/new-species-tree-hyrax/
This New Species Lives In Trees And Is Related To Elephants
Tree hyrax from Taita Hills, Kenya. Hanna Rosti
Biologists may have discovered a new species of nocturnal mammal in the tropical forests of Taita Hills in Kenya. Although the mammal, known as a tree hyrax, resembles a rabbit with short ears, its closest living relatives are actually elephants. The biologists managed to capture one surprised-looking creature in a night-time photo. In a study published in the journal Diversity, researchers at the University of Helsinki, Finland, analyzed the calls or 'vocalisations' of three little-known tree-dwelling (arboreal) mammals: the tree hyrax — a species of Dendrohyrax — and two galagos, small lemur-like primates often called 'bush babies'. While the calls from the galago species — Paragalago and Otolemur — are relatively uniform, those from the tree hyrax are more diverse. And when those vocalisations were compared to calls from related animals, it suggested this hyrax is a new species. Hyraxes can produce loud screams that reach over 100 decibels, but the acoustic analysis revealed that the new hyrax also produces a characteristic call that's distinct from vocalisations made by other Dendrohyrax species — songs that the scientists describe as a 'strangled thwack'.
Calls made by three arboreal mammals. CC BY 4.0: Rosti et al (2020) Diversity 12: 473
The audio shows that each song consists of several syllables, which are combined and repeated in different ways to create a call that can last for more than 12 minutes. According to Hanna Rosti, a PhD student who spent three months recording the vocalisations, "The singing animals are probably males attempting to attract females that are willing to mate."
2cfce70109a3bf8603ab866966474550
https://www.forbes.com/sites/jvchamary/2020/12/23/bionic-touch/
People With Bionic Hands Can’t Correct Their Sense Of Touch
Prosthetic arm with a bionic hand which detects tactile stimulation and sends that sensory signal to the brain. Johan Bodell / Chalmers University of Technology
Neuroscientists have found that people with bionic hands can't seem to retrain the brain so that tactile sensations match locations on their robotic limbs. When a person touches something with the artificial thumb, for instance, they might think that sensory input came from another part of their prosthetic hand, which suggests that neural circuits are less flexible than scientists thought. The finding was made possible thanks to recent advances in prosthetic limbs. In early 2020, a team led by Max Ortiz-Catalan at Chalmers University of Technology in Gothenburg, Sweden, implanted bionic limbs in four people whose arms had been amputated above the elbow. Each prosthesis is anchored to the humerus bone and has an artificial skeleton with muscles controlled by a user's neural activity. That 'neuromusculoskeletal' prosthesis also includes sensors in its bionic hand that send tactile feedback to nerves in a person's upper arm — signals that are relayed to the brain to provide a sense of touch. Surgeons can connect the hand's sensors to a person's nerves via electrodes — but not necessarily at the same point in a circuit that was used by the original arm (because nerves are organised in a relatively arbitrary way), which means an artificial thumb might become wired to circuitry that expects to receive signals from an index finger, for example. Neural circuits aren't always hard-wired, however: there's some flexibility or 'plasticity' in the connections at synapses between nerve cells (neurons), which is what led many scientists to assume that the brain would be able to rewire synaptic connections. For a prosthesis, that would mean sensory signals from a bionic thumb could be relayed via nerves originally used by an index finger yet still be correctly interpreted as sensations from a thumb, rather than the finger.
People cannot remap the perceived location of a sensation to the stimulated area. Ortiz-Catalan et al (2020) Cell Reports 33: 108539
But according to the latest research by Ortiz-Catalan and colleagues, the brain can't rewire circuits to match the correct tactile sensation from a bionic hand. The new study involved three people with amputations who had used a robotic hand to manipulate objects for more than 12 hours at home every day, for over a year. And yet despite their extensive practice, the study's participants never managed to remap their subjective sensations from the perceived location to the stimulated area — from where they thought they felt something to where it actually touched. So it appears that people can't simply fix their sense of touch, which implies that developing prosthetic limbs that feel like real ones will be more challenging than researchers once believed.
37d54fe7f73aef25f0e7596c7a7bb04c
https://www.forbes.com/sites/jvchamary/2020/12/23/gut-bacteria-optogenetics-lifespan/?sh=797b69d21e4f
Could Controlling Bacteria With Light Extend Your Lifespan?
Illustration of Enterobacteriaceae, a family of bacteria that includes the gut-dwelling species Escherichia coli. Getty
Scientists suggest that light could be used to manipulate the metabolism of gut bacteria to prolong our lifespans. The approach would involve a technique called 'optogenetics', where cells are engineered to have genes that respond to light, meaning you can then control an organism's genetic activity — even its behavior — through light. Optogenetics allows you to switch certain genes on or off at will, so some scientists believe it could be used to activate metabolic processes within gut bacteria that would benefit the health of their human hosts, which might help extend your lifespan. That potential application has now been demonstrated in a proof-of-concept study led by synthetic biologist Jeffrey Tabor of Rice University and ageing researcher Meng Wang at Baylor College of Medicine in Houston, Texas. In a paper titled 'Optogenetic control of gut bacterial metabolism to promote longevity', the pair found that they could use different colors of light to turn on specific genes in the bacterium E. coli inside the intestine of a nematode worm, Caenorhabditis elegans. The genes are involved in producing a metabolite called 'colanic acid'. In an earlier study, Wang showed that worms would live up to 45% longer if their gut contained bacteria that were genetically engineered to make extra colanic acid. Colanic acid promotes longevity by helping mitochondria, the energy-generating 'power stations' inside cells. Mitochondria become less efficient with age, but colanic acid seems to enable the generators to function more efficiently again.
Light-responsive bacteria can be seen inside a worm's gut thanks to red and green fluorescent proteins that glow when treated with green-colored light. Jeff Tabor / Rice University
In the latest research, Tabor's team created E. coli that would only produce colanic acid when exposed to green light. The success of using optogenetics was highlighted by linking colanic-acid genes to those for two fluorescent proteins. Under a microscope, the bacteria would glow from green fluorescent protein (GFP) or a red fluorescent protein (Cherry) when treated with light, which showed that the bacterial cells were indeed producing colanic acid. When treated with green light, the gut bacteria produced more colanic acid and their worm hosts lived longer (an increase of up to 3 days, around 30%). The extent of such life extension depended on the strength of the dose. As Wang says, "The stronger the light, the longer the lifespan." The community of microorganisms that colonize a human body — the 'microbiome' — affects health, and studies have shown that bacteria in our gut microbiomes influence our susceptibility to a wide range of conditions — everything from obesity and diabetes to cancer and cardiovascular disease. As your gut bacteria are effectively part of your inner metabolism, controlling them through optogenetics could reduce your risk of certain conditions.
While it can be hard to target a particular part of the body — such as your gut — with approaches like drugs, a beam of light could switch on life-enhancing genes in modified microbes at a specific time and place within your digestive system. "The goal, the thing you really want, is gut bacteria you can eat that will improve health or treat disease," says Tabor. "Light is really the only signal that has enough precision to turn on bacterial genes in the small versus the large intestine, for example, or during the day but not at night."
e912b24a7f69507db0b3af90ff92c2c0
https://www.forbes.com/sites/jvchamary/2020/12/24/chromosomes-cancer-drug-resistance/
Shattered Chromosomes Help Create Drug-Resistant Cancer Cells
Shattered DNA double helix (for illustrative purposes only). Getty
Researchers have found that cancer cells contain shattered chromosomes that can allow a tumor to become more aggressive and help it resist chemotherapy drugs. Cancers evolve over several steps, and one of the earliest occurs when mutations create a tumor — a group of abnormal cells. A mutation can involve anything from changing a single letter in DNA to rearranging an entire chromosome. The most extreme rearrangement is to shatter chromosomes, a phenomenon known as 'chromothripsis' — from the Greek for the 'shattering' of chromosomes (literally 'colored bodies'). Breaking DNA into (sometimes hundreds of) pieces separates genes, allowing a tumor to gain novel combinations that increase the activity of key genes whose actions may decrease the effectiveness of anti-cancer drugs. Shattering chromosomes leaves behind rings of genetic material called 'circular extrachromosomal DNA', or ecDNA. According to a 2019 study by researchers at the University of California at San Diego, La Jolla, up to half of all cells in many types of cancer have rings that help tumors grow, and their latest research now shows that ecDNA also enables them to develop resistance to therapeutic drugs.
Chromosomes (blue arrows) and circular extrachromosomal DNA (orange) inside a cancer cell. Paul Mischel / UC San Diego
The new study involved reading DNA sequences in tumor cells during a cancer's evolution. That genome sequencing showed that chromosomes were repeatedly shattered while the cells divided, which allowed them to acquire various genetic combinations from their circular extrachromosomal DNA — including genes for resistance to methotrexate, a drug widely used in chemotherapy. A tumor can amplify the genetic activity for resisting anti-cancer drugs because the genes in ecDNA rings are repeatedly copied while outside of a chromosome. Worse, as chemotherapy and radiotherapy can damage chromosomes, cancer cells can then reintegrate that circular extrachromosomal DNA back into their genomes. Shattered chromosomes and the amplification of drug-resistance genes reveal yet another reason why cancers are so effective at evolving to evade treatment.
150716f176f87d67752a1114070879ec
https://www.forbes.com/sites/jvchamary/2021/01/13/tree-snake-lasso-locomotion/
This Snake Can Climb Trees Using A Strange New Technique
Brown treesnake on a pole. Bruce Jayne
Scientists have discovered a new technique that some snakes use to climb trees. The previously unknown mode of locomotion was observed in the brown treesnake, which forms a loop around a trunk with its body, then wiggles its way up the tree. Snakes have four main modes of locomotion. The most well-known are 'lateral undulation' — which creates the signature serpentine S shape — and the superficially similar 'sidewinding' motion used by rattlesnakes. There's also a 'rectilinear' wave of muscle contraction along the body (driving an earthworm-like movement) and 'concertina' locomotion. Biologists previously thought that snakes could only climb smooth, vertical cylinders such as tree trunks using concertina locomotion, where a snake's body is stretched out and grips the cylinder in two or more places, sticking to the surface through friction as the tail end is pulled up toward the head.
Concertina locomotion. Savidge et al (2021) Current Biology 31: R1-R8
But a study has now revealed that snakes can also climb using 'lasso' locomotion. After analysing videos of the brown treesnake (Boiga irregularis), researchers at Colorado State University and the University of Cincinnati observed that the animal adopts a posture that produces a large loop and shifts small bends along its lasso-like body to move upward — a bit like wiggling a wedding ring off your finger. The behavior enables the treesnake to climb wider cylinders: whereas concertina locomotion uses at least two gripping regions, each as long as the circumference of a tree trunk, the single gripping region in a lasso only needs to be a little longer than the snake's body length. Despite its benefits, lasso locomotion is slow and requires a lot of energy. In the new study, a snake would sometimes slip back down and often paused to catch its breath. It could only climb an average of four centimetres per minute.
Lasso locomotion. Savidge et al (2021) Current Biology 31: R1-R8
The study was the byproduct of a conservation project that aims to protect native birds on Guam, including the Micronesian starling — one of only two forest species that survived after bird populations were decimated when the brown treesnake was accidentally introduced to the Pacific island in the late 1940s or early 1950s. Lasso locomotion would explain why the invasive species was so destructive to the island's ecosystem, as its ability to climb trees might have allowed it to access eggs in nests — to exploit resources that were once out of reach from predators. (The snakes also climb electrical poles to find food, causing short circuits and power cuts.) One upside of the discovery is that it could help conservationists protect the native species. A metre-long (three-foot) metal cylinder around a pole or trunk is normally big enough to prevent pests from climbing a tree, but the study showed that such a cylindrical 'baffle' isn't much of an obstacle for a brown treesnake. Now that researchers know that lasso locomotion occurs, they can design better baffles — ones that will actually stop the invasive species from reaching endangered birds.
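For a sense of scale, the reported climbing speed implies the following rough estimate; it is a simple calculation from the figures above, not a measurement from the study.

```python
# At roughly 4 cm per minute, how long would a brown treesnake need to clear
# a metre-long baffle using lasso locomotion?
climb_rate_cm_per_min = 4
baffle_length_cm = 100
print(f"~{baffle_length_cm / climb_rate_cm_per_min:.0f} minutes to clear the baffle")
```

Around 25 minutes of strenuous climbing, which helps explain why the technique is costly for the snake even though it defeats current baffle designs.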
8683d86ed0e53888a63df1f778e12f38
https://www.forbes.com/sites/jvchamary/2021/01/14/sleep-brain-evolution/
Hail Hydra! Even Brainless Animals Need Sleep, Say Biologists
Hydra. CC BY-NC-ND 2.0 Jasper Nance / https://flic.kr/p/2JShUR
Sleep is one of biology's biggest mysteries. Every species with a nervous system has some form of resting period, and so one theory for why animals sleep is that it helps maintain the brain — allowing an organism to reinforce or remove neural connections made in learning and memory. But not all animals have a central nervous system with a distinct brain, and scientists have now found that even brainless animals have a sleep-like state. The finding comes from a study by researchers at Kyushu University in Fukuoka, Japan, and Ulsan National Institute of Science and Technology in South Korea, where biologists studied Hydra vulgaris — a tiny jellyfish-like creature (1-3cm long) with a network of nerves but no centralized structure (brain). The study showed that unlike many animals, whose body clock revolves around a roughly 24-hour-long circadian rhythm, Hydra follow a 4-hour sleep-wake cycle. As sleep is typically monitored via brain waves but Hydra are brainless, the researchers used videos to track whether the animal was in a sleep-like state based on the amount of movement. They also measured genetic activity after using temperature and vibration to create sleep-deprived Hydra, which revealed that sleep is controlled by 212 genes, including a gene that produces 'PRKG1' — a key protein that regulates sleep in everything from flies and nematode worms to mice and other mammals. In the study, researchers looked at how the tiny aquatic creatures responded when given chemicals that affect sleep in more complex animals such as humans. Some molecules had a similar effect on Hydra — PRKG1 and the sleep hormone 'melatonin' encouraged the creature to sleep longer and more frequently, for example — while another chemical had the opposite effect: whereas 'dopamine' causes arousal in many animals, it actually prompted Hydra to feel sleepy. The new study helps answer another big question in the evolution of animals: Which came first, sleep or the brain? Because the common ancestor of all animals probably resembled something like a 'primitive' hydra, the above findings suggest that the origin of sleep predates the brain, meaning sleep evolved before nervous systems became more sophisticated.
4b29f0caed2e875110ac979a2c53e7e7
https://www.forbes.com/sites/jvchamary/2021/12/30/science-parody-music-videos-songs/
The 10 Best Science Parody Music Videos
'Lab Rules' is AsapSCIENCE's science parody of Dua Lipa's music video 'New Rules'. AsapSCIENCE / YouTube
To cheer you up after a depressing year, I've compiled a list of the ten greatest scientific music videos from the past 10 years (Yes, the decade ended in 2019, but 2020 didn't really count...) While the 2000s gave us brilliant songs like the 'Large Hadron Rap' and 'GTCA', the 2010s produced even better parody videos. I've picked the following parodies based mainly on two criteria — lyrical genius and stupid dancing — so prepare yourself for some geeky yet goofy videos that are both clever and cringeworthy. Bad Project (2011) Created by the Hui Zheng Lab at Baylor College of Medicine, this parody of 'Bad Romance' by Lady Gaga set the standard for videos with low production values yet high entertainment value. It inspired dozens of clones and begins with a corner caption that pays homage to when MTV stood for 'Music Television'. Its lyrics perfectly capture the pain of being a PhD student in a molecular biology lab. NASA Johnson Style (2012) En route to becoming the first YouTube video to reach one billion views, PSY's 'Gangnam Style' became a source of countless parodies. Of all the science-based ones, the best came from NASA's Johnson Space Center, which has itself rocketed to over 7 million views. Besides the cool computer graphics and launch footage, what makes this video great is its hilariously bad acting and terrible in-jokes. Bohemian Gravity (2013) Tim Blais, a master of physics and music, does an incredible job of explaining scientific theories by singing a cappella. Although visuals from his songs are usually designed to illustrate those ideas, this version of 'Bohemian Rhapsody' does follow the format of Queen's original music video, and the tune's tempo works really well for covering the complex concepts found in string theory. I Don't Know (2014) Among dozens of YouTube videos about medical students, this parody of 'Let It Go' from Frozen comes top of the class. Although it's more about doctors than science, it gets a pass because its chorus is the starting point for scientific research. Made by the University of Chicago's School of Medicine, it was written and sung by Beanie Meadow, whose impressive vocals sometimes rival Idina Menzel. Mitosis Bling (2015) Tom McFadden — formerly known as 'the Rhymebosome' — is a biology teacher and YouTuber with a real way with words. He runs 'Science Rap Academy' for kids (see the next entry on this list), which produces videos that deserve millions of views. His polished parodies closely mirror the style of the source material — as reflected in this short-but-sweet version of Drake's 'Hotline Bling'. My Shot — Vaccine Version (2017) Thanks to the Covid-19 coronavirus vaccines, this parody of 'My Shot' from the musical Hamilton by Lin-Manuel Miranda has become topical, as it mentions herd immunity and explains why vaccination is vital for those with immunodeficiency. It's one of many amazing videos written, performed and edited by the talented pupils on the 'Science Rap Academy' course at the Nueva School in California.
Postdoc Me Now (2018) While Neuro Transmissions has produced several videos set in molecular biology labs, this parody of 'Don't Stop Me Now' by Queen (a song without a proper music video) is the spiritual successor to the first video on this list. The star of 'Bad Project' was someone at the start of her PhD and 'Postdoc Me Now' focuses on researchers who are about to submit their thesis and are ready to move on. Pi Pi Pi (2018) Everyday Science is another channel whose videos should get more views, starting with this math version of 'Bye Bye Bye' by *NSYNC. YouTube has numerous songs to help you memorize digits of pi but this boy-band parody is pure entertainment — it adds geometry and history to dance choreography, plus a story sequence before the song actually begins, which gives you that feeling of a genuine music video. Lab Rules (2018) AsapSCIENCE has made over 300 educational videos but this "low cost" version of Dua Lipa's 'New Rules' suggests they should be making more parodies. Copying the moves from the original music video is impressive and their parody adds splashes of color to the stereotype of scientists in white coats, along with a message about laboratory safety protocols and handling potentially dangerous chemicals. The Elements (2019) I'm cheating with this last video because it's based on a classic science song from 1959: 'The Elements' by Tom Lehrer, which is itself a parody of 'I Am the Very Model of a Modern Major-General' by Gilbert and Sullivan from 1879 musical The Pirates of Penzance. The lyrics were updated by Helen Arney and the video, produced by Chemistry World magazine, features chemists from around the world. So there you go, the greatest scientific parodies of music videos from the past decade. If you disagree with my choices, direct your nerd rage at me on Twitter (@jvchamary) with links to what I should have included on this list.
d4f9ade59667e4ec4aab3133fc7221b3
https://www.forbes.com/sites/jvdelong/2015/03/16/the-structure-of-climate-change-revolutions-its-the-sun/
The Structure Of Climate Change Revolutions: It's The Sun
The Structure Of Climate Change Revolutions: It's The Sun From one perspective, EPA’s proposed Clean Power Plan (CPP) is a triumph of the Church of the Environment, a bold effort to remake the electric grid in response to the assumed imperative of reducing carbon dioxide emissions. From another angle, CPP looks different. It is the last gasp of a dying scientific paradigm, one fated to join the museum of oddities of science, such as phlogiston, the idea that bleeding a patient is the road to health, and the rejection of plate tectonics theory. Half a century ago, T. S. Kuhn wrote his famous The Structure of Scientific Revolutions. He posited that science does not proceed in orderly fashion, with discoveries building on each other in steady progression. Instead, it proceeds by fits and starts. In a given field, an over-arching explanation – a paradigm – will dominate, providing the frame of reference and identifying legitimate areas of inquiry. Kuhn calls this “normal science.” Over time, however, anomalies arise, as observations do not accord with the paradigm, and the theory must be modified in complex and sometimes ad hoc ways. Eventually, the anomalies, inconsistencies, and ad hoc character of the patches become overwhelming, and the field is ripe for a new paradigm that better fits the observations. The classic example is the Copernican Revolution. The theory that the solar system revolves around the earth became burdened with increasing complications, complications that disappeared when the paradigm of heliocentricity came to the fore. Decaying paradigms do not always go gently. Fierce conflict is common, but as physicist Max Planck said, “Science advances one funeral at a time,” and eventually the old guard disappears. Climate change science is on the edge of Kuhn-style revolution. For 30 years, the dominant paradigm has been that carbon dioxide is the major driver of changes in global temperature. Inquiry has been directed at showing why and how this is so, not at the question whether it is so. There is a logic to this CO2-centrism. Temperatures have risen over the past century at the same time that both atmospheric concentrations of CO2 and emissions from human sources have increased. No other obvious explanation presented itself, and the alternative that “the climate just changes” has seemed unsatisfactory. Because the whole system depends on the sun, an alternative paradigm might have focused on solar possibilities, and some scientists have steadily maintained this view. However, solar explanations are dismissed by the International Panel on Climate Change (IPCC) – the instrument and voice of the CO2 Establishment -- on the grounds that the sun’s energy output does not vary enough to account for climate changes over the past century. (IPCC, Summary for Policymakers, p.19.) The traction attained by the CO2 paradigm is helped by the reality that panic over CO2 serves the interests of governments, which use it to gain power over industrial economies, of rich and powerful ideological and green-energy forces, and of many scientists, who have built careers on CO2. The combination of forces ensures that work treating CO2 dominance as a given is easily funded, and work questioning it is not. During the past decade, two things have happened. First, the CO2 paradigm has grown shaky. Anomalies have increased, and the theory can be made to fit real-world observations only by a rickety structure of unproven, and often untestable, assumptions and fudge factors. 
The paradigm is not meeting the basic litmus test of science, which is creation of hypotheses that can be confirmed or refuted by real data. Second, scientists are taking a second and more sophisticated look at the sun, and finding that a heliocentric paradigm better explains the data. Furthermore, the solar paradigm is developing and confirming testable propositions. Take these in order. CO2 The basics of the CO2 paradigm are straightforward. The sun delivers energy to the earth. About 30% of it is reflected back into space, and the rest absorbed. The earth then gives up the stored energy, radiating it outward. Some of this outward radiation is absorbed by Greenhouse Gases (GHGs), primarily CO2 and water vapor, and re-radiated back to earth. The general effect is to keep the earth’s temperature at 14º C instead of the –18º C it would be in the absence of GHGs. CO2 captures only certain wavelengths of energy, so there is a saturation effect as all the energy of a particular wavelength is absorbed. A doubling of the concentration of CO2 in the atmosphere (from, say, 200 parts per million to the present-day 400 ppm) would raise the earth’s temperature by 1º C. A subsequent doubling, from 400 to 800, would raise it only another 1º C. And so on. Other GHGs absorb energy on different wavelengths, so each must be examined separately. A warming of one degree would be beneficial. The earth has been warmer in the past, as in the Medieval Warm Period of 1000 years ago, and we have still not fully emerged from the Little Ice Age that ended in the mid-19th century. Low temperatures make for hard times. Water vapor is a far more important GHG than CO2 and accounts for over 90% of the greenhouse effect, In consequence, to attribute to CO2 an alarming rate of temperature increase requires that a link be made between CO2, cloud cover, and water vapor such that positive feedback exists. That is, temperature will rise more than one degree only if an increase in CO2 also leads to an increase in water vapor, and fosters some kinds of clouds while discouraging others. These links are put together in the form of models. The interactions of the climate are complex, and a model can easily contain up to a million lines of code. The problem for the CO2 paradigm is that the models do not work. They failed to predict the flat temperatures that have prevailed since 1998, and they cannot be backfit to predict temperature patterns that prevailed before 1970. They have some accuracy for the period 1970 to 1998 only because they were fudged to track the temperature record by making whatever assumptions about the water vapor feedback and about the effects of particulate matter in the atmosphere were necessary to track the observed past temperatures. In fact, evidentiary support is lacking for the basic proposition that increases in CO2 trigger an increase in water vapor. Recent data indicates the reverse, that feedback is negative in that a rise in CO2 reduces water vapor, which tempers the heating effect. If this data is confirmed by further experiments, all of the CO2-centric models will crash and burn immediately, because all are based on the opposite assumption. This cursory account only scratches the surface of the problems with the CO2 models; one must look at the oceans, where the theory is also in disarray, and at the record of prehistoric times, when increases in CO2 concentration appear to have lagged behind rises in temperature. Sun A few scientists have reinvigorated an alternative paradigm: the sun. 
The current state of their work is recounted in The Neglected Sun, a best-seller in Germany published in English only in late 2013, by Professor Fritz Vahrenholt, a German scientist, industrialist, and environmentalist of impeccable pedigree, and Sebastian Luning, an expert in geology and paleontology. As noted, it has long seemed logical that earth’s climate might be related to the 11-year sunspot cycle, but the hypothesis lacked explanatory power because the variation in solar energy over the cycle is only 0.1%, not enough to account for climate changes. Recent research indicates that this conclusion is too simplistic. Total irradiance varies little with the sunspot cycle, but ultraviolet irradiance fluctuates by up to 70%, and UV light is converted to heat in the earth’s atmosphere. The sun’s magnetic field also fluctuates with the 11-year cycle, and this affects cosmic rays striking the earth, according to a theory associated with Henrik Svensmark. When the magnetic field is weak, more cosmic rays penetrate, and these seed clouds in the lower atmosphere, which has a cooling effect. “Just a few percent variation in cloud cover results in a change in the earth’s irradiative energy budget equal to the projected amount of the warming the IPCC claims that anthropogenic CO2 causes.” [TNS, Kindle loc.720] Recognition of the possible importance of the 11-year cycle is only the first step. The sun has several other cycles, and the possibilities are tantalizing. There are cycles of 22, 87, 210, 1000, and 2300 years. [loc. 1128] There is a 20-year cycle that occurs when Jupiter and Saturn align and exert a gravitational pull on the sun, and a 60-year cycle when the two planets are closest to the sun. “It can be shown that the 60-year gravitational cycle peaked during 1880-1, 1940-1, and 2000-1, as did global surface temperature.” [loc. 3403] On a grander scale, the earth’s orbit around the sun varies between an ellipse and a circle, and its axis changes over time, creating Milankovitch cycles, measured in tens or hundreds of thousands of years. Their effect is to change the irradiation of the earth with magnitudes of “single to low double-digit percentages.” [loc. 1560] The cycles play a leading role in ice ages. Finally, there is a suggestion of a 140-million-year cycle, based on the fact that the solar system travels across the spiral arms of the Milky Way on this schedule, and the result is an increase in cosmic rays, and thus a cooling effect. The geologic record supports a cycle of this duration. The theories about the effect of the sun are incomplete and often tentative, but they can be checked against the geologic record, and against current climate patterns, and they are finding consistencies. The CO2 Establishment is not giving up its paradigm without a fight. But in a fundamental sense, it has already lost because its claims are untenable, especially the meme that “the science is settled.” No one in the sun camp would deny that CO2 is a GHG or that it has some effect on climate – it is the degree of that effect that is disputed. On the other side, the IPCC’s 1552-page report on the state of climate change physical science dismisses the possibility that solar variations or cosmic rays have an effect on 21st century climate. It mentions Milankovitch cycles only twice, accepting that they have a role in ice ages and millennium-scale changes, but discussion immediately turns back to why CO2 is what really counts. This stance defies logic. 
For one thing, given that the sun is the prime mover of everything related to earth’s climate, how can an effect ever be assumed away? For another, if solar effects produce millennium-scale changes then they must also affect the short-term, given that any long-term is a series of short-terms. So if one accepts that solar changes are connected to the Medieval Warm Period and the Little Ice Age, then how can one argue that the rise in temperature since 1850 – the recovery from the last cold period – is not also connected to solar effects? The argument that the sun is relevant only in the long term looks like a jury-rigged patch to shore up a failing paradigm. As Professor Richard Lindzen told the UK Parliament in 2012: Perhaps we should stop accepting the term, ‘skeptic.’ Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes the case even less plausible as does the evidence from climategate and other instances of overt cheating. Particularly absurd is the claim that the science is settled. As George Will said, when someone claims that a debate is over, “you may be sure of two things: the debate is raging, and he’s losing it.” Bleeding industrial civilization to death in the name of CO2 is scientific quackery, worthy of Steve Martin’s Theodoric of York, Medieval Barber. So let the partisans of the competing paradigms duel. And put the CPP on hold. READING: Fritz Vahrenholt & Sebastian Luning, The Neglected Sun: Why the Sun Precludes Climate Catastrophe (2013). Willie Soon & Sebastian Luning, Solar Forcing of Climate (Chapter 3 in Nongovernmental International Panel on Climate Change (NIPCC), Climate Change Reconsidered II: Physical Science (2013)). Intergovernmental Panel on Climate Change (IPCC), Climate Change 2013: The Physical Science Basis, Chapters 7 & 8. National Research Council, The Effects of Solar Variability on Earth's Climate: A Workshop Report (2012). Judith Curry, Climate Dialogue: influence of the sun on climate, Climate, Etc. (blog), Oct. 27, 2014 (see also posts linked at end).
95a66c0bb5acc862bc94db669cbc907c
https://www.forbes.com/sites/jvdelong/2015/11/19/syria-who-is-a-refugee/
The Legal Definition Of A Refugee, Which Obama Pays No Attention To
The Legal Definition Of A Refugee, Which Obama Pays No Attention To A puzzlement about the debate over accepting 10,000 Syrian refugees next year and more in the future is the lack of discussion of a fundamental point: Does Obama have the legal authority to order their admission to the U.S. as a humanitarian measure? The answer is “no.” The dictionary definition of a “refugee” is “a person who flees for refuge or safety, especially to a foreign country, as in time of political upheaval, war, etc.” This definition underlies most of the media discussions of the Syrian situation, with its emphasis on the humanitarian crisis, which is indeed horrendous. The definition also underlies the President’s uncontested authority to provide humanitarian assistance to refugees outside of the United States if he believes that such assistance will “contribute to the foreign policy interests of the United States.” [22 U.S.C. sec 2601(b)(2)] The U.S. has already spent over $4 billion on Syrian relief under this authority. However, the meaning of “refugee” in U.S. immigration law is narrower than this dictionary definition. In immigration law, for purposes of admitting someone to the U.S., the crucial factor is whether a person has a legitimate fear of persecution, not whether a humanitarian crisis exists. By statute [8 U.S.C. Sec.1101(42)], a “refugee” is: “any person who is outside any country of such person’s nationality . . . and who is unable or unwilling to return to . . . that country because of persecution or a well-founded fear of persecution on account of race, religion, nationality, membership in a particular social group, or political opinion...” The statute then stretches this definition to include a person who is within his own country but who has the requisite fear of persecution. But the status of "refugee" can be granted only under “special circumstances" specified by the president. And before determining that special circumstances exist, the president must "consult," in the form of in-person discussions between cabinet-rank officials and members of the House and Senate Judiciary committees concerning all aspects of the situation. No agreement is necessary; just consultation [8 U.S.C. Sec. 1157(e)]. Section 1157 also provides for caps on the number of refugees admitted each year, and for presidential estimates of the likely numbers at the beginning of each year. Nothing in the stretched definition changes the basic requirement that a refugee be someone who has a well-founded fear of persecution. The current controversy started on September 10, when the administration announced via press briefing a plan to admit 10,000 Syrian refugees next year. The next step was a formal Presidential Determination on refugee levels for FY2016, which projected admission of 85,000 total. The word “Syria” does not appear in the Determination, and the goal of resettling 10,000 Syrians appears only in news reports and briefings, such as a WhiteHouse.gov memo by DHS on How We're Welcoming Syrian Refugees While Ensuring Our Safety. Neither the press briefing nor the Presidential Determination nor the DHS memo mentions the statutory criterion of fear of persecution, and it is unclear why 10,000 Syrians will meet the standard. The State Department’s Report to Congress reviewing the section 1157(e) factors and explaining the reasoning behind the estimates does not explain why Syrian refugees meet the criterion. Rather clearly, the international community wants the U.S. 
to take in more Syrians and the President is determined to respond favorably. Considerations of legality are not terribly relevant, so the administration did not really consider whether its desired action fit within the terms of the law. No sensible person can believe that Obama actually cares about the “persecution” limitation, especially in the light of his furious reaction to suggestions that Mid-East Christians, who are in fact persecuted, should be favored. The next issue is whether this lack of attention to legality has any practical impact, or is just another on a long list of examples of Obama’s disdain for the Rule of Law. Realistically, it will probably just be another on the list. Once the lawyers get hold of the program, they will force the administration to say that of course it will comply with the statutory criterion requiring fear of persecution. Then, in practice, the vetters will be told to ask each candidate whether he/she is in fear of persecution, check off “yes,” and move on, content that the formalities have been observed. There could be one major impact, however: on the position of the recalcitrant state governors, 31 of whom are opposing the program. Administration allies, such as New York’s Governor Cuomo, scoff, pointing out that the federal government controls immigration and that governors have no say. But if the feds try to push immigrants into the objecting states, the legal situation gets complicated, and the governors might succeed with an argument that they can resist cooperating with a program that exceeds presidential power. The Fifth Circuit, in the recent Texas v. United States, upheld a lower court injunction against the administration’s effort to rewrite significant portions of the immigration laws. The court rejected government claims that the executive has broad discretion to interpret the law, concluding that Congress has spoken at length and clearly on immigration matters, and has left the executive with little maneuvering room. The court also used presidential statements to the effect that he was changing the law as evidence that the action exceeded the President’s powers. A similar argument can be made with respect to the Syrian issue. A court of appeals might use the administration’s public statements about the humanitarian basis of its actions as evidence that the law is in fact being ignored, and to side with a state’s contention that the program is illegal, especially if instructions issued to vetters confirm that the persecution requirement is not actually to be applied. The administration might also argue that a state lacks the standing to bring such a case, but this too is pretty much disposed of by Texas v. U.S., which found standing existed in the form of financial harm to Texas. In the end, it seems improbable that the matter would ever reach the stage of a formal legal determination. While the matter is pending in court, the program can roll forward with resettlement limited to consenting states. In the political battles, though, the argument that the president is acting illegally is a powerful one, and one would expect the resisting governors to use it. As Andrew McCarthy said in National Review, "[T]he principal constitutional duty of the chief executive is to execute the laws faithfully. President Obama, by contrast, sees his principal task as imposing his post-American 'progressive' preferences, regardless of what the laws mandate." It is also clear that providing effective help to Syrians is low on the president's priority list. 
According to the State Department Report to Congress, "Inside Syria, more than 12.2 million Syrians require humanitarian assistance, 7.6 million are internally displaced and 5.6 million children are in need." An analysis from the Center for Immigration Studies notes that the annual cost of resettling a refugee in the U.S. is about $13,000, whereas the UN spends $1,057 annually per refugee in the nations neighboring Syria. This is a very bad trade-off, on the scale of net benefits – 12 refugees could be helped in the Mid-East for the cost of bringing one to the U.S. So what is the point, unless it is to signal our own (or our president's) virtue, even at the expense of the supposed beneficiaries?
0c9b194bc9c673ba4abbff12f21f95e4
https://www.forbes.com/sites/jwebb/2015/12/23/what-is-category-management/
What Is Category Management?
What Is Category Management? The concept of category management is familiar to those in retail, who look to manage clusters of items within a shop environment, but it is also a way for companies to buy more effectively and to save significant sums of money in procurement. At its most basic level, category management is about bundling items. Buyers look for items purchased across the company and consolidate disparate agreements into a single contract (and price). A category is essentially any group of similar items which the company wishes to buy under a single deal. The management part is about applying procurement methodologies to ensure the firm maximizes savings. The central driver behind category management is to simplify demand and take a bigger contract to the market. The greater the scale, the lower the unit price. The best way to understand it is through an example. Imagine a company with 40 factories, spread across the world. Each has a general manager responsible for negotiating her own deals for supply. Let’s take paperclips. Each GM has to source stationery which, although not essential to the factory, still consumes sourcing effort. Suppliers know that the GMs are busy and looking to close a quick deal. Vendors can take advantage of this and push for a higher price in return for a fast turn-around. They can repeat this trick 40 times, as GMs across the company do not waste time discussing the price of paperclips. The result is a messy hotchpotch of agreements, each of which is known to only one person in the organization. Imagine the wide range of prices, payment terms, quality, contractual length and delivery schedules. This supplier strategy is called ‘divide and rule’. Suppose now that we employ a buyer to consolidate these disparate deals into a single, global contract. If you want to sell paperclips to the company, you have to go through one person, who arranges a single price and distributes the stationery to the factories. GMs are liberated from a tedious task and the company saves time and money. The savings for the trivial example of office stationery are clear. If we extrapolate this case to more important business activities, such as industrial products or technology, the savings available to organizations are huge. This is category management: the process of clustering and centralizing similar goods into bigger contracts which are easier to administer and lower prices. So how do we manage these categories? Procurement Leaders charts seven steps within category management. Opportunity identification This is a set of internal tasks. It relates to understanding the business and its needs. For this, buyers must ask themselves: How much are we spending? Who are the key stakeholders? Opportunity development This is a more externally minded stage where buyers seek to understand the capabilities of the supplier market and identify means to address internal needs. Finalize strategy Uniting internal need and external capability, buyers must determine which sourcing methods will best position the company in the market. Screen suppliers From the supplier base, we can identify which actors within the market possess the needed capabilities. Buyers can also consider reaching out to companies with whom they don’t have a direct relationship: answers may come from unlikely sources. Conduct auctions and RFPs Buyers must devise the best means for suppliers to express interest in servicing the contract and compose business proposals to meet that need. 
Shape and negotiate proposals Once bids have been received, the company has an opportunity to further shape the offering to add more value. This is a key step where a category manager can contribute to the process, through mobilizing knowledge of the business and identifying further opportunities to add value. Implement and manage suppliers Once suppliers have been brought in and onboarded, it’s up to procurement to ensure that contractual terms are fulfilled and suppliers continue to add value in the relationship. Continued risk management and due diligence will also be required in riskier or business-critical relationships. It’s a simple idea, but it is rarely implemented in full. The main obstacle is internal. Within Procurement Leaders, we often find that buyers – or category managers – struggle to secure acceptance among their colleagues. The maturity of category management in procurement organizations [Source: Procurement Leaders] Simply put, procurement is not trusted outside its own department. And buyers rarely possess the communication skills to persuade their colleagues of the benefit of category management. As such, many organizations fail to successfully apply category management. This can be said of even surprisingly sophisticated companies. Yet, the benefits, for those willing to genuinely change and follow these simple steps, are significant.
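To make the arithmetic behind the paperclip example above concrete, here is a minimal sketch in Python. The site names, volumes and prices are invented purely for illustration (they are not figures from Procurement Leaders); the point is simply to compare fragmented spend across locally negotiated deals with a single consolidated contract price.

# Hypothetical illustration of category consolidation savings.
# All figures below are invented for the example.

local_deals = [
    # (site, annual_units, unit_price_negotiated_locally)
    ("Factory A", 10_000, 0.052),
    ("Factory B", 8_000, 0.049),
    ("Factory C", 12_000, 0.061),
    # ... imagine 40 such entries, one per general manager
]

consolidated_unit_price = 0.038  # single global contract price (hypothetical)

total_units = sum(units for _, units, _ in local_deals)
fragmented_spend = sum(units * price for _, units, price in local_deals)
consolidated_spend = total_units * consolidated_unit_price

print(f"Fragmented spend:   ${fragmented_spend:,.2f}")
print(f"Consolidated spend: ${consolidated_spend:,.2f}")
print(f"Annual saving:      ${fragmented_spend - consolidated_spend:,.2f}")

Scaled up from stationery to industrial products or technology, this is essentially the comparison a category manager runs when building the business case for consolidation.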
9f46bca2e87cd0a9dafe3b78dd915708
https://www.forbes.com/sites/jwebb/2016/01/25/forgot-emotional-intelligence-leaders-should-understand-emotional-labor/
Forget Emotional Intelligence, Leaders Should Understand Emotional Labor
Forget Emotional Intelligence, Leaders Should Understand Emotional Labor An employee holds a poster reading 'we are no robots' outside a center of US online retail giant Amazon in Leipzig, eastern Germany, as its sites in Germany were hit by fresh walkouts in a long-running pay dispute. AFP PHOTO / DPA / PETER ENDIG Is building a company culture brainwashing? How much do we ask of our staff when we ask them to work for us? Amazon's staff in Germany, for instance, have long decried their treatment as 'robots' at the hands of the global retail giant. Understanding our own emotions and how they influence others has been a major focus for management gurus over the last decades. But a new set of thinkers is beginning to look at ‘emotional labor’, that is, work which is not spelled out in a contract but which shapes our feelings at the office. The term was coined by Arlie Hochschild, a sociologist who investigated the emotional lives of customer service workers. Hochschild defines emotional work as “the management of feelings to create bodily and facial displays compliant with social requirements. Emotional labor has an exchange value, since it is paid wages for.” Within supply chain management, the display of anger in a negotiation is often enough to guilt a supplier into backing down. It’s not the depth of feeling that matters here, as even faking it can achieve the desired results. Emotional labor is often seen in a customer-facing role, where staff are expected to present a professional and courteous demeanor regardless of the behavior of the client. Call-center workers often face rude and aggressive callers. I can only imagine that those manning corporate chat-lines to help customers face even worse abuse from angry people, unencumbered by the restraints of politeness that come from direct interpersonal contact. Stress emerges from our desire to fight or flee a tough situation; where an individual cannot act upon these desires, they feel trapped. Recipients of abuse are not only deeply unhappy, but often feel the need to uncork their bottled frustrations on others. A famous example comes from the world of Disney. The constant demands of apparent gaiety and joy sap the esteem of workers at theme parks. Those responsible for loading thrill-seekers onto roller-coasters would occasionally employ the ‘squeeze’. Little children, usually of the louder variety, would have their breath taken away by an excessive tightening of the safety straps, squeezing the air out of any uproarious brat before they have a chance to complain as the carts whizz around the track. Parents unable to control their unruly offspring may endure similar punishment. This is perhaps an extreme example, but a workplace more loaded with emotional expectation may begin to face issues similar to those seen in call centers. Alongside the petty subversion above, employees wage bitter campaigns of bad-mouthing their bosses and establishing a shared community of disengaged individuals, who feel their employers have stripped them of their own emotional identity. It is now arguable that the corporate world has already become a domain in which employees are expected to emote in a predefined manner. In fact, it seems as though emotional content now trumps what was previously considered an exclusively analytical domain. I attended a conference recently, and I was struck by a speaker’s introduction of himself. 
“I love data,” he told the audience, “and I have loved working with data for the past 15 years.” In the past, it seems to me, an expert would emphasize experience, qualification and intellect, but the current generation is only won over by a sincere expression of deep feeling, even in the number-crunching realm of technology. Passion is louder than performance; competency replaced by love. This is often known as ‘company culture’. The New York Times recently reported on Amazon’s working practices that encouraged employees to become highly aggressive. Further, only last month, Amazon employees in Germany went out on strike against the 'excessive pressure' and 'rigid controls' that the company placed upon its staff. The company boasts its ‘Amazon Way’, which encourages healthy competition, but critics argue that this is window-dressing for a surreptitious strategy to push its struggling workforce into accepting increasingly unreasonable workloads. Worse still, this bubble acts as the only acceptable reality by which employees perceive their work and themselves. To challenge this perception is to demonstrate to management a bad attitude and may lead to a speedy exit. Undoubtedly, becoming an emotionally intelligent employer is vital to success and encouraging workers and teams to take responsibility for their moods and channel these to productive use is key. However, those aiming to build cultures, codes or company values need to consider carefully how their instructions relate to contracted expectations of duties, and where they may cross over into an individual’s inner realm. Should individuals feel as though feelings are mandated by senior management, bitterness may emerge and they may take out this frustration upon their colleagues or even customers. Appropriate expression and suppression of emotions are essential to all roles – imagine lawyers who shriek and wail at each of their client’s mishaps. Yet, just as emotions are central to being human, they cannot be wished away or ‘corrected’ even by the most persuasive of managers. Leaders must acknowledge the points at which they are prompting their juniors to feel in a particular manner, and when they are asking them to suppress emotions. In both cases, the employer is engaging in emotional work. It is at this point that most companies fail: They simply do not know what they ask of their staff. Understanding these labors, no matter how small or irrelevant they may seem to managers, is the key to unlocking an emotionally productive workforce. Failure to acknowledge this may result in the negative PR that Amazon is currently suffering, or in the loss of key clients.
990b1ae6ba0f0067b16636b6f58e26c0
https://www.forbes.com/sites/jwebb/2016/10/27/85-of-managers-resort-to-bribery-when-trading-in-developing-economies/
85% Of Managers Resort To Bribery When Trading In Developing Economies
85% Of Managers Resort To Bribery When Trading In Developing Economies Shutterstock Research produced by Henley Business School shows that managers are willing to pay bribes when trading overseas. The study interviewed over 900 business executives during a period of 12 years. Although the research focused upon British managers, it also covered businesspersons from China, Ireland, Germany and Russia. Andrew Kakabadse, professor of governance and leadership at Henley Business School, noted that: ‘These suspect business practices are typically committed by concerned managers who feel that they have no alternative other than to pull out of the country in question.’ That is to say, businesses trading in emerging economies feel as though they face the dilemma of either paying bribes or ceasing trading in that market. Within certain countries, bribery is considered an accepted or even necessary component of doing business. For many companies, this is simply part of ‘local custom’. These attitudes have been challenged by recent legal developments. In 2011, the UK’s Bribery Act came into force, commonly considered one of the most stringent anti-corruption laws in the world. It became an offence under English law for a company to pay a bribe (or fail to prevent a bribe’s payment) anywhere in the world, as long as that firm traded in the UK. Although many anti-corruption campaigners welcomed the new law’s introduction, businesses decried its potentially damaging impacts. One corporate lawyer complained to me that it was an act of ‘cultural imperialism’. The implication here is that the new law trampled over the local practices of other countries and prevented large businesses from competing on a level playing field when trading in emerging economies. Companies from many countries are notoriously malleable in adapting to ‘local custom’. Singapore, for instance, which is commonly regarded as the least corrupt country in Asia, is often subject to fierce criticism for overlooking the questionable practices of its businesses abroad. Siemens, a manufacturer originating from the fair-trading German economy, has faced penalties under the US Foreign Corrupt Practices Act (FCPA) exceeding $800 million. This underlines the view that corruption is not merely a problem of the developing world. Businesses from the West are supporting the local practices of corruption by readily paying bribes. Transparency International, the corruption research organization, regularly commissions a ranking of the countries whose companies are most likely to pay bribes when trading abroad. This study is called the Bribe Payers Index (BPI).
Transparency International Bribe Payers Index
Rank – Country – Score (/10)
1 – Netherlands – 8.8
1 – Switzerland – 8.8
3 – Belgium – 8.7
4 – Germany – 8.6
4 – Japan – 8.6
6 – Australia – 8.5
6 – Canada – 8.5
8 – Singapore – 8.3
8 – UK – 8.3
10 – USA – 8.1
Source: Transparency International, 2011
Interestingly, it is often the firms originating from the least corrupt countries that are most willing to pay bribes abroad. Of the 28 states included in the most recent study, the Netherlands and Switzerland score the worst. By contrast, in Transparency International’s Corruption Perceptions Index – a separate list of 167 countries ranked by the absence of bribery – the Netherlands is ranked 5th and Switzerland 7th. The message being that companies behave well at home, but badly abroad. The USA is positioned at the 10th spot on the BPI and 16th on the CPI. 
US-based companies have also faced penalties under the Foreign Corrupt Practices Act, most recently hedge fund Och-Ziff, which paid $412 million in fines earlier this year. The payment of bribes in emerging economies by firms headquartered in the West helps buttress the system of corruption that prevents poor countries from developing. If attitudes such as those expressed by the executives in the research by Henley Business School continue, the problem of corruption hindering development will persist. A more immediate concern for an individual business is that it potentially faces severe legal sanction under the FCPA or the UK Bribery Act.
0737754a3364beef0515dce27c0bd1bc
https://www.forbes.com/sites/jwebb/2016/11/30/amnesty-international-slams-colgate-nestle-and-unilever-for-palm-oil-supply-chain-abuses/
Amnesty International Slams Colgate, Nestlé and Unilever For Palm Oil Supply Chain Abuses
Amnesty International Slams Colgate, Nestlé and Unilever For Palm Oil Supply Chain Abuses Shutterstock Today Amnesty International has published a damning new report into the practices of major consumer goods multinationals. The human rights NGO unpicks the palm oil supply chain and finds evidence of forced labor, child employment and dangerous working conditions. Although the company primarily under investigation is Wilmar, the world’s largest palm oil producer, it is the brand names that this firm supplies that face the charity’s opprobrium. Colgate, Nestlé and Unilever all come under heavy criticism for allowing conditions to emerge in their supply chains that many would regard as shocking. Amnesty International interviewed 120 workers within Wilmar’s plantations, as well as digging deeper into its suppliers in Indonesia. "Corporate giants like Colgate, Nestlé and Unilever assure consumers that their products use 'sustainable palm oil', but our findings reveal that the palm oil is anything but," noted Meghna Abraham, Senior Investigator at Amnesty International. "Companies are turning a blind eye to exploitation of workers in their supply chain. Despite promising customers that there will be no exploitation in their palm oil supply chains, big brands continue to profit from appalling abuses." Palm oil is a highly versatile product that is estimated to be in half of all consumer products, ranging from toothpaste to shampoo. It is mostly produced in Indonesia, which services over half of global demand. The palm oil sector is rife with corporate social responsibility issues and is linked to deforestation, where its land-intensive farms denude the Indonesian jungle and deprive rare species, such as orangutans, of their habitat. It is also an area of alleged worker exploitation. The Amnesty International report describes a punishing work regime with demanding performance targets. Failure to meet objectives can yield financial deductions. Penalties are levied at the manager’s discretion. Many laborers, the reporters find, feel compelled to work 10-11-hour days, accumulating hours that exceed the legal maximum of 40 hours per week in Indonesia. Despite this grueling schedule, many claim they are paid below the legal minimum wage. The report finds that, such are the pressures under which workers are placed, they enlist their spouses and children to toil unpaid to avoid penalties from the employer. The charity found children as young as eight in employment, many of whom dropped out of school to meet their quota. One worker was quoted as saying: “I get the premi [bonus] from the loose fruit that’s why my kids help me. I wouldn’t be able to meet the target … otherwise. … The foreman sees my children helping me. The foreman says it is good that my child is helping me.” Indonesia bans child labor. Amnesty International also finds evidence of the use of paraquat, a highly dangerous herbicide. The chemical is banned in the European Union and Wilmar itself has made commitments to phase out its use. The report finds that suppliers are still routinely making use of the chemical. The investigators found one instance of a worker who was splashed in the face by the chemical, leading to severe injuries. “I can’t see through the eye. I get headaches in part of my head, when I do, my eye feels really swollen. I still get a bit dizzy”. These allegations are obviously serious and, if true, highly damaging to the brands concerned. 
In a statement to Fairfax Media, Wilmar acknowledged the report’s findings, and urged many within the industry to help combat these issues. Unilever also welcomed the report, agreeing that more needs to be done in the sector. Nestlé also expressed concern and vowed to challenge Wilmar with the presented evidence. Colgate-Palmolive stated that they “prohibit discrimination and child or forced labor”. The companies concerned are signatories to the Roundtable on Sustainable Palm Oil, a declared commitment to use sustainable sources of the commodity. Despite this, and their own independent policies, these major organizations have failed to spot abuses within their own supply chains. Time and again, we can see that campaigning bodies, or even journalists, out-perform auditors in uncovering abuse in supply chains. There are some advantages that the smaller bodies possess: they are nimble, more focused and, even though they are significantly smaller than their target, can commit more resources to the investigation. Large multinationals, which commonly manage over 100,000 suppliers, are overwhelmed by the scale of their own supply chains. Third parties can home in on one supplier and subject it to a level of scrutiny the corporation could never match. This is the essential weakness of large companies. As multinationals' supply chains have bloated to ungovernable levels, they will be continually vulnerable to accusations of abuse invisible to the corporation. Unless they commit the resources to match the NGOs and journalists, big companies will be easy prey for some time to come.
d6575a0c0720f74e78fc1bd4deac65b5
https://www.forbes.com/sites/jwebb/2016/12/01/the-good-are-getting-better-and-the-bad-are-getting-worse-global-bribery-risk-grows-more-uncertain/
The Good Are Getting Better And the Bad Are Getting Worse: Global Bribery Risk Grows More Uncertain
The Good Are Getting Better And the Bad Are Getting Worse: Global Bribery Risk Grows More Uncertain Shutterstock A new study into the risks faced by businesses across the world has found that the threat of bribery is polarizing. Since 2014, more states are categorized as ‘extremely low risk’, but the number of ‘extremely high risk’ countries has also grown, according to TRACE International, an anti-bribery organization. In the TRACE Matrix, Sweden and New Zealand are considered the countries subject to the lowest levels of risk. At the other end of the spectrum, the oil-rich states of Nigeria and Angola are ranked bottom. The United States fails to make the top ten and is ranked at the 20th position – a fall of 10 places since 2014.
TRACE Matrix 2016
Rank – Country – Risk score
1 – Sweden – 10
2 – New Zealand – 15
3 – Estonia – 17
4 – Hong Kong – 17
5 – Norway – 19
6 – Ireland – 22
7 – Netherlands – 24
8 – Singapore – 25
9 – Finland – 26
10 – Denmark – 27
11 – Japan – 27
12 – Canada – 28
13 – Georgia – 28
14 – Switzerland – 29
15 – UK – 31
20 – USA – 34
The study’s previous iteration found that only a single country was categorized as ‘extremely low risk’; this has now increased to five. Two years ago, 15 states were ‘extremely high risk’; in 2016 the number stands at 19. This increasingly dynamic situation is making for more uncertain trading conditions. Yet, it is often businesses that are leading the charge to clean up economies. “Numbers aside, we feel that bribery risk, generally speaking, is decreasing across the globe due to increased enforcement of anti-bribery regulations, more awareness of the damaging effects of corruption on economic and social development and a seismic shift in the business community over the last decade in how they view bribery and corruption,” says TRACE President Alexandra Wrage. “Compliance is no longer driven solely by enforcement. Most companies now agree that bribery is a bad business strategy and recognize the negative effects of bribery on corporate performance and reputation.” The study, covering 199 countries, measures bribery risks. This refers to companies’ exposure to corruption in their exchanges with government. Data is collected in a range of domains, looking at business-government interactions, laws, requirements for financial transparency and civil service independence. On this list are two surprising countries. Firstly, Estonia is placed third, a significant jump from its position at 22nd two years ago. This elevation to the top three partly reflects its increasing incorporation into the Scandinavian orbit and out of its Soviet past, but perhaps more pertinent is the country’s rapid digitalization of its governmental programs. Estonia has far outperformed its neighbors. Latvia occupies the 22nd spot and Lithuania is ranked 25th. Russia, by contrast, is way down at 94th. Since 2014, the small Baltic state has shifted many bureaucratic processes onto the internet. In Estonia’s e-government portal, it is possible to use digital signatures, obtain medical prescriptions and even vote online. The government calculates that the efficiencies brought about by these digital initiatives have saved the country 2% of annual GDP. The other benefit of conducting transactions online is that they bring total transparency and traceability. Corruption feeds on secret off-the-books deals, and denying this through an open, electronic platform goes a long way toward solving corruption issues. The other surprising entry on this list is Georgia. 
This small country in the Caucasus has had a difficult journey since its independence from the USSR in 1991. In 2003, the country was convulsed by revolution as its president was deposed. In 2008 it suffered an invasion by Russian troops, as the Kremlin moved to seize the rebellious regions of Abkhazia and South Ossetia. Since then, its leaders have aimed to strike a more stable path, seeking to reform the country’s police, public services and public procurement practices. Georgia’s government forged a path toward creating a clean business environment, and fighting corruption was a primary objective. It appears that these efforts are proving successful in independent assessments. The TRACE Matrix is a means to identify bribery risks that companies may face in their interactions with overseas governments. This is a somewhat unusual take compared to other corruption indices. Transparency International’s ranking of states by their degree of perceived corruption is more circumspect regarding the two above countries. Of 168 countries, Estonia is ranked 22nd and Georgia 48th. In the World Bank’s 2017 Ease of Doing Business Index, Estonia is 12th and Georgia 12th, closer to TRACE’s evaluation. So it appears that these more business-focused indicators are picking up trends that the more general corruption perception rankings miss. In making business decisions about partnering in a high-risk country, be it a sourcing or a sales collaboration, managers may look to indices such as the TRACE Matrix to provide a more accurate gauge of the likelihood that a firm may fall victim to bribes and extortion demanded by a public official. Given that economies are polarizing between the more extreme states of highly clean and highly corrupt, businesses need to understand the particularities of the markets in which they operate. Reports like this can provide decision-makers with powerful data regarding the exact bribery risks to which their businesses are exposed.
b4bc4675d76457fa60060a054b202144
https://www.forbes.com/sites/jwebb/2017/01/03/the-new-silk-road-china-launches-beijing-london-freight-train-route/
The New Silk Road: China Launches Beijing-London Freight Train Route
The New Silk Road: China Launches Beijing-London Freight Train Route A general view of the first China Railway Express, a new railway line from China to Europe, during the inauguration by visiting Chinese President Xi Jinping in Warsaw, Poland, on the sidelines of the International Forum on the New Silk Road, Monday, 20 June 2016. The visit was intended to boost China's infrastructure investments in Europe and to open China's market to Poland's foods. (AP Photo/Czarek Sokolowski) On Sunday, the Chinese government launched a rail freight service between China and London. This is the first direct rail link between China and Great Britain. The service will run from Beijing across Asia and Europe before terminating in London. The route is actually not new at all. It is part of the old Silk Road, which commenced in 200 BC, through which Chinese silk caravans carried wares to Europe and Africa. The trail provided much wealth and prestige for the Chinese Empire of the day. Now, Beijing is aiming to resurrect this historic trade route by using rail power. The journey is as much an engineering challenge as a logistical problem. Freight must swap trains along the way, as railway gauges vary between the connecting countries. In its 18-day journey, freight will cover 7,456 miles of railway, crossing Kazakhstan, Russia, Belarus, Poland, Germany, Belgium, France and the UK. The new route unlocks a further option for shippers. Currently, the choice is two-fold. One, take an ocean-bound route, which, although cheap, can be slow. Two, use an air carrier that is considerably faster, but much more expensive. A direct rail link between Beijing and Western Europe enables manufacturers to explore new means to lower transport costs. The line may not provide a suitable alternative for all producers, but canny negotiators can leverage the new market entrant to lower prices on their established pathways by boat or plane. The London link expands China's growing portfolio of rail connections. There are presently 39 lines that connect 12 European cities with 16 Chinese cities. The move is part of China’s new ‘One Belt, One Road’ strategy, launched by President Xi Jinping in 2013. The initiative aims to improve links between Beijing and its neighbors within Eurasia. Logistical linkages are a key means to embed co-operation and trade relationships between countries. The first freight train as a part of this strategy linked China to Tehran. This connection links Iran with the Central Asian countries of Kazakhstan and Turkmenistan and, ultimately, with China. Many analysts believe that an expanded Chinese economic role within Central Asia will also enhance its political influence over an increasingly important global region. In addition, Central Asian countries, such as Kazakhstan, may see a boost to their own economic performance. Historically, such countries have struggled to connect to the global economy. The new line may provide easy access to wealthy importer markets. The announcement is also well-timed for the British. The government in London is currently scouring the world for trade deals, in anticipation of a departure from the European Union. China's economy is high on the list of prospective partners as the UK aims to open new trading deals unfettered by EU restrictions. Economists look to infrastructure projects as a spur to economic development. China is a past master at such engineering initiatives. 
It has transformed its economy through a series of well-planned transport and manufacturing investments. Should this new line prove successful, we can expect to see a similar resurgence of economic activity along the ancient Silk Road.
301867428c949757260edd943bb25fc7
https://www.forbes.com/sites/jwebb/2017/01/16/supply-chain-audits-work-for-corporations-but-not-the-planet-says-new-report/
Supply Chain Audits Work for Corporations, But Not The Planet, Says New Report
Supply Chain Audits Work for Corporations, But Not The Planet, Says New Report Shutterstock A new report argues that supply chain audits are ineffective at improving compliance and act merely to embed an unhealthy status quo in multinational offshore sourcing. The Sheffield Political Economy Research Institute (SPERI), based at the University of Sheffield in the UK, conducted a series of interviews with experts and practitioners. In one interview, a former director of CSR at a US retailer painted a bleak picture of the sector: “Within the social compliance world, it is now standard operating understanding that audits don’t work to achieve change within organizations”. The authors claim that supply chain audits are “ineffective tools for detecting, reporting, or correcting environmental and labor problems in supply chains.” The supplier audit industry, the report argues, “is ‘working’ for corporations, but failing workers and the planet”. For example, in 2013, just months before the collapse of Bangladesh’s Rana Plaza factory, in which hundreds of workers lost their lives, the facility passed a compliance audit. The industry that ‘certifies’ suppliers is frequently subject to criticism. That auditors are paid by those that they are certifying creates an inherent conflict of interest, many argue. The primary objective of audits is to provide information. Corrective action, if it is to succeed, can only be effective if it is grounded in facts. But, the study contends that the “information audits provide is selective and fundamentally shaped by the client”. That is, the framework under which auditors operate is pre-defined to minimize the likelihood of uncovering wrong-doing. Suppliers below the first tier, for example, are rarely audited, despite making up the bulk of the supply base and having a greater probability of non-compliance. As such, audits, taken as a whole, provide an over-optimistic picture of supply chain compliance levels. They do not so much identify weaknesses, the authors contend, as confirm the status quo. The increasing use of audits as a tool of governance is bolstering corporate interests and influence over consumers and policymakers and, ultimately, deepening corporations’ power to make their own rules and norms and evaluate and report on their own performance. If auditors are as ineffective as this report suggests, it may explain some of the recent revelations made by investigative journalists. In October of last year, reporters traced evidence of child labor in the Turkish textile market, which services high street retailers across Europe. Undercover journalists also exposed practices of illegal employment in Australian fruit farms in November 2017. The report is hard-hitting and perhaps unfair on those working in the supply chains of multinationals who are earnest in their ambition to improve the lot of workers and the environment. In my experience, many are proud of their work and believe they are making real progress. They would be horrified by the suggestion that supplier audits are cynically abused by myopic corporations that only look for profits. There are other factors that may explain the imperfect record of supplier audits with which the report does not directly engage. Multiple audit standards, for example, undermine the industry’s effectiveness. Selling to a corporation is often accompanied by a range of demanding specifications regarding quality, production methods and delivery timetables, alongside sustainability compliance. 
These requirements can often be unique to a single multinational. Consequently, suppliers can find themselves in a situation where they duplicate production lines as they try to meet disparate and contradictory standards. Another factor that may diminish the power of audits is supplier deception: companies deliberately deceiving both the auditors and the sourcing company. SPERI notes that “deception in the audit regime is widespread and known to corporations” but blames the multinationals, who are arguably also victims of the deception. But supplier deception is hard to detect. Even one of the interviewees in the study noted that suppliers would “drill their people on what they need to say” prior to audits. How can you uncover non-compliance when it is concealed by ruse? And, even knowing that audits are subject to such games, is that sufficient reason to abandon them entirely? Although multinationals appear flush with wealth, these resources are often spread thinly. It is not uncommon for a major corporation to source from over 100,000 suppliers at the tier 1 level. However, companies are keen to give the investor community an impression of unbridled success. Observe the regular ceremony around record-beating multi-million dollar quarterly results, which projects an image of effortless prosperity. Yet, in the battle for resources, the rich multinationals will, strangely enough, always be outlasted by cash-strapped investigative journalists who have the staying power to stick with a supplier for as long as it takes to unearth a scoop that embarrasses the corporation. Auditors, by contrast, must divide their limited time across 100,000 suppliers. They work to tough targets under time pressure, and suppliers can exploit this divided attention to hide skeletons. To address this challenge, auditors can emulate journalists. Investigators may consider spot-testing fewer suppliers deep in the supply chain and engaging in the tactics used by journalists to ensure compliance. This can involve following logistics routes, visiting sites unannounced or using other creative measures to reach the truth. All contracts should be written to allow for these techniques. SPERI goes further and calls for the supply chain audit industry to move from the private sector to public governance. Self-policing, the authors contend, has failed and radical remedies are now required. There are challenges to this proposal. The cross-national nature of supply chains has left governmental regulation patchy, and no one is sure who has legislative authority. Moreover, in an era of declining international cooperation and even protectionism, this may not be the time to introduce a new multinational regulatory body. Perhaps a greater degree of transparency within the industry would be a move in the right direction. Opening results to public scrutiny and admitting the limitations of the methodology may make audits more effective. If the industry does not look to reform itself, perhaps governments will take calls for more stringent legislation more seriously and look to forcibly expose the supply chain to public scrutiny.
3e4425f9e037da47ac538bc0c350d3ed
https://www.forbes.com/sites/jwebb/2017/02/28/how-to-build-a-local-supply-chain-six-tips-to-survive-in-a-protectionist-world/?sh=370089902f9b
How To Build A Local Supply Chain: Six Tips To Survive In A Protectionist World
How To Build A Local Supply Chain: Six Tips To Survive In A Protectionist World U.S. President Donald Trump speaks during the National Governors Association meeting in the State Dining Room of the White House in Washington, D.C., on Monday, Feb. 27, 2017. Photographer: Aude Guerrucci/Pool via Bloomberg Donald Trump’s assaults on big corporations and their reliance on Mexican manufacturing have caused businesses to revisit their supply chain plans. For decades, companies have been steadily offshoring and outsourcing production. This has led to a heavy dependence on foreign manufacturing and stretched supply chains across the globe. It appears that the world may be entering a new phase of protectionism. If countries start raising tariff barriers and penalizing transnational corporations, the global supply chain model may grow untenable. With Trump in the US, Brexit in the UK and the threat of anti-globalist forces surging to power in France and the Netherlands, the pressure is on companies to shorten their supply chains. With this new era in mind, I’ve put together six tips that may help companies assess their exposure to the threat of protectionism and prepare measures that allow for the rapid establishment of a local supply chain. Analyze the local market The first step is to understand the capabilities of local suppliers. The general assumption across much of business is that when jobs were offshored, skills were also shifted overseas. However, this notion may not be founded in reality. Local suppliers, although they may not directly market capabilities that meet your exact needs, may be open to converting their operations to produce new lines. Buyers can no longer routinely overlook their ‘expensive’ neighbors. There may be plenty of opportunity to unlock the potential of local suppliers. There are also significant advantages to buying local: transportation costs are considerably lower, visibility is greater, and overall risk and uncertainty are reduced. Such benefits may offset the price hikes. Supplier development If local companies do not have the capabilities required, they can be developed. This is known as ‘supplier development’. There are two options available to businesses. Firstly, provide technical assistance to suppliers to support their expansion and development of new capabilities. Secondly, ‘encourage’ suppliers to merge with others in order to increase their ability to produce at scale. The latter has frequently been practiced by companies like IBM, which has used the technique to create suppliers that can service Big Blue’s global demand. But there is no reason why a company cannot deploy the same approach, bringing together a collection of smaller suppliers to forge a consortium that covers the shortfall created by a shortened supply chain. Bend the ear of the government Recently, within the UK context, the Japanese auto producer Nissan has suggested that the British government should help develop the local supply chain. If politicians are serious about rebuilding local industry, then tax breaks, subsidies and other policy incentives can quickly ensure that the local economy is not impacted by protectionist moves. As such, companies can enlist governmental support in their local supplier development efforts. Invest in automation The make-or-buy calculations upon which most businesses are founded mean that a return to local production is financially unviable. 
Simply put: the cost of labor within the developing world is incomparably cheaper than that of American or European workers. The only way large companies can build a model of equivalent cost is by removing the human element entirely and building automated factories. Many are already doing this. Adidas, for example, has built a new factory in Ansbach, Germany. The ‘Speedfactory’ operates with almost no direct human interaction. The more observant commentators will note that such undertakings will ‘bring back’ relatively few jobs. Innovation incubators A more long-term option is to cultivate more innovative relationships with small businesses or universities. Companies have established ‘innovation incubators’ that provide local inventors with an environment in which to develop their ideas. The company can provide commercial guidance and enjoy the early-investor benefits that flow from final realization. Obviously, this will not allow production to switch for some years, but investment in incubators clearly demonstrates commitment to the local entrepreneurial community. Insourcing A final option is to internalize supply chain capabilities directly into the business. This involves acquiring a supplier. Usually, this entails retaining the physical capital in situ. Clearly, for most transnational companies, productive assets are overseas and often in the hands of suppliers, but this problem can be overcome if companies are willing to engage in the expensive business of transporting plant. However, savvy organizations can look for local companies that, while they may not have developed the precise solution to match the need, may be open to direct intervention as part of a new corporate entity.
23c96b8f7c2ae402a25b2ff61b3b017a
https://www.forbes.com/sites/jwebb/2017/06/26/building-a-supplier-relationship-program-create-a-framework-not-a-tool/
Building A Supplier Relationship Program: Create A Framework, Not A Tool
Building A Supplier Relationship Program: Create A Framework, Not A Tool Shutterstock Many organizations get SRM wrong. Buyers assume that it’s a special form of negotiation that can lower prices by bamboozling suppliers through the language of partnership. Supplier relationship management, in fact, involves a lot more work beyond the lip-service of transaction negotiation. The refrain proffered to me by one purchasing executive is, “the negotiation failed, now let’s try SRM.” This reflects a piecemeal approach to SRM. It can be an opportunistic activity whereby buyers seize a moment and look to couch their negotiation in the terms of friendship in order to persuade a supplier to drop prices. Promises of longer contract periods, extended payment terms and even the potential of greater volumes in the future are offered in return for a better deal now. This is termed a ‘strategic partnership’. The naked cynicism of such an offer is apparent once the negotiation is given the most cursory analysis. The supplier will also see through a proposal that will likely be regarded as a ruse. As such, the ‘strategic re-alignment’ of the relationship will fail, and the results of the SRM initiative will disappoint. As we so often see, supplier collaborations implode. A better way to think of SRM is as a framework upon which a range of sourcing activities hang, from negotiation to long-term planning. At its core, SRM is about supplier segmentation. As we saw with the Kraljic Matrix – a classic piece of supply chain theory which has endured like few other models in management science – intelligent sourcing is derived from understanding groups of suppliers and their importance to your organization. Once this is appreciated, buyers can target their strategies more accurately and more effectively towards suppliers. The most successful organizations with whom I have spoken use SRM to drive all activities regarding supplier management. It is not deployed only as a tool in special circumstances, nor is it applied only to a tiny proportion of the supplier base. Rather, it is used across the company and across the entire supply base. For effective SRM, every supplier must enter into the SRM process. This applies just as much to the non-strategic paperclip vendors as it does to the critically important suppliers with whom your organization spends a great deal of money. The difference is the treatment that SRM affords to either party. Clearly, more attention and resource is directed towards the critical supplier. We should understand SRM, therefore, more as an input into sourcing than as the output. It acts as a decision-making tool for buyers in determining how to spend their time. Impactful SRM should do two things. Firstly, provide guidance for buyers in mapping suppliers onto the chosen organizational segmentation model. Here the Kraljic Matrix is a good starting point, but refinements for the particulars of business priorities should also be made. Secondly, SRM should provide methodologies and pre-prescribed policies for managing suppliers once their segment has been identified. That is to say, for approved strategic suppliers, buyers should establish more advanced governance channels and ensure strong risk management measures, while for transactional suppliers only rudimentary protections are required. Through building a cross-company framework, the entire organization is aware of company-wide standards for contracting, thereby reducing supply chain risk. 
But a standardized form of contracting also simplifies administration. Everyone knows the parameters of the framework, and little additional time is needed to create new supplier relationships. Lastly, SRM frameworks communicate to suppliers their relative importance to the organization. For those suppliers that are not considered ‘strategic’, a clear and objective pathway should be apparent, should they seek to increase their influence or share of wallet. By thinking of SRM as a one-off activity that is only brought out in peculiar situations, buyers and organizations forgo these widespread benefits of the SRM framework. Indeed, one may make the point that they are not engaged in SRM at all.
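To make the segmentation-plus-treatment idea concrete, here is a minimal sketch of the kind of rule an SRM framework might encode. The quadrant names follow the standard Kraljic Matrix; the scoring thresholds, example suppliers and treatment notes are illustrative assumptions rather than anything prescribed in the article.

```python
# Minimal sketch of Kraljic-style segmentation driving differentiated treatment.
# Thresholds, example suppliers and treatment notes are illustrative assumptions.

def kraljic_segment(profit_impact: float, supply_risk: float, threshold: float = 0.5) -> str:
    """Map a supplier's profit impact and supply risk (both scored 0-1) to a segment."""
    if profit_impact >= threshold and supply_risk >= threshold:
        return "strategic"
    if profit_impact >= threshold:
        return "leverage"
    if supply_risk >= threshold:
        return "bottleneck"
    return "transactional"

# The framework then prescribes how much governance each segment receives.
TREATMENT = {
    "strategic":     "executive sponsor, joint development plans, full risk management",
    "leverage":      "competitive tendering, regular price benchmarking",
    "bottleneck":    "secure supply, qualify alternatives, demand management",
    "transactional": "standard terms, catalogue buying, minimal governance",
}

suppliers = {
    "Critical component maker": (0.9, 0.8),  # hypothetical scores
    "Paperclip vendor":         (0.1, 0.1),
}

for name, (impact, risk) in suppliers.items():
    segment = kraljic_segment(impact, risk)
    print(f"{name}: {segment} -> {TREATMENT[segment]}")
```

The point of encoding it this way is the article's own: segmentation is an input to every supplier decision, not a one-off exercise reserved for a handful of 'key' accounts.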
3cf766f8bafa028ef77a0df0b5a1f770
https://www.forbes.com/sites/jwebb/2017/06/27/how-to-manage-a-supplier-that-is-bigger-than-you/
How To Manage A Supplier That Is Bigger Than You
How To Manage A Supplier That Is Bigger Than You Shutterstock A question that buyers frequently ask me relates to the management of suppliers that have more muscle power than you do. As hard as you may try, you cannot lower the quoted prices in the face of a considerably more powerful adversary. Are you doomed to face constant price inflation forever? There may be some measures that can be undertaken to postpone a price rise, but a long-term answer lies in changing the nature of the game. Within a company's supplier relationship management program, the Kraljic Model details a segment of suppliers called ‘bottleneck’. These are vendors that are ‘high risk’, in that their non-delivery will seriously harm the business, but ‘low profitability’, in that their goods or services do not immediately add value to the company’s bottom line. Generally speaking, the supplier is more powerful than you in the negotiation and can call the shots. Within a negotiation context, this tends to mean inflationary prices. This is not for want of trying. On average, 8% of a buyer’s supplier base is represented by bottleneck companies, but over a quarter of buyer man-hours are spent trying to reduce the negative impact of these suppliers on the company. Such suppliers are a drain on internal resources, and negotiations with them chiefly result in price hikes. There are some cheap shots that help stave off the pain. Hiding behind the busy demands of corporate life by reducing availability and failing to respond to emails or meeting requests can postpone forthcoming price rises. I know of a number of buyers that have deployed such tactics to gain a few months on their adversaries. But these techniques are clearly not sustainable, and they may also pinch at a buyer’s sense of professional pride. An answer for many buyers is to seek to redress the imbalance by unilaterally declaring the bottleneck supplier to be, in fact, ‘strategic’. This aims to communicate to the vendor that there are long-term prospects on offer in return for working with the buyer on price in the short term. A range of incentives can accompany such an offering, from shorter payment terms in contracts to a vague promise of greater volumes in the future. Suppliers tend not to invest much credence in these proposals. They are likely to accept whatever terms buyers use to couch their deals and then look to move prices upwards anyway. A more sustainable approach is to change the nature of the game. Within the Kraljic Matrix, that means moving suppliers in the ‘bottleneck’ segment into another, more advantageous category of spend. Here, a more innovative approach to supply chain management may produce an answer for a buying organisation. Buyers can ask themselves whether the requisitioned good or service is absolutely necessary as presently specified. Buyers can exercise ‘demand management’ and question the needs of purchasing's internal customer. Perhaps the required resource is not strictly necessary. Or perhaps a slightly different product that suits a similar end is required. The oft-repeated phrase here is ‘buying a hole, not a drill’. Internal customers are not as interested in the method as they are in a solution. An alternative product or supplier may answer that need. Another potential means of dissipating the effects of bottleneck suppliers is to work with other suppliers to enter the market. We see this in certain geographies, where supplier numbers are low. 
Buyers can aim to develop smaller suppliers that have the potential to add competitive pressure to a marketplace and help lower price levels. Again, a more creative approach is required, and certainly a different skillset than is traditionally present within purchasing management, but by deploying such measures buyers can avoid the one-way journey to price rises. The key is not to react to the unfair nature of the game that is presented to you, but to seek to change the rules of that game.
5891a6fae4d66136d543087cc81c3bab
https://www.forbes.com/sites/jwebb/2017/10/31/malaysia-reopens-1mdb-case-a-return-to-asias-biggest-corruption-scandal/
Malaysia Reopens 1MDB Case: A Return To Asia's Biggest Corruption Scandal
Malaysia Reopens 1MDB Case: A Return To Asia's Biggest Corruption Scandal (AP Photo/Joshua Paul) The attorney-general of Malaysia has announced that investigations into the 1MDB scandal will reopen. It was thought that the police had concluded their investigation into the affair, which caused significant business disruption across South-East Asia, but we may see more revelations going forward. 1MDB is a multi-billion-dollar national investment fund for Malaysia that is aimed at encouraging economic development and business creation. The country was convulsed by scandal in 2015 when reporters uncovered evidence of mass corruption. The reopening of the case may bring to the surface instances of fraud that were hitherto hidden or left uninvestigated. In 2015, the Wall Street Journal alleged that some $700 million of 1MDB funding was directed to the personal accounts of the Malaysian prime minister Datuk Seri Najib Razak. Since then, Swiss prosecutors have estimated that some $4 billion has been laundered through the Malaysian fund. The scandal had a huge fall-out, which has affected the Malaysian establishment and businesses across the region, including financial institutions from supposedly squeaky-clean Singapore. A Swiss banker was prosecuted for money laundering in Singapore earlier this year. The case involved the laundering of up to $1.7 billion in funds previously linked to the 1MDB fund. An Abu Dhabi-based sovereign fund is claiming $600 million in refunds following the investigations. Further investigations by Hong Kong and UK authorities have added to the international clamour. Malaysia's own judicial response has been light. Prime Minister Najib Razak was cleared of any wrongdoing by a parliamentary commission in the 1MDB case in 2016. The United States Department of Justice announced earlier this year that it plans to recover $540 million in stolen assets related to the 1MDB affair. Last week, the Prime Minister’s Office issued a statement, saying, “further investigations into 1MDB are currently being done by the police who acted based on the AG’s [attorney-general’s] instructions on Oct 24.” The government unveiled a bumper budget for 2018, which some claim is a ruse to distract the public from the still-unfolding scandal. Indeed, the prospect of another wave of investigations may deliver another round of corruption allegations at the heart of the Malaysian political and business leadership, and there is further potential for more foreign companies to become involved in the affair. Unfortunately, any allegation of fraud or corruption is difficult to investigate and time-consuming to prosecute. The networks alleged to have engaged in money laundering of 1MDB funds are tight-knit and operate at the elite levels of Malaysian society. Hopefully, the move to redouble Malaysia's own prosecution efforts will help reduce the residual doubts around business operations across Malaysia and, indeed, South-East Asia.
295e7925b047f731db2fcf9fb9625ba1
https://www.forbes.com/sites/jwebb/2017/12/27/how-to-manage-and-influence-internal-stakeholders/?sh=6de457a871a7
How To Manage And Influence Internal Stakeholders
How To Manage And Influence Internal Stakeholders Shutterstock For buyers, managing suppliers is only half the battle. The real challenge within businesses often lies within the office: internal stakeholders. Here are five tips for gaining buy-in for projects. Influencing suppliers (or ‘external stakeholders’) is now a matter of routine for buyers. From negotiation to post-contractual management (supplier relationship management), buyers are accustomed to interacting with third parties and aligning them to meet the business need. For new projects, such as developing new products, suppliers can be reluctant to commit, and more entrepreneurial buyers will have to stretch their skillsets and manage the change at multiple levels within the organization. As researchers of supply chain management have found, overcoming this wall of resistance can be key to unlocking the benefits of collaborative initiatives. However, for these ideas to gain traction within the organization, buyers must turn their attention to unexpected places and look to manage their colleagues within the business. Typically, these conversations tend to relate to increasing influence over expenditure. Buyers often lament the maverick behaviour of internal customers, who are happy to recklessly spend company money with scant regard for procurement policies. This article reviews five quick tips for engaging key stakeholders in supplier projects. Understand the stakeholder community There is a wide stakeholder base that requires buyers’ understanding and analysis. As a first step, buyers must survey all individuals within the business who may hold an interest in a supplier project. Where a creative campaign is being run, for example, those within the marketing department should be included. Where the initiative involves some technological aspect, IT professionals are relevant. Prioritise the needs of key stakeholders Not all stakeholders are equal in importance or relevance for any given supply project. Senior individuals, for example, may have status but lack the heft to influence final decisions. Spending excessive effort trying to influence powerless staff wastes time. Buyers therefore need to prioritise the needs of certain stakeholders. One of the means of doing this is mapping parties on a stakeholder management tool comprising two dimensions: influence and level of interest. Those with both high levels of influence and interest in a project are considered key stakeholders and must be managed with the utmost attention. Conversely, those with low influence and interest levels can be considered non-critical and can be involved in communications, but little direct decision-making. Those that have high levels of interest but low influence should be kept informed as regularly as practical. Weekly email updates or invitations to monitor key meetings should be sufficient. Those with low interest levels but a high degree of influence require more direct strategies. Often, this community consists of senior management who demand the kind of engagement associated with their level: consider regular but short briefings or presentations, for example, and make all the detail available where it is requested. Align objectives Once the key stakeholders have been identified, buyers must look to engage them. This starts with an initial exploratory conversation with each key stakeholder to understand their goals within the business. Few are naturally aligned with the somewhat narrow objectives of procurement. 
Typically, buyers look to eradicate costs, whereas most other departments are tasked to spend. Simply understanding their KPIs is a good start, but the end goal is to build a supplier project that directly speaks to these objectives as well as to buyers’ own cost-cutting targets. Speak their language Procurement is full of useful phraseology that provides a handy shortcut for professionals managing complex projects and ideas. However, to the outside world, this language represents unattractive jargon. Modifying the way in which buyers express their thoughts can be a surprisingly compelling way to create an interpersonal rapport with key stakeholders. Instead of encouraging marketing co-workers to create a tender, consider instead asking suppliers ‘to pitch’ ideas. This amounts to the same thing, but the subtle change in words shows the stakeholder that their needs and perspectives are recognized. Develop plans in partnership Buyers, like many in business, are perhaps unduly fond of their silos. Their unique boundary-spanning role of managing third parties can often insulate them from other organizational priorities and conceal company objectives. Once objectives have been understood and a clear means of communicating with this party established, buyers can plan more assuredly with their internal counterparts. This moves beyond simply sending them minutes; it requires their input on, and sign-off of, any project in which the stakeholder holds an interest. From here, buyers can clearly demonstrate to the key stakeholder that their goals are being included in the supplier project.
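The influence/interest mapping described above lends itself to a simple decision rule. The sketch below is illustrative only: the cut-off value, the example stakeholders and their scores, and the wording of each engagement approach are assumptions for the example, not part of the original article.

```python
# Illustrative sketch of the influence/interest grid described above.
# Cut-off value, stakeholder names, scores and actions are assumptions.

def engagement_plan(influence: int, interest: int, cutoff: int = 5) -> str:
    """Scores are 1-10; return an engagement approach for the stakeholder."""
    if influence >= cutoff and interest >= cutoff:
        return "key stakeholder: involve in decisions, co-develop the plan, seek sign-off"
    if influence >= cutoff:
        return "high influence, low interest: short regular briefings, full detail on request"
    if interest >= cutoff:
        return "high interest, low influence: weekly email updates, invite to key meetings"
    return "non-critical: include in general communications only"

stakeholders = {
    "Marketing director": (8, 9),  # hypothetical scores
    "Finance controller": (7, 3),
    "Project analyst":    (2, 8),
}

for name, (influence, interest) in stakeholders.items():
    print(f"{name}: {engagement_plan(influence, interest)}")
```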
7cc63fbb851ba70194b82b6d318f546f
https://www.forbes.com/sites/jwebb/2018/03/30/how-to-start-a-supplier-relationship-management-program/
How To Start A Supplier Relationship Management Program
How To Start A Supplier Relationship Management Program Shutterstock SRM is difficult to implement in the supply chain, and buyers are often unsure where to start. This article provides a briefing on introducing supplier relationship management for the first time. Businesses have a sense that developing deeper and more productive relationships with suppliers represents the future. Companies like Toyota turned excellent supplier management into a competitive edge that converted a small regional manufacturer into the largest automotive company in the world. To emulate this success, companies are looking to build supplier relationship management programs into their own supply chains. This is aimed, in part, at reducing the complexity of managing tens of thousands of suppliers, but also at creating more collaborative relationships that yield greater productive value. Unfortunately, failure rates within supplier collaboration projects are high, and deep interactions with vendors are sometimes held back by the lack of an integrated or trusting relationship. Such misfortunes can be avoided where a more methodical approach is taken to implementing and managing the supplier base. Here are four steps to introducing successful SRM. Segment the supply base Successful SRM should be considered a framework and not a tool. That is to say, it is not something that is done to suppliers, but a set of activities which guides buyers in directing their resources where they will have the deepest impact. This starts with supplier segmentation. The Kraljic Matrix is the default for many managers. This asks companies to examine their suppliers on two dimensions: profitability and risk. Suppliers that have a high profit impact on the company, as well as a high risk factor, should be considered strategic and managed accordingly. Those with low profitability and low risk levels should conversely be considered ‘transactional.’ This rudimentary step is often overlooked by businesses. But if SRM is to succeed, it depends on strong processes for mapping suppliers into the right segments. Build a supplier governance framework Secondly, once segments have been defined, buyers must create templated structures by which the relationships are governed. Who is responsible for managing the day-to-day running of the relationship? Who is involved from the business? At what point do we bring in senior executives? The segment a supplier sits in should imply an answer to many of these questions. Strategic suppliers warrant more governance and monitoring, whereas transactional suppliers require little. Build key performance indicators Again, building on the previous stages, once suppliers are segmented and a framework to manage them is in place, organizations can look to defining the metrics by which success will be measured. Traditionally, within the world of procurement, businesses look to year-on-year savings as the central key performance indicator (KPI). However, this is a somewhat blunt tool and struggles to reflect the variety of projects that flow from strategic suppliers. Rather, buyers should consider more diverse metrics for such initiatives. The governance structure should provide resources from which companies can build more accurate KPIs. For strategic suppliers, for example, where executives and stakeholders are more closely involved, the views of these individuals can be converted into ratings which can in turn be expressed through metrics. 
Create supplier strategies Finally, the sourcing strategies, negotiation levers and approaches to supplier management should be pinned to the segments assigned to suppliers. Where strategic suppliers are subject to greater governance and more nuanced performance tracking, the types of deliverables expected from these relationships are similarly more impactful. Such suppliers should produce major product innovations, process evolutions or some other significant market advantage for the buying organization. Strategies for more transactional suppliers are more modest. Buyers here will rest more heavily upon standard levers such as competition and price negotiation. To conclude, these four steps must be considered as a whole as well as sequentially. SRM is essentially the framework that derives from each of these aspects. Where, for many companies, it is something performed in relation to only a small number of ‘key’ suppliers, the learnings that come from a wider perspective of the supply base are lost and the potential for failure increases. A successful SRM introduction should look at the whole network of third parties and apply the above process to all suppliers.
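One way to picture the KPI step is as a simple weighted scorecard that turns stakeholder and executive ratings into a single figure, alongside (rather than instead of) the traditional savings number. The criteria, weights and 1-5 ratings below are invented purely for illustration; they are not drawn from the article or any particular company's framework.

```python
# Rough sketch of turning stakeholder ratings into a strategic-supplier KPI,
# as an alternative to a savings-only measure. All figures are illustrative assumptions.

ratings = {
    # criterion: (weight, [ratings from executives/stakeholders on a 1-5 scale])
    "innovation contribution": (0.4, [4, 5, 3]),
    "delivery reliability":    (0.3, [5, 4, 4]),
    "responsiveness":          (0.2, [3, 4, 4]),
    "year-on-year savings":    (0.1, [2, 3, 3]),
}

def supplier_score(ratings: dict) -> float:
    """Weighted average of mean ratings, giving a single 1-5 scorecard figure."""
    total = 0.0
    for weight, scores in ratings.values():
        total += weight * (sum(scores) / len(scores))
    return round(total, 2)

print(f"Strategic supplier scorecard: {supplier_score(ratings)} / 5")
```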
6f59167fecdd6edd3596ae7ff07f09f9
https://www.forbes.com/sites/kaeliconforti/2020/08/03/melbourne-begins-strict-level-4-lockdown-this-week-after-another-covid-19-spike/?sh=4ae9aa3150f9
Melbourne Begins Strict Stage 4 Lockdown This Week After Another Covid-19 Spike
Melbourne Begins Strict Stage 4 Lockdown This Week After Another Covid-19 Spike Nearly four weeks into a strict six-week Stage 3 lockdown that began on July 8, Victoria Premier Daniel Andrews declared a State of Disaster on August 2, announcing a new Stage 4 lockdown effective immediately in the Greater Melbourne area and a new curfew between 8 p.m. and 5 a.m. for the next six weeks until September 13. Regional Victoria, meanwhile, has been elevated to a Stage 3 lockdown, with its own set of travel restrictions. “We must do more. We must go harder. It’s the only way we’ll get to the other side of this,” Andrews said. “I know Victorians are with me when I say, too many people are not taking this seriously. And too many people not taking this seriously means that too many other people are having to plan funerals for those they love.” Andrews also referred to Covid-19 as a “public health bushfire” when describing the resolve needed to stop its spread. Here’s a look at what you can expect for the next few weeks if you’re already in Victoria. Remember to check the Victorian government’s coronavirus website for daily updates as information is likely to change. What Happens During Stage 4 in Greater Melbourne? The purpose of the Stage 4 lockdown is to make sure those in affected areas are actually staying home, which will help curb the spread of the virus. A mandatory curfew is in effect until September 13, so you must be home between the hours of 8 p.m. and 5 a.m. unless you are an essential worker, seeking medical care or need to move to a new location for caregiving. “Where you slept last night is where you’ll need to stay for the next six weeks. There’ll be exemptions for partners who live apart and for work, if required,” Andrews said. “The Night Network will be suspended, and public transport services will be reduced during curfew hours.” New regulations regarding group sizes are also in effect. You’ll only be allowed to exercise outside your home for one hour per day, no more than three miles away, and with no more than one other person. As for shopping, only one person from your household will be allowed to venture out at a time, and you must stay within three miles of your home. Exceptions will be made for parents exercising with young children and if the nearest supermarket is farther away. As always, if you’re outside, masks must be worn and social distancing measures maintained. On August 3, Andrews assured Victorians that essential businesses like grocery stores, bottle shops, pharmacies, gas stations and post offices would remain open, while other retail shops must close unless they have contactless “click and collect” operations in place. What About Regional Victoria? Cities and locales outside the Melbourne metropolitan area are currently in a Stage 3 Stay At Home lockdown, meaning you’ll only be allowed to leave your home for essential reasons like grocery shopping, caregiving, daily exercise or going to work or school if these can’t be done from home. Sporting events, beauty parlors, museums, cinemas, and other entertainment venues are closed indefinitely, while restaurants and cafés are limited to delivery and pick-up only. Mask-wearing is mandatory if you’re in public, and social distancing means you must stay at least six feet from anyone outside your group. 
Bottom Line: Take This Lockdown Seriously Lockdowns and quarantines are hard, but if everyone follows the rules and the numbers start to go down once and for all, it will have all been worth it. “All the temporary sacrifices we make now—all the time missed with mates, those delayed visits to Mum—those sacrifices will help keep our mates and our Mums and our fellow Victorians safe,” said Andrews. “We can—we will—get through this. Apart. But together.”
d557556da6a9836203a456c0ebbcf5f8
https://www.forbes.com/sites/kaeliconforti/2020/09/27/australias-legendary-parkes-elvis-festival-to-make-its-triumphant-return-in-2022/
Australia’s Legendary Parkes Elvis Festival To Make Its Triumphant Return In 2022
Australia’s Legendary Parkes Elvis Festival To Make Its Triumphant Return In 2022 An Elvis tribute artist performs in Sydney's Central Station before boarding the "Elvis Express" train. Getty Images For the first time in its 28-year run, the Parkes Elvis Festival, celebrated in Parkes, Australia, each year on the weekend closest to January 8—in honor of Elvis Presley’s birthday—has been cancelled in an effort to keep the more than 26,000 people who attend every year safe and healthy as Australia continues its fight against Covid-19. “We’ve spent several months contingency planning, monitoring health advice, government restrictions and border closures,” said Cathy Treasure, Parkes Elvis Festival Director, in a video announcement released September 24. “The health and safety of all our stakeholders, including our local community, fans, artists, contractors and staff has always been at the top of our list and with the current situation, we cannot proceed safely with confidence for 2021.” In the video, Mayor of Parkes Shire Council, Ken Keith—dressed in his finest Elvis garb—explained how social distancing and limiting the number of participants would lead to a different type of experience, one that doesn’t embody the true spirit of the festival. “The Parkes Elvis Festival is all about thousands of fans, people coming to celebrate the King, sing along to his hits, dance and socialize throughout the friendliest town in Australia, Parkes,” said Keith. “My message for everyone listening, in the true spirit of Elvis Presley: be kind, walk a mile in the shoes of all that this will impact and let’s stick together through these tough times.” The Parkes Elvis Festival is expected to return January 5–9, 2022, with the theme of “Speedway,” after the 1968 film of the same name. Elvis tribute artists and Vegas-style showgirls wait to board the "Elvis Express" train in Sydney. Getty Images What began in 1993 as a modest event created by Bob and Anne Steel to honor Presley’s life and help draw tourists to Parkes has since evolved into an annual celebration attended by more than 26,000 people from all over the world. In January 2019, I’m proud to say I was one of them. Toward the end of my Working Holiday Visa year in Australia, I booked a cheap flight from Melbourne to Sydney and a not-so-cheap train ride from Sydney to Parkes so I could experience the festival firsthand. At the time I didn’t realize how quickly accommodation fills up in Parkes during the festival—rooms often sell out as soon as the next year’s dates are announced—and found myself scrambling to find a place to stay at the last minute. Lucky for me, a pub in the nearby town of Peak Hill, about a 30-minute drive from Parkes, was renting out rooms and there was a shuttle I could take to and from the festival each day. Imagine being on a seven-hour train full of Elvis tribute artists. That's the Elvis Express. Getty Images Having been raised listening to all things Elvis, Motown and The Beatles, I was thrilled to experience this unique festival in a part of New South Wales I hadn’t been to yet. 
Getting to Parkes takes about five hours by car or seven hours by train from Sydney, so I figured, what better way to experience one of the world’s biggest Elvis festivals than by riding there (and back) on Lachlan Valley Railway’s “Blue Suede Express,” the pricier of the two “Elvis Express” trains, on a seven-hour Elvis singalong spectacular with a bunch of people decked out in their finest Elvis outfits and 50s attire? Best of all, the package I bought gave us access to free snacks, drinks and onboard entertainment provided by some of the best Elvis tribute artists in Australia and New Zealand. I quickly found myself chatting with the people around me, meeting some who had been coming for years, others who were going for the first time and taking photos with as many people in Elvis costumes as I could. It was an extraordinary experience, spending a long, hot weekend in a charming Australian town—there was a heatwave when I was there and temperatures hovered around 110 degrees Fahrenheit—surrounded by families, men and women of all ages and cultural backgrounds who had come together to celebrate the life and music of one of the greatest musicians of all time. I have no doubt the festival will be back and better than ever. I just hope Americans will be allowed into the country by January 2022 so we can celebrate with them.
4ec8227b313490c02598fec03c468ea8
https://www.forbes.com/sites/kaeliconforti/2020/11/30/5-things-to-remember-on-your-first-solo-road-trip/
5 Things To Remember On Your First Solo Road Trip
5 Things To Remember On Your First Solo Road Trip Solo road trips can be an amazing way to see the world. getty As of this writing, the Centers for Disease Control and Prevention (CDC) is recommending a pause in travel due to the Covid-19 pandemic, so consider this story as a way to plan for future trips. As someone who’s recently done a two-month solo road trip through the Australian Outback and parts of remote Western Australia—as well as shorter solo drives to national parks in Utah and Arizona and through the Black Hills of South Dakota—I’ve learned a thing or two about driving long distances with only your thoughts and the radio for company and how to pace yourself so you can actually enjoy the ride. Here are five things to remember if you’re planning to do your own solo road trip adventure. Don’t Be Afraid To Go By Yourself The best part of doing a solo road trip is you get to call all the shots—if you feel like pulling over to investigate every silly roadside attraction along the way, go for it! Indulging in your interests and really owning your drive without having to worry about what anyone else wants to do can be extremely liberating. Play all your favorite tunes, sing at the top of your lungs and enjoy every minute of it. This is your adventure after all. Plan Your Route But Allow Time For Extra Stops Yes, you want to get from Point A to Point B in a certain amount of time, but depending on your route, there’s likely still a lot to see and do along the way. Give yourself time to check out any scenic lookout points you pass or visit places recommended by others as you go. Mapping out how many hours you expect to drive each day so you have a basic timeframe for your anticipated arrival in each place is a good idea, but know it’s not a big deal if you’re a few hours late. Download Maps, Apps And Music Ahead Of Time Service isn’t as reliable when you’re driving through remote areas, so have a plan. Download any maps ahead of time via Google Maps or Maps.Me, one of my favorite apps to use in the middle of nowhere, or go old school and bring along a paper map just in case—check off the places you’ve been to and keep it as a souvenir. Bring along some of your favorite music or download playlists via Spotify before you go. If you haven’t already, sign up for Spotify Premium and use the free trial period during your road trip. If you’re driving around the U.S., download apps like Gas Buddy to help locate the cheapest prices at gas stations near you—Fuel Map Australia and Gaspy were helpful on road trips in Australia and New Zealand. Be As Prepared As Possible First, you’ll need to decide whether you want to take your own vehicle or rent one, as you’re going to be adding a considerable amount of mileage and wear and tear. If you’re using your own car, make sure your oil is changed, the tires are in good shape and maybe get a tune-up before you go. If you’re going to be driving through extremely remote places, consider getting a satellite phone just in case. Pack plenty of water for you and your car, bring extra snacks and put some blankets in the trunk if you’re traveling during colder weather. It never hurts to be prepared for any situation that might arise. 
As always when traveling solo, let others know where you are heading and approximately when you should be there. Be Aware Of Covid-19 Travel Restrictions Now is not a good time to be doing a road trip, but if you absolutely insist on traveling during a global pandemic, at least check to see if the states you’re driving to have any travel restrictions in place or require you to be tested or quarantined upon arrival, as health and safety rules vary by state. Also remember to check if any testing or quarantining will be required in your home state whenever you return.
c99de5a87f5850fade3f004e24b26e7b
https://www.forbes.com/sites/kaeliconforti/2021/01/13/study-reveals-the-priciest-and-most-affordable-ski-resorts-trends-for-2021/
Study Reveals The Priciest And Most Affordable Ski Resorts, Trends For 2021
Study Reveals The Priciest And Most Affordable Ski Resorts, Trends For 2021 The study found Deer Valley to be the most expensive ski resort in the U.S. this year. getty A study by HomeToGo, a popular vacation rental search website, recently revealed the most expensive and affordable ski resorts in the U.S., as well as several noteworthy trends for the 2020-2021 ski season. If you are planning to hit the slopes despite the CDC recommending a pause in travel due to the Covid-19 pandemic, pay attention to government health and safety regulations in the places you’re visiting and rest assured that resorts are doing all they can to keep you safe. Here’s a look at the 10 priciest ski resorts in the U.S., arranged from highest to lowest (accommodation rates listed are per person per night): 1. Deer Valley in Utah, with guests spending $515.28 per person per night ($286.28 for accommodation, plus a $229 lift ticket) 2. Beaver Creek in Colorado, with guests spending $512.65 per person per night ($313.65 for accommodation, plus a $199 lift ticket) 3. Aspen Snowmass in Colorado, with guests spending $489.20 per person per night ($295.20 for accommodation, plus a $194 lift ticket) 4. Vail in Colorado, with guests spending $396.11 per person per night ($197.11 for accommodation, plus a $199 lift ticket) 5. Alta Ski Area in Utah, with guests spending $383.93 per person per night ($242.93 for accommodation, plus a $141 lift ticket) 6. Telluride in Colorado, with guests spending $351.96 per person per night ($182.96 for accommodation, plus a $169 lift ticket) 7. Steamboat Springs in Colorado, with guests spending $337.24 per person per night ($112.24 for accommodation, plus a $225 lift ticket) 8. Park City in Utah, with guests spending $327.22 per person per night ($148.22 for accommodation, plus a $179 lift ticket) 9. Big Sky in Montana, with guests spending $325.53 per person per night ($144.53 for accommodation, plus a $181 lift ticket) 10. Stratton Mountain in Vermont, with guests spending $323.58 per person per night ($179.58 for accommodation, plus a $144 lift ticket) Here’s a look at the 10 most affordable ski resorts in the U.S., arranged from lowest to highest (accommodation rates listed are per person per night): 1. Titus Mountain in New York, with guests spending $91.58 per person per night ($52.58 for accommodation, plus a $39 lift ticket) 2. Bridger Bowl Ski Area in Montana, with guests spending $111.28 per person per night ($48.28 for accommodation, plus a $63 lift ticket) 3. Mission Ridge Ski Area in Washington, with guests spending $115.13 per person per night ($38.13 for accommodation, plus a $77 lift ticket) 4. Bolton Valley in Vermont, with guests spending $118.37 per person per night ($19.37 for accommodation, plus a $99 lift ticket) 5. Whiteface Mountain in New York, with guests spending $141.80 per person per night ($81.80 for accommodation, plus a $60 lift ticket) 6. Sugarloaf Mountain in Maine, with guests spending $144.94 per person per night ($99.94 for accommodation, plus a $45 lift ticket) 7. Mount Bohemia in Michigan, with guests spending $145.27 per person per night ($60.27 for accommodation, plus an $85 lift ticket) 8. Mad River Glen in Vermont, with guests spending $145.81 per person per night ($53.81 for accommodation, plus a $92 lift ticket) 9. 
Mad River Mountain in Ohio, with guests spending $153.32 per person per night ($99.32 for accommodation, plus a $54 lift ticket) 10. Mount Baker in Oregon, with guests spending $154.07 per person per night ($85.49 for accommodation, plus a $68.58 lift ticket) The study also highlighted several trends regarding guests’ spending habits this year, including a preference for private cabins within a half mile of ski lifts, which are being reserved almost twice as often as accommodations farther away. Cabin bookings in general have risen by 345% in 2021, while hotel bookings are down by 7% and vacation rental bookings have decreased by 12% compared to the 2019-2020 season. Due to Covid-19 travel restrictions and the closure of bars and restaurants in several popular U.S. ski towns, there’s been a rise in demand for self-catering set-ups, with guests seeking homey amenities like fireplaces, Wi-Fi access, balconies and other outdoor spaces, fully equipped kitchens, and child-friendly extras like cribs and high chairs. Visitors are also spending less time in ski resorts this year, with the average reservation lasting two or three nights instead of the usual three to four nights, likely due to ever-changing travel policies and what’s going on locally with the pandemic.
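For readers who want to see how the headline figures combine, the per-person, per-night totals in the rankings above are simply the accommodation rate plus a one-day lift ticket. The sketch below recomputes and orders a handful of the resorts quoted in the article; it is a worked check only, and the selection of resorts is arbitrary.

```python
# Quick check of how the study's headline figures combine:
# per-person, per-night cost = accommodation + one-day lift ticket.
# Only a few resorts from the article are included, for illustration.

resorts = {
    # name: (accommodation per person per night, lift ticket)
    "Deer Valley, UT":    (286.28, 229.00),
    "Beaver Creek, CO":   (313.65, 199.00),
    "Titus Mountain, NY": (52.58, 39.00),
    "Bolton Valley, VT":  (19.37, 99.00),
}

totals = {name: round(stay + lift, 2) for name, (stay, lift) in resorts.items()}

for name, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: ${total:.2f} per person per night")
```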
b0da0d6dedc78deacd44c45fe7ccff21
https://www.forbes.com/sites/kaifalkenberg/2019/05/29/settlement-of-suit-over-tom-brady-photo-leaves-major-online-copyright-issue-unresolved/
Settlement Of Suit Over Tom Brady Photo Leaves Major Online Copyright Issue Unresolved
Settlement Of Suit Over Tom Brady Photo Leaves Major Online Copyright Issue Unresolved The Daniel Patrick Moynihan Courthouse in New York. (AP Photo/Mark Lennihan) ASSOCIATED PRESS Yesterday marked the conclusion of a major copyright suit raising serious questions about the way media companies do business online. The case was voluntarily dismissed following a settlement earlier this week with Time Inc. and with consent from the last remaining defendants, Oath (formerly Yahoo!) and Heavy.com. In 2016, Justin Goldman snapped a photo of Tom Brady and Celtics general manager Danny Ainge on the streets of East Hampton, New York. Goldman uploaded the photo to his Snapchat story, where it then went viral, raising speculation that Brady was helping the Celtics recruit basketball player Kevin Durant. Users copied the photo from Snapchat and reposted it to Reddit and other platforms, and eventually to Twitter. Media outlets then embedded the tweets with the image into articles focused on Brady’s role in sealing the deal to get Durant. Goldman filed a copyright suit against the outlets that used his photo, which in addition to Time Inc., Oath and Heavy.com included Breitbart, Vox Media, Gannett Company, Herald Media, Boston Globe Media Partners and New England Sports Network. The suit claimed the sites infringed on his copyright, violating his exclusive right to display his photo. The media defendants, relying on a landmark California case, argued there was no infringement since the image wasn’t hosted on any of their sites; it was hosted on Twitter. But in a surprising decision issued in February 2018, a federal district court judge in the Southern District of New York rejected that argument. “Having carefully considered the embedding issue,” Judge Katherine Forrest concluded “that when defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield from this result.” The case turns on the distinction between hosting and embedding an image. To embed an image, a media outlet adds an “embed code” that directs a user’s browser to retrieve the image from the third-party server where it is hosted. The embedded image then hyperlinks to that third-party website. Most social media sites like Facebook, Twitter and YouTube provide code that media outlets can copy to enable embedding on their own webpages. In a seminal California case, Perfect 10, Inc. v. Amazon.com, the Ninth Circuit adopted what’s become known as the “Server Test.” Under that rule, liability for infringement is based on where the image is hosted. If it’s stored on a third-party server and accessed by “in-line linking,” which works like embedding, then there’s no infringement. It’s a rule that media companies had viewed as settled law for over a decade, and they expected the judge to apply the same test here. But Judge Forrest rejected that approach. When a user visits a website with an embedded tweet, she noted, the user sees a mix of text and photos that’s seamlessly integrated, even if the underlying images are hosted elsewhere. 
“The plain language of the Copyright Act, the legislative history undergirding its enactment and subsequent Supreme Court jurisprudence,” she wrote, “provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have ‘displayed’ a work within the meaning of the Copyright Act.” The media defendants sought an immediate appeal of the ruling, warning that it would “cause a tremendous chilling effect on the core functionality of the web.” Judge Forrest gave her consent for that appeal, given it’s a “high-impact copyright case” where opinions might differ. But the Second Circuit declined to take on the case, finding an interlocutory appeal “unwarranted.” So last summer, the case returned to the district court, where it was soon headed for trial. As discovery proceeded into this spring, several of the defendants opted to settle, with the latest defendant, Time Inc., agreeing to a settlement earlier this week. With just two media defendants remaining, Goldman’s counsel wrote in a May 9 letter to the judge that the “cons” of continuing the case outweighed the “pros” and that it was “no longer worth litigating.” Yesterday, with consent from the last two defendants, Oath and Heavy.com, the suit finally came to an end and was voluntarily dismissed. That means there’ll be no appeal in this case and the legal uncertainty it created will continue—but not for too long. A number of cases are now pending in the district court that present the same “embed” legal issue. In the interim, media sites are forewarned that embedding remains a risky proposition.
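For readers unfamiliar with the mechanics at issue, here is a rough sketch of the difference between hosting and embedding. The outlet domain, image filename and tweet ID are hypothetical, and the blockquote/script pattern shown is the generic embed markup Twitter publishes for any tweet, not anything from the case record; the sketch only illustrates where the image bytes live in each approach.

```python
# Illustrative only: the publisher URL, image path and tweet ID below are hypothetical.

HOSTED = """
<!-- Hosting: the outlet copies the photo to its own server and serves it itself. -->
<img src="https://cdn.example-outlet.com/photos/brady-ainge.jpg" alt="Tom Brady and Danny Ainge">
"""

EMBEDDED = """
<!-- Embedding: the article only contains markup pointing at the tweet; the reader's
     browser fetches the image directly from Twitter's servers, not the outlet's. -->
<blockquote class="twitter-tweet">
  <a href="https://twitter.com/example_user/status/123456789"></a>
</blockquote>
<script async src="https://platform.twitter.com/widgets.js"></script>
"""

# Under the Ninth Circuit's Server Test, only the hosted version implicates the display
# right, because the copy lives on the outlet's own server. Judge Forrest's ruling
# rejected that distinction: from the reader's perspective, both pages display the photo.
print(HOSTED)
print(EMBEDDED)
```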
301ff8e4e7ab02d98ef97d39db7c5352
https://www.forbes.com/sites/kaipetainen/2013/03/06/the-big-business-of-wycliffe-bible-translation-and-why-they-might-lose-money/
The Big Business Of Wycliffe Bible Translation And Why They Might Lose Money
The Big Business Of Wycliffe Bible Translation And Why They Might Lose Money Bible translation is big business, and if you stop to look at the tax forms, the results might surprise you. The business of selling Christianity ("winning hearts to God") in non-profit organizations is huge. If we look at Public Charities as a whole in the USA, the business of religion (not including churches) has 25,755 organizations, $29.5 billion in assets, and they pull in $18.3 billion in revenue each year. You can see this information at the National Center for Charitable Statistics (NCCS) website. If we segment that down to Christianity (code X2), then they have 18,766 organizations, $13.1 billion in assets and revenue of $9.2 billion each year. For some perspective on two popular agencies, the Billy Graham Evangelistic Association gets $153.5 million each year, and Focus on the Family (the 3rd largest Media organization) gets $113 million each year. Now, this isn't a theological argument that Bible translation isn't important; rather, these questions must be asked: Are they doing what you think they are doing? Is the organization efficient? And what does the organization actually look like? In business, I've heard that people can lie to God and they can lie to their fellow man, but they cannot lie to the IRS or the SEC. After all, people can file complaints with the SEC and they can complain to the IRS as well, and sometimes they get money from 'whistleblowing'. To be a bit cynical, when it comes to the pocket-book, sometimes the SEC and the IRS can get to it before God does. God forgives, but the SEC and the IRS might not. So if someone was telling me that they print Bibles and the money was spent on translation, then the IRS forms should reflect that idea. Combining both ideas, I prayed to God and looked at the IRS forms. According to Forbes, Wycliffe Bible Translators is #70 on Forbes' list of the 100 Largest US Charities, with total revenue of $167 million. According to the 990 forms that are posted on the Wycliffe website, "Wycliffe Bible Translators, Inc." has gross receipts of $152 million and 'Other Salaries and Wages' are listed at $108,158,283. The business of translating Bibles requires over $100 million in salaries each year? These particular 990 forms weren't available on the NCCS website, and Wycliffe states that it doesn't need to disclose or file the 990 forms: For the purposes of tax regulation, the Internal Revenue Service recognizes Wycliffe Bible Translators as a 501(c)(3) organization and a church. Based on IRS Regulation 1.6033-2(g)(1)(iv), Wycliffe is therefore neither required to file Form 990 nor to disclose it under the recently enacted public disclosure rules. Wycliffe, however, being committed to transparency and accountability, voluntarily completes Form 990 each year, even though it is not required to file the form with the IRS. Although Wycliffe states that it doesn't have to file a 990 Form, I was able to find some 990 forms on the NCCS website. A simple search for Wycliffe on NCCS gives a listing of active organizations. The Wycliffe church is located on this list, and it is listed as 'Wycliffe Bible Translators Inc'. Sure enough, it is a 501(c)(3), X20 (Christian), 10 (Church). So, first and foremost, Wycliffe Bible Translators is a church in the land of amusement parks and high net worth seniors – Orlando, Florida. What else do they do? 
Along with Wycliffe's church in Florida, there is also a golf club named Wycliffe in Florida – but they are not affiliated with one another. In Wellington, Florida, the 'Wycliffe Golf and Country Club Inc.' has assets of $53 million and pulls in revenue of $12.6 million each year. If I search for 'golf club' and sort by assets, this is the 2nd largest charitable golf and country club in the USA. The Nanea Golf Club in Hawaii has $53 million in assets but less revenue ($4.7 million). The Wycliffe Golf Club should not be confused with the Wycliffe Bible Translators. I called the golf club and they told me that they have absolutely no affiliation with the Bible translation group. Although Wycliffe Translators bluntly state that they don't have to file or disclose a 990 form, they have an IRS 990 form on the NCCS website. Do they have to file that form and disclose it (which would appear to contradict what they said earlier)? The form is listed under the name 'Wycliffe Bible Translators International Inc' and it's located in Dallas, Texas. The organization is listed as a 501(c)(3) – a public charity; an X-20 (Christianity); and as an 'Organization with a substantial portion of support from a government unit or the general public'. According to that, it's not a church, and my limited understanding of 990 forms tells me that they need to show the public that form. Although Wycliffe has a website on 'Our Accountability', this particular 990 form is not listed on their website. Some might argue that by showing only some of the 990 forms, and not all of them, Wycliffe is exhibiting a form of 'selective disclosure' of its tax forms on its website. Based on IRS tax forms, what does Wycliffe Bible Translators International Inc. do? Under total expenses (Line 18), they list $5,032,348 in expenses. They explain what some of this expense is used for in Part III, sections 4a and 4b. $3,890,463, or 77% of the expenses, goes towards advocacy, training and building the Wycliffe network – as they have 1500 language programs and 5500 staff members. For that portion, they state: Acting as advocate for language communities and the vital role of Bible translation among international mission and Church networks, serving as catalyst for interorganizational cooperation in Bible translation and Scripture access movements Building Capacity in Member and Partner Organizations (includes board training, leadership development, and the other types of capacity building) Wycliffe International's over 100 member and partner organizations have some form of involvement in almost 1500 language programs, and are involved in 68% of all language development and Bible translation programs happening worldwide, involving more than 5500 staff At Wycliffe, according to the IRS forms, Bible translation accounts for $518,808 in expenses. That represents only 10% of the total expenses. For that portion in the 990 form, they state: Language Programs - Wycliffe oversees, provides services for and coordinates funding for approximately 20 language programs (which involves language development, literacy and Bible translations) Compare Wycliffe's numbers to those of Pioneer Bible Translators (also in Texas). Pioneer is comparable to Wycliffe ($5,032,348 in expenses) in that it has $5,798,180 in total expenses. Wycliffe's Bible translation portion in the IRS document lists $518,808 (10% of its total expenses) going towards language programs.  
Pioneer, on the other hand, lists $4,601,281 in Bible translation, or 79% of its total expenses. At Pioneer, another 7% goes towards 'Recruitment' and 5% goes towards 'Missionary Care'. As stated by Pioneer for their translation portion: Bible translation, literacy, and related mobilization and discipleship activities PBT teams around the world are ministering to 19 million people in 53 different language projects During 2011, PBT personnel processed over 175,000 verses of Scripture from one stage of the translation process to the next This includes rough drafts, exegetical check, comprehension check, peer check, consultant check, published, and revised Numerous individual books of the Bible were published, along with hymnbooks, Scripture calendars, Bible dictionaries, and testimonies Audio recordings and DVD's in local languages were also produced In the Wycliffe 990 document, the 'Public Support Percentage' is listed at 98.552% of their total support. For example, in 2010 they had $4,963,275 in total public support. From 2006 to 2010, they received a total of $22,229,726 in public support. I should note that in 2006 they received $2,335,696 'from admissions, merchandise sold, or facilities furnished'. In 2007, this number dropped from $2,335,696 to $3,902. If we were to divide the expenses even further, then what would it look like? As a personal opinion, these items raise an interesting question that goes beyond the scope of this article, and this idea is perhaps best reserved for theological debates. Here's the question – Wycliffe is translating 20 Bibles and they have 5,500 staff members. As they have a lot of staff, and they received $22 million in support over 5 years – is that an efficient use of the resources and money? And when you have that much money and that much in staff resources, then why does it take such a long time to translate a Bible? I've heard that Bible translation is a hard process, but they have 5,500 staff members! According to Pioneer, they "processed 175,000 verses of Scripture" in 1 year -- so if Pioneer can translate that much, it shouldn't take Wycliffe 15 years to translate a New Testament for one language group. Let's assume that the donated money goes to Wycliffe and is allocated from there. If the IRS forms were used as a representation of how that donation is spent, what would they tell us? Let's look at how the expenses are allocated and use that as a proxy for where the money goes. The biggest expense at Wycliffe is 'Travel' at $2,005,938; followed by 'Grants and other assistance to governments, organizations and individuals outside the US' at $902,430, and 'Other' at $635,812. As a comparison, Pioneer spends only $913,913 on Travel, $109,421 to those outside of the USA, and $61,707 on 'Other'. Compared to Pioneer, Wycliffe spends 119% more on Travel, 725% more on those outside the USA and 930% more in the category of 'Other'. If we were to put Wycliffe's expenses into a pie chart, it would look like this (chart: Wycliffe Expenses). And if we were to extend that same idea to a donation of $10,000, it could lead us to this proxy (chart: If the 990 Form Is a Proxy of Where the Money Goes). If Wycliffe is in the business of translating Bibles, then how much do they actually spend on 'Printing and Publications'? According to the tax documents, Wycliffe gets $5,059,564 each year but spends only $88,988 on 'Printing & Publications'.  
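To make the comparison above easy to reproduce: the "X% more" figures are simply the ratio of the two organizations' 990 line items minus one, and the $10,000 donation chart appears to scale each expense line by its share of total expenses. Here is a minimal sketch in Python; the dollar figures are the line items quoted above, and the three-category breakdown is abbreviated for illustration rather than a complete restatement of either 990.

# Rough sketch of the arithmetic above: "X% more" is (Wycliffe / Pioneer - 1),
# and the $10,000 donation proxy scales each line by its share of total expenses.
# Figures are the 990 line items quoted in the article; categories are abbreviated.

wycliffe = {"Travel": 2_005_938, "Grants outside US": 902_430, "Other": 635_812}
pioneer  = {"Travel": 913_913,  "Grants outside US": 109_421, "Other": 61_707}

for category in wycliffe:
    pct_more = (wycliffe[category] / pioneer[category] - 1) * 100
    print(f"{category}: Wycliffe spends {pct_more:.0f}% more than Pioneer")

# Donation proxy: allocate a $10,000 gift in proportion to expense shares.
total_expenses = 5_032_348          # Wycliffe International's total expenses (Line 18)
donation = 10_000
for category, amount in wycliffe.items():
    print(f"${donation * amount / total_expenses:,.0f} of the donation -> {category}")

Run as written, this reproduces the 119%, 725% and 930% figures quoted above; the donation lines are only a proportional reading of the same 990 data, not a statement about how Wycliffe actually routes any individual gift.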
For a business where missionaries request money for Bible translation and the printing of Bibles, $88,988 is a very small amount compared to the total expenses. How little Wycliffe spends on printing is illustrated in the March 1st, 2013 Albany Democrat-Herald article by Amanda Robbins, "African literacy project is no joke for Ker family": Ker and his wife had been working with Wycliffe Bible Translators for 17 years, where they translated the Bible for the people of Mozambique. "I thought it was a pretty clear process," Ker said. "I work with Mozambicans to translate the Bible, and then when we're finished anyone who wants one can have a Bible of their own." Now, he knows that's not usually how the process works. The Bibles get translated, but then the Bible agencies can take years to get them printed, and usually in small numbers. Once they are printed, the Bibles aren't always distributed in the area where the language is spoken. Wycliffe is also facing criticism over how it translates the Bible, and in April it could suffer a blow to its finances as a result. Tom Breen's April 26th, 2012 HuffPost article, "Wycliffe Bible Translation Criticized Over Trinity Word Substitution In Muslim Countries", describes the current debate on translation: Last month, Wycliffe agreed to an independent review of its policies by the World Evangelical Alliance, which plans to appoint a panel of experts to determine whether Wycliffe and affiliated groups are improperly replacing the terms "Son of God" and "God the Father." The decision comes after a growing number of critics decried the materials as attempts to avoid controversy that fundamentally altered Christian theology. The dispute moved from Internet forums and online petitions to concern from large Christian bodies. The Assemblies of God – one of the largest Pentecostal fellowships, with more than 60 million members in affiliated churches worldwide – announced it would review its longstanding relationship with Wycliffe. On Wycliffe's website, they state this: we have asked a respected third party—the World Evangelical Alliance (WEA)—to review our practices. We anticipate their response by April 2013. Those who hold various positions in this controversy over how best to translate the terms referencing the Son of God and God the Father agree that getting it right is critical. The point being contested (in a debate that has gone on in various forms for centuries) is whether accuracy of meaning is more important or less important than the exactness of a word. The Assemblies of God has supported Wycliffe for many years, but it too is questioning Wycliffe's actions with regard to translation. As stated in an Assemblies of God paper on their website: Representatives from Assemblies of God World Missions and Assemblies of God U.S. Missions expressed disagreement with Wycliffe Bible Translators and Summer Institute of Linguistics leadership in meetings held in August 2011 and November 2011. An article entitled "Essential Scriptural Integrity" appeared in the March 4, 2012 edition of the Pentecostal Evangel. 
Author Randy Hurst, communications director of Assemblies of God World Missions, states: "AGWM missionary leaders, missiologists and scholars have met twice with leaders of Wycliffe and its partner ministry, Summer Institute of Linguistics (SIL), to deal with the increasing disagreement concerning Bible translation practices." On the Assemblies of God website, this article by Randy Hurst notes that Wycliffe may face some serious consequences: Because of Islamic beliefs concerning the Trinity and Jesus as the Son of God, some translations designed for Muslim readers remove familial terms, such as Father, Son, Son of God, Son of the Living God, and Son of Man from the text. In their place, they use alternative terms, such as, "Beloved of God," a familiar Arabic Muslim characterization often used when referring to Muhammad. AGWM has established a 4-month review period until May 15, at which time it will make a final decision concerning its ongoing relationship with Wycliffe/SIL. The consequences could include asking AG personnel to leave Wycliffe/SIL, recommending that AG churches withdraw financial support for Wycliffe/SIL personnel, and engaging in translation ministry with other organizations holding a position on Bible interpretation compatible with AGWM convictions. Bible translation is big business, and Wycliffe is a big business in the field of Bible translation. April 2013 could be an important month for Wycliffe. [Correction: originally I stated May 15th will be a big date for Wycliffe.] If the Assemblies of God and other organizations cut off Wycliffe, it will have an impact on Wycliffe's bottom line. Perhaps, then, it's no surprise that Wycliffe is working hard on its fundraising efforts. Back in February, Wycliffe announced that it had hired a 'Vice President for Innovative Strategies'. And on March 4th at Philanthropy.com, it announced three jobs for 'Major Gifts Officers' and two for 'Gift Planning Advisors'. Based on the job description, it sounds like Wycliffe is going after big money from high net worth donors. As the job description states: Responsibilities include contacting, cultivation, and solicitation of gifts from major donors in these areas. Requires at least five years of recent and increasingly productive face-to-face fund-raising experience, including two years with high net worth donors. Kai Petainen's views are his alone, and do not reflect the views of the Ross School of Business or the University of Michigan. Kai looks at companies from a quantitative point of view -- what does the data show? Kai teaches a class on quant screening, F334 -- Applied Quant/Value Portfolio Management, at the Ross School of Business. Kai is an MFolio master at Marketocracy, and is featured in Matthew Schifrin's book, "The Warren Buffetts Next Door".
85a22186b17f46279d7cc4d90b7f7099
https://www.forbes.com/sites/kaipetainen/2014/04/14/students-compete-in-online-macro-trading-competition-and-win-an-internship-at-fortress-capital/
Students Compete In Online Macro Trading Competition And Win An Internship At Fortress
Students Compete In Online Macro Trading Competition And Win An Internship At Fortress Imagine competing in an online trading competition as a student and winning a summer internship based on the results. That's exactly what happened to two students, one from Queen's University and another from the University of Pennsylvania, who won the Fortress Global Macro University Challenge. Over the past six months, 1,250 students from 22 schools competed in an online trading competition at UpgradeCapital.com. Two students won internships at Fortress Investment Group and the top performers split $15,000. The competition wasn't limited to stocks; it was open to ETFs and currencies as well. An overview of the participants. (Graphic courtesy of Upgrade Capital) I chatted with Alexey Loganchuk at Upgrade Capital about the competition and here is what he had to say: Kai Petainen: What is Upgrade Capital's role in the competition? Alexey Loganchuk: The Fortress Challenge was hosted on Upgrade Capital's proprietary trading and communications platform. By analyzing student performance and activity, Upgrade Capital has been able to decide which competition participants displayed sufficient dedication and promise to be granted access to Fortress portfolio managers and traders. Petainen: How is the Fortress Challenge different from other online stock competitions? Loganchuk: There are three major elements that make the Fortress Challenge unique: The Fortress Challenge places great emphasis on education. It is the only cross-university trading competition run in direct and ongoing collaboration with the senior risk-takers at a leading investment firm. As the Challenge progresses, top performers are provided with ongoing guidance and mentorship by Fortress portfolio managers, traders, and quants. In short, students are offered an opportunity not only to prove themselves but also to become better investors. The overwhelming majority of major cross-university competitions place a heavy emphasis on bottom-up analysis and stock picking. In this competition, Fortress challenges students to make sense of the bigger picture by encouraging them to engage in top-down analysis and to build macro-driven investment strategies. The identities of top performers will be made known to firms other than Fortress. It makes little sense for each firm to have its own competition – both job applicants and employers benefit if all performance records and investment research are gathered in a single place and analyzed in a consistent manner. Petainen: One common criticism of virtual online competitions is that a competitor can win based on one 'lucky' stock pick, so it's hard to differentiate between a lucky pick and a skillful investor. Did you take this into consideration, and if so, how? Loganchuk: We treat quantitative measures of performance as merely a starting point for assessing students. After sorting all competition participants on the basis of Risk Adjusted Alpha, a metric which strips systematic risk out of returns and factors in risk taken, we work with Fortress to go through individual competitors' detailed trading records and trade commentary for the top 50 performers. There is no substitute for a manual assessment by an experienced investor. We also run multiple programs outside the trading competition to give students more ways to distinguish themselves. 
This year they included the Ben Graham Fellowship, a stock pitch competition run in partnership with the Value Investing Congress; the Fortress Fellowship, a macro trade pitch competition in which the winners gained ongoing access to Fortress portfolio managers; and the Quant Teams Program, where students were challenged to develop their own market tracking and risk analysis algorithms. Petainen: When will you run the competition again? And how can students sign up for the competition? Loganchuk: The next Fortress Challenge will be launched in October 2014. Further information will be posted on the Upgrade Capital website; students wishing to sign up can already do so using this form. Leaders of student investment groups from 17 universities met at Fortress in August 2013. Photo courtesy of Upgrade Capital. One of the winners was Arun Narasimhan from Queen's University. Queen's University has a student-managed Canadian equity fund that manages $700,000, and it recently received another $500,000 from Mackenzie Investments. Kai Petainen: Can you talk a bit about your strategy? Arun Narasimhan: I strive to capture macro trends. Being a student, the most challenging aspect of macro trading is staying on top of market developments. To filter market activity, I run a few models that fall into trend-following or factor-model buckets. These signals are often used to guide my macro and sentiment research. Petainen: What was your best trade? Narasimhan: Shorting the Canadian dollar in mid-October comes to mind. For the first time ever, there was a strong confluence of agreement between all my models and macro fundamentals. Meanwhile, Canadian front-end rates were pricing in rate hikes in anticipation that the Bank of Canada would tighten ahead of the Fed. The Loonie subsequently fell 10% against the US dollar and it was the most profitable G10 major currency trade of the past two quarters. Petainen: What was your worst trade? Narasimhan: [I bought] USDTRY just hours ahead of the interest rate decision in January. Locked between a large current account deficit, eroding central bank reserves, a political crisis and a broad emerging markets sell-off, I thought a rate hike would not be enough to stop the lira's depreciation. The benchmark rate was then raised by 550 basis points to 10% - the lira soared. I changed course by using the South African rand to fund a long position in the lira, which helped recover some of the prior losses. The experience taught me to resist the temptation to trade ahead of key economic numbers and monetary policy decisions in markets where the local central banks have no qualms attempting to defy economic gravity. Petainen: Any other thoughts? Narasimhan: As a markets junkie, I benefited tremendously from the feedback provided by the portfolio managers at Fortress. Not only has the past academic year marked an increase in my macro awareness, but it has also altered the way I think about markets, as I now spend most of my time formulating a viable variant view. Alexey Loganchuk has sparked a revolution in the forward-looking hedge fund industry whereby precise skill evaluation is starting to take precedence over anecdotal factors. I couldn't be more grateful for the opportunity to be part of this paradigm shift. -- Kai Petainen's views on the market and stocks are his alone, and do not reflect the views of the Ross School of Business or the University of Michigan. 
Kai teaches a class on quant screening, F334 -- Applied Quant/Value Portfolio Management, at the Ross School of Business.
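A side note for readers on the 'Risk Adjusted Alpha' metric Loganchuk mentions in the interview: the article does not spell out the formula, so the construction below is purely an illustrative assumption, not Upgrade Capital's actual methodology. One common approach is to regress each student's returns on a benchmark to strip out market exposure, then scale the leftover alpha by the risk the student actually took. A minimal sketch in Python:

# Illustrative sketch only -- NOT Upgrade Capital's actual formula, which the
# article does not disclose. Regress a student's daily returns on a benchmark,
# take the intercept (alpha) as skill net of market exposure, and scale it by
# the volatility of the student's returns to account for risk taken.
import numpy as np

def risk_adjusted_alpha(student_returns, benchmark_returns):
    student = np.asarray(student_returns, dtype=float)
    bench = np.asarray(benchmark_returns, dtype=float)
    beta, alpha = np.polyfit(bench, student, 1)   # slope = beta, intercept = alpha
    return alpha / student.std(ddof=1)            # scale alpha by risk taken

# Hypothetical example: rank three students against the same benchmark series.
rng = np.random.default_rng(0)
benchmark = rng.normal(0.0003, 0.01, 120)
profiles = {"A": (0.0010, 1.2), "B": (0.0004, 0.8), "C": (-0.0002, 1.5)}
students = {name: benchmark * b + rng.normal(a, 0.012, 120)
            for name, (a, b) in profiles.items()}
ranking = sorted(students, key=lambda s: risk_adjusted_alpha(students[s], benchmark),
                 reverse=True)
print(ranking)

A ranking like this would only be the quantitative first pass; as Loganchuk says above, the detailed trading records and trade commentary of the top performers still get a manual review.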
6b11fd00f6c775d05360879479ae5bcf
https://www.forbes.com/sites/kaipetainen/2016/10/03/binghamton-students-short-gatx-and-take-the-top-prize-at-ross-stock-pitch-competition/
Binghamton Students Short GATX And Take The Top Prize At Ross Stock Pitch Competition
Binghamton Students Short GATX And Take The Top Prize At Ross Stock Pitch Competition The Ross School of Business hosts an undergraduate stock pitch competition each year, and students come from across the country to compete for a cash prize. This year, Robert Pim, Jonathan Heller, Brandon Fine, and Ronnie Sanon from Binghamton University came in first, winning the final round with a pitch to short GATX Corporation. The competition was held on October 2nd, 2016, and students were allowed to pitch stocks with a market cap above $500 million, a daily trading volume of at least 1,000,000 shares, and a stock price above $2. The pitches went through two rounds, and the final round had judges from Janus Capital, GCP Capital Partners and Excurro Capital Management. For the first round, judges came from Apollo Global Management, Och-Ziff Capital Management, Matrix Capital Management, Davidson Kempner Capital Management, Citigroup, Allen & Company, Perella Weinberg Partners, Ares Credit Group, TPG Capital, Bridgewater Associates, and Garda Capital Partners. The keynote speaker for the event was John Griffin from Blue Ridge Capital. The event itself was organized by the Michigan Interactive Investments club and sponsored by Mainstay Capital Management, Grosvenor, Stu Hendel, Janus, Flow Traders, Investor's Business Daily and Barron's. Disclosure: Kai works at the Ross School of Business. Robert Pim, Jonathan Heller, Brandon Fine, and Ronnie Sanon from Binghamton University -- Photo by Kai Petainen I had a chat with Pim, Heller, Fine and Sanon and this is what they had to say: Binghamton runs a student-managed fund. Could you tell me more about it? Our school does have a student-managed fund, the Binghamton University Investment Fund, which was actually just reorganized from a class into a program this year and manages almost $300,000. Under the new program structure, nine upperclassmen senior analysts cover nine sectors of the S&P, and are supported by two or three underclassmen junior analysts in researching stocks and pitching investment recommendations to the fund members. The goal of this new program is to further streamline practical finance education for students outside of the classroom, and hopefully continue to build upon the recent success Binghamton University has seen in placing students into front-office finance. Three members of our team are part of the fund, two of whom are senior analysts and portfolio managers, and one of whom is a junior analyst. The fund is long-only at the moment, and so we cannot pitch GATX [for the fund]. What stock did you pitch and what was your recommendation? We pitched GATX Corporation (NYSE: GATX), which is the only publicly traded pure railcar leasing company. GATX owns railcars and leases them out to railroads and companies that ship their product by freight rail. During the boom in commodities production and prices leading up to 2014, railcar manufacturers responded with record production. However, by the time these railcars were manufactured two or three years later, the demand that they were produced to meet had evaporated as a result of the decline in the prices of commodities such as coal, oil, natural gas, and many metals. This has caused the railcar market to experience an imbalance of low demand and high supply, causing railcar prices, and in turn lease rates, to fall significantly. This has impeded and will continue to impede the profitability of GATX in the coming quarters. 
What catalysts might move the stock? The supply/demand imbalance has caused GATX's Lease Price Index (LPI), the primary metric used to gauge their pricing, to fall negative for the first time since the financial crisis.  Another major metric for the company is their renewal success rate, which represents how successful they are at retaining customers.  This number has fallen by almost 20% over the past few quarters, and we are expecting that this will begin to show in the utilization rate, another key metric, in the coming quarters.  The utilization rate reflects the proportion of GATX's fleet that is being leased out and generating revenue, and cyclically when LPI and renewal success rate have declined, the utilization rate comes under pressure as it is harder to find people to lease railcars to.  We believe a decline in the utilization rate over the next few quarters will have a significant negative impact upon GATX's bottom line, and believe that the risk of utilization rate decline has not been fully priced in. Do you see any risks to your investment recommendation? GATX has a very proactive management team, and they have been able to preserve the utilization rate given that a historically small percentage of the fleet is up for renewal in 2016 (10%). However, adding next year's lease expirations, by the end of next year, almost a quarter of the company's leases will have been made at extremely low rates, and we believe this is what the market is not fully appreciating.  Additionally, commodities markets could turn around and demand (as measured by carload traffic) could rebound, though we do not see this occurring in the near term.  Even if it did, the railcar market ordered and produced a significant excess of railcars in the past few years, which is causing a structural and systemic change to the railcar market given that this massive oversupply is a more permanent issue and should suppress prices (and lease rates) for at least the next few years. Why do you dislike this stock more than their competitors? GATX does not have any purely direct competitors that trade publicly. Most of the railcar manufacturers, such as TRN, GBX, RAIL, and ARII have leasing businesses, but they are relatively small percentages of their revenue. In an environment in which railcars are extremely cheap, the option for a railroad to buy from the manufacturers can be more economical than leasing.  As such, we believe that GATX will be hurt most by this market situation, and the market seems to agree given GATX's recent underperformance of these peers. What price target do you have and how did you get it? We pitched a price target of $30 representing a 29.7% discount from the closing price on Sept. 30th.  We utilized discounted cash flow analysis, with a terminal growth rate of 2.5% and a terminal EV/EBITDA multiple of 8.0x.  We also ran a comparable companies analysis, but had greater conviction in our intrinsic valuation given that the peer set of the previously mentioned railcar manufacturers and leasing companies such as CIT do not primarily focus on railcar leasing, and hence do not represent great comps for GATX.
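For readers who want to see mechanically how a price target like this comes together, here is a minimal DCF sketch. It only illustrates the two terminal-value approaches the team mentions (a 2.5% terminal growth rate and an 8.0x terminal EV/EBITDA multiple); every other input in the example call (the cash flow forecast, WACC, net debt, and share count) is a made-up placeholder, not the team's actual model.

# Minimal DCF sketch illustrating the two terminal-value approaches the team cites
# (2.5% terminal growth, 8.0x terminal EV/EBITDA). All other inputs below are
# placeholders, NOT the team's actual model assumptions.

def dcf_per_share(fcf_forecast, wacc, terminal_growth, last_ebitda, exit_multiple,
                  net_debt, shares_out):
    # Discount each forecast year's free cash flow back to today.
    pv_fcf = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf_forecast, start=1))
    n = len(fcf_forecast)

    # Terminal value, method 1: Gordon growth on the final year's free cash flow.
    tv_growth = fcf_forecast[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    # Terminal value, method 2: exit EV/EBITDA multiple on final-year EBITDA.
    tv_multiple = last_ebitda * exit_multiple

    def equity_value_per_share(terminal_value):
        enterprise_value = pv_fcf + terminal_value / (1 + wacc) ** n
        return (enterprise_value - net_debt) / shares_out

    return equity_value_per_share(tv_growth), equity_value_per_share(tv_multiple)

# Placeholder example: five years of free cash flow (in $ millions), a 9% WACC,
# and hypothetical net debt and share count -- illustrative only.
growth_px, multiple_px = dcf_per_share(
    fcf_forecast=[250, 240, 230, 225, 220], wacc=0.09, terminal_growth=0.025,
    last_ebitda=550, exit_multiple=8.0, net_debt=1500, shares_out=42)
print(round(growth_px, 2), round(multiple_px, 2))

A model like this is typically sanity-checked against comparable companies, which is exactly the step the team describes above when explaining why they leaned on the intrinsic valuation rather than the comps.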
da369a61b3d3380bf2ac61757a1bac69
https://www.forbes.com/sites/kaitlynmcinnis/2020/05/11/qatar-airways-will-give-away-free-flights-to-100000-frontline-workers/
Qatar Airways Will Give Away Free Flights To 100,000 Frontline Workers
Qatar Airways Will Give Away Free Flights To 100,000 Frontline Workers Courtesy, Qatar Airways In honor of International Nurses Day 2020 coming up on May 12, Qatar Airways has announced that it will be giving away complimentary roundtrip airline tickets as a thank you to 100,000 healthcare workers across the globe who have been putting themselves in danger while protecting their communities and saving lives in response to the COVID-19 pandemic. "We at Qatar Airways are incredibly grateful for the commitment and hard work of healthcare professionals around the world who looked after people in these times of uncertainty," Qatar Airways Group Chief Executive Akbar Al-Baker said in a statement on the airline's website. "Their heroic display of kindness, dedication, and professionalism has saved hundreds of thousands of lives around the world." Healthcare professionals — including doctors, medical practitioners, nurses, lab technicians, clinical researchers, and pharmacists — from every country in the world will be eligible to win a pair of tickets. However, in order to ensure the application process is fair and transparent, each country will receive a daily allocation of tickets based on overall population size. The frontline workers who are selected will be able to book up to two complimentary economy class roundtrip tickets (one for themselves and one for a guest) to anywhere the airline flies. What's more, the selected healthcare professionals will also receive a 35% discount at any Qatar Duty Free retail outlet at the airline's hub, Hamad International Airport (HIA) in Doha. "We have built a strong level of trust with passengers, governments, trade partners, and airports as a reliable partner during this crisis and we intend to continue delivering on this mission by acknowledging the incredible efforts of these heroes," Al-Baker said. "Our crew and operation has never given up during these past three months, never abandoned hope or their mission to help people get home to their loved ones and we do not intend to do so now." Those selected will have until November 26 to book their trip, with the option to redeem their flight immediately, should they be trying to get home, or to save the airline tickets for a later date when it's safe to travel for pleasure once again. The airline notes that the complimentary flights will be fully flexible on both destinations and dates, given the current climate and the surrounding unknowns. "United in dedication, we share our gratitude. Now it is our turn to give something back to those on the healthcare frontline. There are no words or gestures that are enough to repay these brave men and women but we hope that our small offer of a complimentary return flight on Qatar Airways will allow them to enjoy a well-deserved holiday, visit family and friends or explore a destination they have always dreamed of, as travel restrictions start to ease," Al-Baker added. Frontline professionals around the world can apply directly on the Qatar Airways website starting at 5:01pm ET on May 11 through 4:59pm ET on May 18. The winners will be announced the following day.
01fa7a64f35c716b14766d59aa056ad4
https://www.forbes.com/sites/kaitlynmcinnis/2020/11/30/this-luxury-rail-and-river-safari-may-be-the-key-to-being-more-mindful-while-traveling/
This Luxury Rail And River Safari May Be The Key To Being More Mindful While Traveling
This Luxury Rail And River Safari May Be The Key To Being More Mindful While Traveling Photo: Ker & Downey® Africa. Ker & Downey Africa Group has this week announced its new slowed-down safari travel that is set to help travelers be more mindful abroad while also significantly lowering their carbon footprint in the process. The 12-day luxury rail and river safari will give travelers the chance to really step back and enjoy the art of slow travel. The trip will take travelers through South Africa, Zimbabwe, and Botswana aboard Africa's most iconic passenger liners—the Rovos Rail and Zambezi Queen. Beadle Photo Given that most of us have been at home or stateside for the majority of the year due to the global coronavirus pandemic, a more mindful, slowed-down approach to adventure travel comes as a great way to ease back into exploring the world without having to jump into the hustle and bustle that these epic international trips usually entail. "I hope that the way people travel will be vastly different from before with a much greater level of mindfulness and consideration towards the environmental and social impact of their holidays. Luxury train travel, as well as river safaris are the perfect way to slow down and experience Africa in a unique way. We hope this slow safari will allow guests to linger longer and truly connect to the destinations they visit in a more sustainable way," says Lee Kelsall, CEO of Ker & Downey Africa Group. What's more, flying in and out of countries on quick or multi-country trips is already known to be the biggest contributor to most travelers' carbon footprints. In fact, according to IFEU (Institute for Energy and Environmental Research), take-off and reaching altitude are the most fuel-intensive phases of a flight—so even opting for short flights between smaller countries isn't nearly as eco-friendly as rail or river travel. For more information about the new Rovos Rail and Zambezi Queen River Slow Travel Safari or to book a trip, be sure to visit the official Ker & Downey Africa Group website.
111637d0094fac6b793daa3e568a669d
https://www.forbes.com/sites/kaleighmoore/2019/05/19/new-report-shows-sustainable-fashion-efforts-are-decreasing/
New Report Shows Sustainable Fashion Efforts are Decreasing
New Report Shows Sustainable Fashion Efforts are Decreasing Report shows sustainability efforts are decreasing in the fashion industry. Photo credit: Getty The Pulse of the Fashion Industry 2019 Update report indicates that, as a whole, the fashion industry is slowing down on sustainability efforts. The report, created by Global Fashion Agenda and Sustainable Apparel Coalition in partnership with Boston Consulting Group, uses a scoring system called the Pulse Index to evaluate fashion companies' sustainability goals and implementation efforts. It showed that while the fashion industry improved its overall score by six points in 2017, the improvement in 2018 slowed to only four points. The slowdown in eco-friendly efforts is disappointing, considering data that illuminates the fashion industry's reputation for high waste-producing activities. As of 2016, the EPA estimated rubber, leather, and textiles make up more than 9 percent of all solid waste within the US. "The question is no longer whether it is necessary to improve sustainable business practices, but rather how long it will take before consumers stop buying from brands that do not act responsibly," the report stated. "The industry cannot wait for the consumer to lead this movement—it is up to fashion leaders to take bolder moves today to transition to a sustainable industry." Today's consumers are taking note of fashion brands' sustainability focus as well. The Pulse report showed shoppers are increasingly interested in fashion brands' eco-friendly efforts, with 75% of consumers indicating they view sustainability as either extremely or very important to them. Those same buyers are demonstrating their concern by making more purchases from fashion brands that have a focus on sustainability. Over 33% of consumers indicated in the same report they have switched brands to support those that take a public stance on environmental change. What's more: 50% of shoppers plan to switch brands in the future to support fashion brands that are environmentally friendly. There's an issue here, though. E-commerce expert Tracey Wallace pointed out that oftentimes sustainable fashion brands price their products beyond the average consumer's budget, which makes it especially hard for younger generations (like those in Gen Z) to afford and adopt sustainable fashion practices. "Right now sustainable fashion just isn't happening at the scale it needs to. Buying sustainable is often presented as something lifestyle influencers and celebrities do," Wallace said. "It feels like something unattainable. I understand there are supply chain issues, but that must be fixed." While brands like Athleta and Everlane continue to surge ahead with ongoing sustainability efforts, the transition to more sustainable practices does present a hurdle for most fashion retailers who want to make changes—and not just on the supply chain side of things, either. However, even with the slowing growth of sustainability practices in the fashion industry right now, there are still brands taking steps in the right direction to increase their eco-friendly efforts—despite the hurdles. Activewear brand Vyayama explained that for them, the reality of adopting more sustainable practices means accepting a slower product development and launch schedule, as the process often takes longer. "We've found there is definitely more time involved in developing, sourcing and marketing when you are committed to sustainability," said Vyayama founder Rachel Bauer. 
“We spent over a year just developing custom sustainable fabrics that met our standards.” For others, the key has been finding a marketing angle that complements sustainability efforts before diving in and implementing changes. Body Glove, a surfwear brand, capitalized on the synergy of a partnership with a well-known surfer to push ahead with its eco-friendly efforts. “Making Body Glove more eco-friendly and sustainable has been something we have talked about for years,” said Nick Meistrell, Body Glove’s Marketing Director. “Having future Olympic surfer Tatiana Weston-Webb join us as part of the team gave us the right launching pad to start shifting the brand in that direction.” Still, for other retailers, sustainability is an all-in effort. For Outerknown, a fashion brand offering both men's and women's clothing, eco-friendly efforts are an important brand pillar. Not only do they publicly share their extensive sustainability framework, but they also have made a commitment to working with partners who have Fair Labor Association accreditation. “There’s a reality that making things sustainably takes longer. It’s more challenging and costs more than making apparel the conventional way,” said Mark Walker, Outerknown’s CEO. “People buy items they’re excited about...they don’t buy ‘sustainability.’ The secret is making an amazing item that people are excited to purchase and having it consciously designed and created,” he said. Walker went on to explain that with growing consumer awareness around eco-consciousness, there’s also a broader shift in customer behavior happening right now, wherein shoppers are moving away from fast fashion. With greater mindfulness about fashion purchases, consumers are looking for quality goods that last longer and are more versatile. The question is: Will other fashion retailers take note and increase their sustainable practices to accommodate this shift in consumer preferences? We can most certainly hope so.
1a71e4bcad5734b8ebefb42f4f660aae
https://www.forbes.com/sites/kaleighmoore/2019/06/11/new-ways-the-beauty-industry-is-testing--sustainable-practices/
New Ways The Beauty Industry Is Testing Sustainable Practices
New Ways The Beauty Industry Is Testing Sustainable Practices Olay is testing refillable packaging for one of its top-selling products. Photo credit: Olay Data shows the packaging industry for consumer product goods, which includes personal care and beauty products, generates more than $25 billion in sales worldwide each year. However, this demand comes with a major environmental impact: As much as 70% of plastic waste generated by the industry isn't recycled. Instead, it ends up in landfills, according to the EPA. Of this market segment, the cosmetics and beauty industry is a large contributor to the waste problem. Zero Waste Week data revealed that in 2018, more than 120 billion units of cosmetics packaging were produced globally—the majority of which were not recyclable. When National Geographic recently took a deep dive into the cosmetics industry's reliance on plastic and the implications of the waste associated with it, they found that for US-made products, plastic packaging is now used 120 times more than it was in 1960. The good news is that small changes to these practices would make a major positive impact on the environment. Netherlands-based group LCA Centre found that if refillable containers were used for cosmetics, as much as 70% of carbon emissions associated with the beauty industry could be eliminated. Reusable packaging is exactly what beauty brands like Olay are already testing. Olay recently announced that for three months it will test its top-selling Regenerist Whip moisturizer sold in refillable packaging. This test period will begin in October as part of the company's larger sustainability plans. The effort is projected to save more than 1,000,000 pounds of plastic from entering landfills. Anitra Marsh, Associate Director of Brand Communications for Global Skin and Personal Care Brands at Procter & Gamble, explained that this program is just one step within the brand's larger commitment to making more of its packaging recyclable or reusable. She went on to say that if this pilot is successful, P&G will want to expand it across more product categories. "Olay hopes that this will pilot a new way of shopping for skincare and beauty products that could dramatically reduce the amount of plastic used in the industry," she said. Other major beauty brands have recently decided to test the waters with more eco-friendly efforts, too. Luxury brand Chanel just announced its minority stake in Evolved by Nature, a "green" chemistry company. However, there are beauty brands within the marketplace like Naturally Serious that are already fully committed to eco-friendly products and packaging. This brand offers recyclable packaging in Forest Stewardship Council-certified cartons that are manufactured with wind power in a carbon-neutral facility. Rochelle Jacobs, Managing Director of Naturally Serious, explained that for them, the goal was to create a brand that was not only made up of cleanly made and ethically developed formulas, but that also translated the same responsible message through its packaging. "Consumers are showing a great interest and need for a more sustainable lifestyle, and this also means ensuring their beauty products are fitting into this emerging category," she said. 
This shift toward more responsible packaging does indeed appear to be something beauty buyers are interested in and appreciate. A Harris Poll survey found that 59% of women over the age of 35 say purchasing eco-friendly beauty products is important to them. For New York-based beauty consumer Sara Zucker, this rings true. “I always read the packaging to see what the company stands for, which may impact my final purchasing decision,” she said on Twitter. “I’m sick of being wasteful, and recycling and sustainable practices make me feel like I’m doing less harm.” Another beauty buyer, Jessica Paoli, echoed this sentiment. “If I already recycle the shipping box, the packing slip, and the promo postcard, I also want to be able to responsibly discard the container for what I actually purchased,” she said. Will beauty brands rise to the challenge and lean into recyclable or refillable packaging? With legacy brands like Olay testing the waters, we can hope that more will follow suit.
b82477b1b3e4d5d1712b926d7b725ff0
https://www.forbes.com/sites/kaleighmoore/2019/06/19/is-experience-based-retail-the-secret-to-long-term-customer-relationships/?sh=12b360283fb1
New Data Shows Impact Of Emotion In Experience-Based Retail
New Data Shows Impact Of Emotion In Experience-Based Retail Experience-based retail like Drunk Elephant's "House of Drunk" works to forge emotional connections with shoppers. Photo credit: Drunk Elephant Experience-based retail isn't a new idea in 2019. In fact, companies are building entire business models around the concept. Take SHOWFIELDS, for example, which pivots on the idea that customers need more than a standard in-person shopping experience. Offering Instagrammable photo opportunities, events, workshops, and more, it leans fully into the idea that experiences are the secret to consumer attention and buy-in. But adoption of experiential retail hasn't yet hit the mass market. In fact, PSFK's 2018 report on the future of retail found that only 55% of C-suite retail leaders plan to invest part of their marketing dollars into building out experiences within retail stores through 2020. Go into an average retail store today and you'll see that data in action: overall, not much has changed. This is a missed opportunity, according to a new study conducted by Forrester and FocusVision. Their research sheds light on why retailers standing on the sidelines of experience-based retail are missing out on important moments to build emotional connections with shoppers. The study showed that the way customers think and feel about a brand experience carries a 1.5x influence factor on their brand-oriented actions. The study also showed that 93% of retailers believe customers are more likely to spend money with a brand they feel connected to. What's one of the easiest ways to forge those connections? Physical retail with an experiential element. "Immersive retail experiences are all about creating an environment that helps a customer connect with a brand in-person, establishing an emotional connection that influences how they think and feel about the brand," said Dawn Colossi, Chief Marketing Officer at FocusVision. HBR research echoes this thought with data that indicates customers who are fully connected to a brand are 52% more valuable on average than those who feel highly satisfied (but are not fully connected). So why are brands still slow to participate when it comes to experience-based retail? Not everyone is still on the sidelines. More retail brands, especially those within the beauty and fashion industries, are testing the waters with experience-based retail as a way to forge deeper emotional connections with customers. They're also using experiences to let customers get hands-on with products that are difficult to fully evaluate in an online context, such as cosmetics and clothing, which buyers often want to touch, feel, and try on. Most manifest an "experience" in different ways and at different scales. Beauty brand Drunk Elephant, for example, is going the short-term pop-up route. Recently unveiling its first standalone retail space outside of Sephora in New York City, the brand has created the "House of Drunk," an interactive retail experience where shoppers can try new products and get tips from on-site beauty experts. 
Once inside, shoppers get access to the brand’s new formulas and can visit a “skincare confessions” area to record videos with first impressions of new products (which will likely be a powerful source of data and customer feedback for the brand.) Tiffany Masterson, Founder of Drunk Elephant, said the idea behind the House of Drunk was to create an immersive environment that felt exactly like the customer is stepping into the brand itself. “With this experience-based store, we’re giving visitors the opportunity to get insider access, to ask questions, and to try the products,” she said. “The larger goal is to share our brand philosophy and to build deeper connections with those who visit.” Other brands are opting for smaller-scale deployments of this model by partnering with large retailers. Nordstrom, for example, has been rolling out an ongoing series called New Concepts@Nordstrom Men’s. Created by Sam Lobban, Nordstrom’s VP of Men’s Fashion, these are small pop-up shops within select Nordstrom locations that create an experience around select brands in-store. The most recent of these is Concept 004: Patagonia, the fourth of the series. The experience, which features a collection of summer and “Worn Wear” items from Patagonia, gives shoppers the chance to get hands-on with products within an artist’s representation of the brand. The goal: Help shoppers establish an emotional connection to Patagonia’s eco-friendly efforts. “Each of the shop’s spaces has been designed with special fixtures created by artist Jay Nelson from reclaimed and sustainable lumber sources,” said Lobban. “Concept 004 is a showcase of Patagonia’s pioneering efforts in reuse, recycling, and Fair Trade.” These examples of experiences show how brands are capitalizing on a chance to build emotional bonds with customers—but does that mean experiences are the secret to brick-and-mortar retail success? Not quite. Retail analyst Ana Andjelic, for example, believes that in order to produce long-term results, experiences need to be tied to a strategy that works to connect with customers at a deep emotional level, forging long-term brand buy-in. She elaborated on her perspective in a recent opinion piece for AdAge, saying that while the appeal of experiential retail is understandable, there is too much saturation and not enough differentiation between experiences. “They are all alike,” she writes. “The outcome is a Disneyland without themes, pure story-less razzle-dazzle and thrill.” Her recommendation? Brands should instead strive to become a destination for recurring community gatherings rather than a one-and-done Instagram photo opportunity. Fashion retailer Planet Blue is doing just that with event-focused experiences that work to help build emotional connections with customers and position the brand as a community hub. They're hosting a series of in-store events that leverage partnerships with the clothing brands sold under its umbrella and local influencers. Planet Blue's CEO Eddie Bromberg explained that the experience-based events allow them to bridge the gap between brand and customer in a way that just isn't possible online—and in a way that keeps shoppers coming back again and again. So far, the approach is paying off. 
A recent tie-dye event at Planet Blue’s Santa Monica retail location in partnership with denim brand AGOLDE drew more than 150 attendees and resulted in the store selling three times more AGOLDE jeans than they would on a day without an experience-based event, all while positioning the store as a place for shoppers to meet up with like-minded individuals. No matter where you fall on the debate, it’s clear that experiences can indeed help brands forge emotional connections with customers. The question is: How can they integrate them in a way that keeps customers coming back to keep those bonds alive?
d1b12b0c30f6f010ab9d35765a574450
https://www.forbes.com/sites/kaleighmoore/2019/09/20/how-an-outdoor-retail-brand-leveraged-cryptocurrency-to-engage-young-consumers/
How An Outdoor Retail Brand Leveraged Cryptocurrency To Engage Young Consumers
How An Outdoor Retail Brand Leveraged Cryptocurrency To Engage Young Consumers Brands like Fjallraven are experimenting with digital currency in innovative ways. Getty The apparel industry has been testing the waters with digital and cryptocurrencies for the past several years, experimenting with them as tools for everything from education to authentication—and now, consumer engagement. Babyghost, a Chinese fashion label, for example, launched a branded digital currency by teaming up with VeChain and BitSE to tell customers the stories behind the garments in their upcoming collection. This year, luxury fashion group LVMH (owner of iconic brands like Louis Vuitton and Dior) also announced a foray into blockchain-based cryptocurrency aimed at tracking and proving the authenticity of its goods. Now, digital currency is being leveraged as a way to drive engagement with younger audiences and digital natives. We see this illustrated by outdoor apparel and equipment brand Fjällräven, which launched a virtual scavenger hunt at large universities across the US this August wherein participants could use their phones to hunt for digital currency dubbed "Fox Coins" (named after the brand's logo). Fox Coins, a digital currency built on proprietary and programmable blockchain tokens known as Vatoms, served as entries for the brand's "Kånken-a-Day Giveaway" campaign. Participants could enter to win a free Kånken backpack every day during the month. The utility of the digital currency didn't stop there, however. Participants who entered an email address online were also sent an invite link where they'd find a branded wallet including a Fox Coin as well as access to the geo-locations of other nearby entries (in the form of Fox Coins) on a map via the user's phone-based GPS. An added bonus: This technology got people outside–which is one of the brand's core values. "This project reinforces our longstanding values of going on an adventure in nature, navigating new territory, and welcoming everyone to the outdoors," said Nathan Dopp, Fjällräven's CEO of Americas. Once collected, Fox Coin owners could then redeem their digital currency for small branded items or discounts on the brand's higher-ticket items. Dopp went on to share that in past iterations of the Kånken-a-Day campaign, consumers could simply re-enter their email address every day for more chances to win a Kånken bag. With the Fox Coin, however, they were able to test the waters with a new form of storytelling via technology–while still staying true to their legacy and brand heritage. "The focus of the Kånken-a-Day Giveaway campaign was to drive brand loyalty and increase engagement with the Kånken audience," Dopp said. "We're investing in virtual technology and shifting our focus to a platform that feels natural to digital natives, and we see this as an important next step in content creation." The internal strategy behind this campaign and the digital currency was to expand the brand's presence through a new channel that would allow them to share their 60-year history of sustainability and quality—but with the functionality and flexibility to personalize and gamify the experience. 
The results of their first run with branded cryptocurrency were impressive: Fjällräven saw a 253% increase in contest entries compared to the previous campaign in November 2018 without the virtual currency element. What’s more: They exceeded their cost per acquisition goal by 56%. "What made this campaign a game-changer was Fjällräven’s vision to use an entirely new ‘post-ad-click’ experience to engage the mobile-first audience we were seeking,” says Tyler Moebius, CEO of digital advertising platform provider FastG8. “We tapped SmartMedia Object technology by air dropping augmented reality enabled coins in order to meet consumers where they were.” Looking ahead, this program will serve as the basis of a forthcoming loyalty program for the brand as well. While this particular experiment with digital currency was successful, Brendan Witcher, Forrester’s Vice President and Principal Analyst of Digital Business Strategy, warns that brands considering their own forays into crypto or digital currencies should be certain those experiences are rooted in value (rather than pure novelty.) “Novelty can work to build a customer base, but ultimately those experiences need to be replaced with something that adds meaningful value in order for the relationship to be sustainable,” he said. He went on to say that companies looking to utilize digital technology to improve customer experience need to take a hard look at what customers expect from experiences today. “A mistake companies often make is creating something innovative, like a cryptocurrency application, when they haven’t gotten the basics of good customer engagement right yet,” he said. As other brands consider how similar tech might be beneficial to their audiences, they’ll need to evaluate whether or not they’ve first perfected their primary forms of customer engagement—like email, social media, and in-store experiences.
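As a back-of-the-envelope reading of the campaign results quoted earlier in this article, the two headline figures reduce to simple ratios. In the sketch below, the raw entry counts and dollar amounts are made-up placeholders (only the 253% lift and the 56% figure come from the article), and "exceeded their cost per acquisition goal by 56%" is read here as the realized cost per acquisition coming in 56% below target, which is an assumption.

# Back-of-the-envelope reading of the campaign results quoted above.
# Entry counts and dollar figures are made-up placeholders; only the 253% lift
# and the 56% CPA beat come from the article, and the CPA reading is assumed.
entries_2018, entries_2019 = 4_000, 14_120           # placeholder entry counts
lift = entries_2019 / entries_2018 - 1
print(f"Entry lift vs. prior campaign: {lift:.0%}")   # -> 253%

cpa_goal, cpa_actual = 10.00, 4.40                    # placeholder dollars per entry
beat = 1 - cpa_actual / cpa_goal
print(f"CPA came in {beat:.0%} under goal")           # -> 56%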
2021a263434e7bb55f44a555e27160b8
https://www.forbes.com/sites/kaleighmoore/2019/10/07/how-glossiers-new-employee-program-gathers-rich-customer-insight/?sh=52e8499e202d
How Glossier’s New Employee Program Gathers Rich Customer Insight
How Glossier’s New Employee Program Gathers Rich Customer Insight Full-time employees at Glossier can work a shift on the floor of one of the brand's retail stores to ... [+] get face-to-face with customers. Glossier Beauty company Glossier has launched a new program aimed at helping its more than 200 full-time employees exercise the brand motto of “devoted to the customer.” How? By having them work a shift on the floor of one of its retail stores. The new program, which is now part of the brand’s systematic on-boarding process for new hires, allows team members to get up close and personal with customers. It’s also something Glossier is working toward having all of its employees who work at headquarters complete. The program was spurred by an idea from Ali Weiss, Glossier’s SVP of marketing. Weiss, who has been with Glossier for four years now, has been at every one of the company’s store openings—which so far includes long and short-term shops in Los Angeles, New York City, Boston, Miami and Seattle. “For me, being there in person is an exciting reminder of how all the days we spend thinking about customer experience are well-spent,” she said. “Being in our pop-up stores and long-term retail locations allows me to gather real-time customer insights across another important touchpoint and helps me look for themes and patterns in customer feedback.” The more Weiss thought about what an invaluable experience it was for her to be at the store’s different retail locations in person, the more she wanted her teammates to be able to see and experience the same for themselves. MORE FOR YOUAerie Looks To Double Sales To $2 Billion As Shoppers Ditch Victoria’s SecretMacy’s First Branch Store, Located In The Bronx, Turns 80 This YearIs Belk The Next Retail Domino To Fall? In September, she proposed the idea for the program as a company-wide effort. The Glossier team agreed it was a smart idea. As a result, Glossier’s full-time employees can now sign up to get matched with an in-store employee (which the brand calls “editors”—a nod to its editorial roots with Into the Gloss) to work a two-hour shift in one of the brand’s physical retail stores. During that time, the employee is able to interact with customers all while shadowing an in-store editor’s job, both on the back-of-store order fulfillment side and on the customer-facing front-of-store side. Weiss explained that the goal of the program is threefold: to give employees the chance to connect with Glossier customers in person, to offer logistical insights around their high-traffic retail stores, and to help team members build functional empathy for their fellow teammates and the brand’s customers. An added perk: It also gives employees the chance to see firsthand the results they’re helping produce day-to-day in their full-time roles at the brand’s headquarters, which drives overall employee engagement. Since launching the program two months ago, one-fourth of Glossier’s full-time team members have already completed a shift. They’ve also expanded the program to include an online customer experience component as well, which allows for a similar firsthand experience, but within Glossier’s online sales environment. “As a digital-first company, our physical retail presence is still fairly new, so it’s important to give our employees scalable insights from customers who are making pilgrimages to our physical stores,” Weiss said. Consumers think this is a smart approach as well. 
Jenna-Mae Bilmer, who works as a showroom associate at Knix, said she thinks it’s incredibly valuable for employees to have experience in as many different roles as possible so they can fully understand the customer. “These types of programs also show that a brand’s headquarters has a human element and that they’re making an effort to interact with the very people who’ve allowed them to grow as a company,” Bilmer said. The long-term vision for Glossier’s program is that it will not only help unite the brand’s two employee populations of both offline and online team members, but that it will also give the brand invaluable customer insights on a regular basis.
c32bcfe2fa2200ff784065155b413c6e
https://www.forbes.com/sites/kaleighmoore/2019/10/18/how-two-female-led-fashion-brands-are-mastering-responsible-production/
How Two Female-Led Fashion Brands Are Mastering Responsible Production
How Two Female-Led Fashion Brands Are Mastering Responsible Production Two female founders share their personal insights into responsible fashion manufacturing ahead of ... [+] Sustainability Day on October 24th. Getty A new report from McKinsey shows that among mass-market apparel brands, only 1% of new products introduced during the first two quarters of this year were sustainably made. It’s a surprising finding, considering the same report showed online searches for ‘sustainable fashion’ have tripled since 2016. While consumer demand for responsibly-made apparel is on the rise, it seems that most retailers are dragging their feet when it comes to sustainability efforts. Of course, there are retailers within that 1% that are pushing ahead with more responsible fashion production models. Everlane champions eco-friendly and transparent manufacturing, ASOS features a variety of ‘eco brands’, and GAP has ongoing efforts to use only sustainable cotton in production by 2021, to name a few examples. But smaller, more agile fashion brands seem to be finding success leaning into responsible production models. Ahead of Sustainability Day next week on October 24, I spoke with the founders of two female-led fashion brands to learn about how they’ve navigated the path to responsible production, as well as the why driving those efforts. For Kayti O'Connell Carr, founder of clothing brand MATE The Label, her experience navigating responsible manufacturing started with old-fashioned trial and error. MORE FOR YOUWhy Retail Key To Survival May Be A Hub Model Where The Consumer Is The PilotFrom Silver Bullets To Strategic Overhauls: 9 Retail Trends, Tactics & Innovations For Success In 2021The Best Running Clothes To Smash Your Couch To 5k Plan As part of her research into potential production partners for her company’s clothing items, Carr went in person to visit a variety of different US-based vendors. In doing so, she was able to see for herself what factory employees dealt with on a daily basis, both in the factory and in the dye house. The more she learned about fabrics, processing, and production, the more her passion for responsible, eco-friendly practices grew—not just on the employee side, but on the consumer side, too. “I realized it's not just the consumer that is being exposed to harmful chemicals, but also hundreds of thousands of global garment workers,” she said. “I was shocked at how common it is for workers to be exposed to toxic chemicals and microfibers all day if they don't wear a mask.” As a result, Carr ended up working with a Los Angeles-based factory (the same one used by Everlane) that aligns with the brand’s responsible production values and quality standards. It’s more expensive, but Carr says she tries to look at the full cost from a footprint perspective and to remember that it’s a more environmentally responsible path at the end of the day. Today, 95% of MATE’s product is made within five miles of its headquarters, and the other 5% is made within a 10-mile radius. She explained that while it’s rare to have such a localized supply chain, it’s allowed her to consider the full life cycle of the brand’s products and how sustainability practices fit in at every step. For Nicola Harlem, founder of luxury clothing and outerwear company The Curated, sustainable production has been a priority from day one—and it’s proved a highly profitable path. The brand’s luxury clothing items (most of which fall in the $350 price range) are made using leftover fabric remnants from factories. 
While this means quantities are extremely limited, the resulting scarcity means their items almost always sell out. Having since shifted to a pre-order model, The Curated maximizes production and quickly moves through product runs that sell out before they are even finished being manufactured. At the same time, this approach means they produce no excess product and can operate with zero waste. “Because our fabric is not a commonly used composition, there’s often very little of it left in stock—so if we do find it, we buy it and release a limited edition of colors,” Harlem explained. The strategy is paying off: In September, The Curated’s most recent batch of products generated $45,000 worth of pre-orders in the first hour alone. A year in, the brand has already seen revenue in excess of seven figures. Harlem believes this responsible approach to production is also more ethical, in that it encourages consumers to invest in quality staple pieces they can use for many years rather than opting for cheaper, lower-quality items that are quickly cycled through. Industry experts like Mike Colarossi, Vice President of Product Line Management, Innovation, and Sustainability at Avery Dennison, say that more apparel retailers need to follow suit and look to sustainable production processes. “As the second-largest polluter in the world (behind only oil and gas), the apparel industry has an obligation to act,” he said. “Data consistently shows that sustainability is simply good business.” He went on to say that he believes adopting sustainable production practices will help ensure the longevity of companies in a highly competitive fashion market by allowing them to address growing consumer demand.
dd00b1c311d5ac610bf29b493faea58c
https://www.forbes.com/sites/kaleighmoore/2020/03/08/today-is-international-womens-day-but-female-retailers-angel-investors-and-entrepreneurs-support-the-cause-year-round/
Today Is International Women’s Day, But Female Retailers, Angel Investors, And Entrepreneurs Support The Cause Year-Round
Today Is International Women’s Day, But Female Retailers, Angel Investors, And Entrepreneurs Support The Cause Year-Round March 8th is International Women's Day, but female retailers, VCs, and entrepreneurs are working to ... [+] support fellow women year-round. Getty Today is International Women’s Day (IWD), which aims to inspire action and shared ownership around gender parity. “Celebrating women's achievements and increasing visibility while calling out inequality is key,” the IWD website says. In the realms of retail, venture capital, and entrepreneurship, women are working on these efforts not just on this one day—but year-round. I spoke to a few to find out what they’re doing to empower fellow women. Leveraging Expertise to Educate and Mentor Maria Hatzistefanis, founder of Rodial Rodial Maria Hatzistefanis, the London-based CEO and founder of international beauty brand Rodial, supports fellow female entrepreneurs through mentoring and education. Aside from working with organizations like the British Fashion Council to support and promote young and emerging talent, she’s also put her business expertise into a book called How to Make it Happen, which offers strategies and tips to entrepreneurs working to launch new businesses. MORE FOR YOUWhy Retail Key To Survival May Be A Hub Model Where The Consumer Is The PilotFrom Silver Bullets To Strategic Overhauls: 9 Retail Trends, Tactics & Innovations For Success In 2021The Best Running Clothes To Smash Your Couch To 5k Plan “Success is never straightforward, and I know how hard it is to find your motivation and keep going—especially in the face of self-doubt, rejection, and unexpected setbacks,” she said. “I wanted to create a guide using examples from my own journey with an emphasis on how to learn from mistakes and turn challenges into success.” She said that because we live in a world of instant gratification and idolization of overnight success (both of which are illusions), she wanted to be honest about her own entrepreneurial journey and talk about the challenges, mistakes, and changes in direction that she experienced while building a successful international company. Her hope is that honesty and transparency around her own retail journey will help a new generation of female entrepreneurs understand that everyone is facing similar challenges—and that they’re not alone. Focusing on Company-Wide Diversity & Inclusion Brianne Kimmel of Work Life Ventures Work Life Ventures Brianne Kimmel, the angel investor behind early stage venture firm Work Life, is focusing her efforts on an important issue that’s often missing from conversations surrounding investment in the future of work: Diversity and inclusion. As part of those efforts, Kimmel meets with every new female hire within her portfolio companies—and while this is becoming increasingly non-scalable, she believes this high-touch, hands-on aspect of her fund is essential to ensure partner companies are thoughtful about diversity and inclusion from the very beginning. “Because I have a very focused fund, it’s important to maintain this community and ensure that people know their voices are heard,” she said in conversation with Kate Clark of TechCrunch. 
“I’m very mindful that I’m a female General Partner—and I feel proud to have that title.” Supporting and Investing in Female-Led Businesses Coco Meers of Equilibria and Rebelle Collective Coco Meers Before Coco Meers started PrettyQuick, a SaaS company that was eventually acquired by Groupon, she was an International Brand Manager for retail beauty brand L’Oreal. She left that role in 2009 to get her MBA from the University of Chicago Booth School of Business, and today she’s the co-founder of Equilibria, a female-focused CBD company with a service-based dosage component. But that’s not all: She’s also an active angel investor and advisor focusing on digitally-enabled consumer companies (many of which are female-led) through her fund Rebelle Collective. She feels lucky to have had entrepreneurial women support her along her journey so far in the form of advice, mentorship, and capital—and believes that other women should have that same opportunity. “Across early and late stages, there’s a glaring gender gap in business leadership: Less than 3% of women-founded companies make it to $1 million in sales, and women receive less than 3% of the billions deployed in venture funding,” she said. “To bridge this gap, it’s our duty to reach back and pull other women forward—by writing checks, advising, acquiring, or helping them navigate their public exit. We need more women supporting women across the value chain.” Empowering At-Risk Women Through Product Lorin Van Zandt of Missio Missio Missio co-founder Lorin Van Zandt has worked with women emerging from human trafficking for years—which inspired her to create a hair care product line that serves as a vehicle to educate stylists and consumers on the issue. Along with education on victim identification, the brand also mobilizes stylists to advocate and serve women who’ve been involved in human trafficking while donating a portion of all product sales to non-profit partners focused on the issue. “Having worked for 10 years with various non-profits, we know firsthand the challenges these organizations face when it comes to the important work they do,” Van Zandt said. “As our company grows, our goal is to reduce the financial strain these organizations face on a yearly basis with gifts that meet their needs.” While these are just a few examples of successful female entrepreneurs who are making strides when it comes to supporting fellow women, there are many more who are doing this important work as well. In 2020, let’s hope this trend continues to grow.
d43fb835413addf1e70f3e302d553516
https://www.forbes.com/sites/kaleighmoore/2020/04/28/retailers-deploy-pre-order-and-backorder-models-to-circumvent-supply-chain-delays/
Retailers Deploy Pre-Order And Backorder Models To Circumvent Supply Chain Delays
Retailers Deploy Pre-Order And Backorder Models To Circumvent Supply Chain Delays As retailers across the US work to solve supply chain issues related to the delays and ripple effects of COVID-19, some are turning to pre-orders and backorder models as stop-gap solutions. These models allow retailers to continue to sell and generate revenue while keeping customers informed that fulfillment will take a bit longer than usual. As a recent report from McKinsey stated: “Open communication with customers can fill in at least some gaps [as retailers work] to navigate uncertain and ever-evolving conditions successfully.” However, especially for direct-to-consumer retailers, there are some important issues to consider around the pre-order model before deploying it online. Helena Price Hambrecht, co-founder of direct-to-consumer alcohol brand Haus, says retailers should only take pre-orders for what they can deliver within a concrete time frame—and customers should be well-informed about that date. MORE FOR YOUWhy Retail Key To Survival May Be A Hub Model Where The Consumer Is The PilotFrom Silver Bullets To Strategic Overhauls: 9 Retail Trends, Tactics & Innovations For Success In 2021The Best Running Clothes To Smash Your Couch To 5k Plan This is one way brands can build a wait list and continue to meet demand, albeit on a slightly delayed timetable. If you’re not certain when you’ll be able to ship the product, however, Hambrecht says you shouldn’t continue to sell. “If you cannot confidently share a delivery date with the customer, you are sold out,” she said. “Don’t sell what you can’t deliver.” With this in mind, Hambrecht herself deployed the pre-order model within her business for a collaboration called The Restaurant Project. Haus has teamed up with restaurants across the US to develop custom, limited-edition flavors that reflect the menus and individual spirit of each partner restaurant. Pre-orders for items from The Restaurant Project are expected to ship in May. Haus 100% of the profits from these pre-orders go to the restaurants, enabling them to support employees and cover costs during COVID-19. “Pre-order a product from your favorite restaurant and we pay the restaurant immediately, so your purchase has an immediate impact,” the Haus website states. “We’ll send your bottle when it’s ready in May and keep you posted every step of the way.” With nine collaborations available for pre-order on the Haus website, this model is already paying off: They shared that they’ve written more than $50,000 in checks to their restaurant partners so far. This approach isn’t just for brands launching new collaborations, either. In another retail vertical, direct-to-consumer kitchenware brand Caraway is putting the backorder model to work during the COVID-19 crisis as they work to ensure the health and safety of their supply chain partners. With clear communication on product pages for backordered items like their cookware set, they make it clear when customers can expect to receive their orders. Caraway includes details on expected ship date on its product pages. Caraway So far, they shared that they’re not seeing a dip in sales with this model compared to when they had active stock four to five weeks ago—demand has actually increased. 
“Our customers have been incredibly understanding, and most are purchasing in the mindset that their new cookware is an investment they intend to treasure—so a few extra weeks is insignificant compared to the end value,” said Kaleel Munroe, Caraway’s head of brand marketing. Right now, Caraway’s website states that backordered items are expected to ship on May 20th—again maintaining clarity for shoppers around when their items will arrive. So what does a retail expert say about using pre-order and backorder models? If you ask Neil Saunders, Managing Director of Retail at GlobalData, pre-order and backorder models work reasonably well for brands selling compelling products consumers want and are willing to wait for. “At present, when most of us are on lockdown and unable to go out, waiting can be less of an issue,” he said. “Regardless of the situation, shoppers need clarity around when the product will be back in stock and when it will be shipped.” As retailers scramble to figure out workarounds for supply chain interruptions right now, we may see the pre-order and backorder model continue to grow in popularity for those who have clarity on when their inventory issues will be resolved. For those who don’t know when they’ll be able to replenish sold-out inventory, deploying these models may damage customer relationships.
9a8a3d9833cc03ae27403bd557d693d1
https://www.forbes.com/sites/kaleighmoore/2020/06/08/on-world-oceans-day-2020-how-is-the-fashion-industry-taking-action/
On World Oceans Day 2020, How Is The Fashion Industry Taking Action?
On World Oceans Day 2020, How Is The Fashion Industry Taking Action? Today is World Oceans Day, but select apparel brands are taking action year-round to reduce ocean waste. Getty While sustainability may not be a top-of-mind issue for consumers at the present moment, some fashion and apparel brands are continuing to lean into their efforts aimed at reducing ocean waste despite that fact—which is relevant today on World Oceans Day. The reality is that more than 8 million metric tons of plastics enter the oceans each year—on top of the 150 million metric tons currently circulating in marine environments. This is a pressing issue when you consider scientists have found that the plastic waste we can see and measure only accounts for a small percentage of the total amount of plastics that enter the ocean. Consumer demand is also causing apparel brands to shift their focus to more responsibly-made products. The US market for sustainably-made products is projected to reach $150 billion in sales by 2021, according to Nielsen. Traackr data shows that discussions around sustainable fashion are on the rise as well: mentions of sustainable fashion among influencers have increased 55% in recent years, bringing audience engagement along with it. As such, several fashion brands are taking steps to incorporate recycled materials pulled from the ocean into their production models in various ways. Brands taking action to reduce ocean pollution In the swimwear vertical, this year L*Space introduced a new eco-friendly line of swimwear leveraging materials like Econyl and Repreve that are made from non-virgin materials like fishnets and recycled ocean waste, which they plan to expand further. L*Space's eco-chic line L*Space Founder Monica Wise said that while she knows the brand still has a long way to go to become truly sustainable, it all starts somewhere—and producing less waste is a good starting point. As such, they’re also introducing new packaging made from recycled materials. Similarly, in 2020 Body Glove released an eco-conscious swimwear line in collaboration with professional surfer and 2021 Olympic qualifier Tatiana Weston-Webb that’s made from recycled materials. Activewear brand Wolven manufactures the majority of its apparel with OEKO-TEX certified recycled fabric (made from recycled plastic bottles) and touts that each pair of leggings sold helps remove one pound of plastic from the ocean—which is part of the brand’s transparency around its sustainability efforts. Wolven's leggings remove a pound of plastic from the ocean. Wolven Recently named Climate Neutral Certified, they also promote that they have a smaller carbon footprint because their garments are ethically produced in Asia (where the fabrics are made). In May of 2020, clothing company Desert Dreamer rolled out a new line called ‘Revive’ with items made with yarn composed of a blend of recycled cotton and recycled polyester. In their case, they plan to use their influence as one of the leading brands sold in PacSun to encourage more sustainable practices with other retail partners. Adidas Parley collection Adidas And in the footwear vertical, Adidas has its Parley line with shoes made from recycled ocean plastics.
In 2020, they project that they’ll manufacture between 15 and 20 million pairs of shoes made with ocean plastics, up from just 11 million in 2019. Insights from sustainability experts So what do experts within the sustainability space have to say about brands taking action to reduce ocean waste on World Oceans Day (and year-round)? Andrea Kennedy, assistant professor at LIM College, says that first, it’s important to understand a distinction: The plastic used in textiles is actually often beach plastic (rather than ocean plastic). Plastic that has been in salt water for extended periods becomes very brittle, and thus is harder to break down and re-spin. “The goal, then, should be to keep more plastic from entering our oceans and thus further degrading them—but combing beaches and getting plastics that have washed up is a first step,” she said. “In general, it’s far better to use ocean plastic to create new synthetic fibers for apparel rather than creating new synthetic fibers, as these are derivatives of oil and petroleum.” If you ask journalist Jasmin Chua, who focuses on the fashion industry’s environmental impacts, she says that using recycled ocean plastics is a good idea—but that we also need to ask critical questions of brands using ocean plastic in their marketing, as recycled plastic still sheds microfibers that enter oceans. “Ocean plastic isn’t a perfect solution—consuming less overall is still our best way forward,” she said. “Nothing purchased new is completely guilt-free.”
3548ad543b1300036168db5f300ece86
https://www.forbes.com/sites/kaleighmoore/2020/11/15/america-recycles-day-2020-how-are--fashion-brands-making-progress/?sh=200f38e25da1
America Recycles Day 2020: How Are Fashion Brands Making Progress?
America Recycles Day 2020: How Are Fashion Brands Making Progress? Getty Today marks the only nationally-recognized recycling day within the US, dubbed “America Recycles Day.” With the goal of encouraging more mindful consumption and responsible recycling, this national day of awareness works to address growing consumer concerns around long-term sustainability and the eco-friendliness of everyday items. There’s a clear financial incentive for brands to undertake these efforts: IBM survey data indicates that 69% of environmentally-conscious buyers willingly pay a premium for recycled products. That’s good news, considering there’s still plenty of room for progress—especially when it comes to circular fashion. If you ask Alden Wicker, a sustainable fashion journalist and founder of EcoCult, the fashion industry has a long way to go when it comes to leveraging recycled materials and more responsible production. “Brands do a lot of talking, but few back their marketing up with hard numbers on progress made,” she said. “We need industry-wide regulation. Voluntary action based on consumer sentiment isn’t working.” This call to action ties into one of the three main pillars of America Recycles Day, which encourages brands to create products made with recycled materials. Within the apparel sector, select footwear brands are stepping up to this particular challenge and finding inventive ways to leverage more circular manufacturing methods. Footwear brand Avre, for example, integrates recycled plastic materials into its shoe manufacturing process to prevent those items from entering landfills and oceans. Other footwear brands are investing significant R&D into creating performance-level products using circular materials. Footwear brand Veja spent five years developing its Condor running shoe—the second iteration of which is made from 57% bio-based and recycled materials. Veja Styles created in partnership with Rick Owens are produced in southern Brazil: The upper 3D knit of the shoe is made from 100% recycled plastic bottles while the sole is 46% sugar cane, 8% banana oil, and 3% natural cork combined with 30% Amazonian rubber and 31% rice waste. Also in the running shoe vertical, HOKA ONE ONE recently introduced its Challenger ATR 6, which is made using recycled Unifi Repreve yarn in the primary and collar meshes—a material derived from post-consumer plastic waste. Salomon is also currently working on its 100% recyclable running shoe: The Index.01. Salomon Set to release in Spring 2021, the Index.01 has a sole made from a nitrogen-infused TPU-based foam called Inifiniride, which can be ground into tiny pieces and recycled when the shoe reaches the end of its life. To recycle these shoes, owners simply send their used pairs to the closest collection center via a prepaid shipping label from the brand, where they are then washed, disassembled, and recycled. Long-time players in this space are leaning in as well, despite the uncertainty posed by the ups and downs of 2020 thus far. For example: Footwear sustainability pioneer ALDO is deepening its commitment to sustainable production, recently releasing its first-ever sustainable sneaker, called the RPPL.
RPPL sneakers are made from recycled plastic bottle yarn and lake algae, while the sole is formulated with BLOOM foam, a low-carbon material derived from lake algae biomass. From a consumer perspective, these recycling-focused production efforts are a step in the right direction—but quantifiable reporting on outcomes of these efforts and greater transparency will be crucial as shoppers look more deeply into eco-friendly claims. Consumers have to step up and do their part as well—which means accepting that products made from recycled materials often cost more. “Are shoppers willing to pay for the higher price tag that comes with sustainable goods?” asked Wicker. “I think they would be if they could know with certainty what sustainability means, which items are more sustainable, and by how much.”
a952636cf1b1d094092317ad7cf1e11c
https://www.forbes.com/sites/kalevleetaru/2015/09/20/wireds-apple-news-experiment-what-it-says-about-the-future-of-journalism/
Wired's Apple News Experiment: What It Says About The Future Of Journalism
Wired's Apple News Experiment: What It Says About The Future Of Journalism The silhouette of an attendee is seen using an Apple Inc. iPhone after a product announcement in San... [+] Francisco, California, U.S., on Wednesday, Sept. 9, 2015. Photographer: David Paul Morris/Bloomberg On Friday Wired published an article about Bjarke Ingels, the Danish architect whose company was selected to design Two World Trade Center. What would have been an otherwise ordinary Wired article has generated controversy because Wired chose to make this particular article available exclusively to Apple News users for four days, becoming available to regular readers only this coming Tuesday. Ordinary web users trying to access the article until then are met with the message “This story is being previewed exclusively on Apple News until Tuesday, September 22nd. Please check this page again at that time. To view this story in the Apple News app on your iOS 9 device, follow this link…” Yet, Wired’s experiment with an Apple News exclusive is more than a simple marketing ploy, it has profound implications for the future of how we access news online and the increasingly fragmented and algorithmically-controlled future that will determine what we know about the world around us. Rewind the clock to a Sunday afternoon 25 years ago in 1990. Wishing to catch up on all of the latest developments before the start of the new work week, you head out to the local bookstore or newsstand, where you peruse shelves upon shelves of periodicals and newspapers touting the latest in-depth technology news, business interviews, sporting scores, local events, food and travel reviews, international news, and more. If you wanted all of the above, you would be leaving with an armload’s worth of reading material costing you potentially a hundred dollars or more. Fast forward to 2015 and you can consume all of that and more from your smartphone while walking down the street, full text search the entirety of it, share your thoughts about each article with someone from the other side of the globe, and read almost all of it without having to pay a penny. Nearly from the dawn of the modern web browser news websites began experimenting with the new online medium – by 1994 America Online was already maintaining a top 10 list of online news websites and the UK’s first online newspaper went live in November of that year. These early sites offered more and more of the content from their print editions, but instantly accessible from anywhere in the world for free, forging the mindset we hold today of news being free rather than a precious commodity we have to pay for. For over two decades now the news industry has embraced the open and free ecosystem of the web, publishing a large fraction of their content for anyone in the world to consume for free, and making money from selling advertisements around the margins of each page. In fact, by 2010 nearly half of the world’s news content was accessible via the web as measured through BBC Monitoring sourcing. Percent worldwide news accessible online in 2010 as estimated through BBC Monitoring sourcing... [+] (Credit: Kalev Leetaru) The debut of Google News in 2005 suddenly made it possible to full text search all of this news material as it was published, creating an index of the world’s online news like search engines had done for the general-purpose web. The news media seemed solidly on a path to openness, where all news would be free, searchable, and openly accessible to anyone with a web browser. 
Yet, the web today is undergoing a fundamental transition in which it is fragmenting into privately-held walled gardens detached and inaccessible from the open internet. According to the CTO of the CIA, as of 2013 Facebook held more than 35% of the world’s photographs and added half a petabyte of data each day, much of it accessible only within Facebook and sealed off from the open web and search engines like Google. Increasingly, these platforms are fighting to bring the free and open news world within their closed walls. By 2013, nearly 30% of the American population used Facebook to find news coverage, while in 2015 two thirds of online users in the US get their political news from Facebook and the same percentage of Facebook and Twitter users use the platforms as news outlets. News shared on Twitter emphasizes “national government and politics … international affairs … business … [and] sports” significantly more than news shared on Facebook, reflecting a divide in the kinds of topics users of the two platforms are exposed to. Within these walled gardens articles are presented to users through massively complex algorithms that operate without any visibility into the decisions they make and why. In some cases company scientists purposely manipulate those algorithms to study users without consent like a giant colony of lab rats. In other cases, the underlying data used to surface articles are based on a bias away from negative news – in the words of Facebook founder Mark Zuckerberg, adding the ability to dislike news is “not something that we [Facebook] think is good for the world. So we’re not going to build that.”  The platform where 30% of the American population turned to news in 2013 has unilaterally determined that indicating dislike for a story is not something it thinks is “is good for the world” so it does not make it available. In turn, its algorithms are unable to take such information into account. (Though it should be noted that Facebook may finally be rolling out the first steps towards such capability.) The impact of Facebook’s bias towards “happy” news was on full display last August as Twitter was filled to the brim with images of the violent unrest in Ferguson, while Facebook was a bastion of happy images of people pouring buckets of ice water over their heads. Today you can still turn to Twitter, Google News, or newspapers’ own websites to learn about Ferguson even if Facebook's algorithms minimize its visibility, because Facebook merely links to the articles rather than exclusively hosting them.  In fact, Facebook has gone out of its way to reassure publishers using its new Instant Articles service that they are free to publish their articles elsewhere at the same time. Yet, in a world where Wired takes a first step into offering content exclusively through one platform, where it is impossible to access that material through any other means, how long will it be before publishers begin to do the same with Facebook and other platforms and let them decide who should see it?   On the one hand, Wired’s Apple News exclusive can be seen as simply a marketing gimmick designed to test out new content features of iOS 9. On the other hand, it represents a continued march towards a reversal of 20 years of universal accessibility of news and placing that content not back in the hands of journalists, but in the hands of technology companies not bound by the norms of journalism. 
Imagine a world in which all news is served through a single technology company, where the news you are allowed to read about what is happening around the world is determined by algorithms tuned to what the company’s charismatic founder thinks is “good for the world” and in your best interests to see, and where user interfaces are designed to emphasize happy and cheerful news over strife and sadness. That’s the world we’re heading towards, a world that would make George Orwell proud.
9e2342d8f214249ae38ead190a4e172b
https://www.forbes.com/sites/kalevleetaru/2015/10/05/mapping-the-global-flow-of-refugees-through-news-coverage/
Mapping The Global Flow of Refugees Through News Coverage
Mapping The Global Flow of Refugees Through News Coverage A Syrian refugee man carrying his daughter rushes to the beach as he arrives on a dinghy from the... [+] Turkish coast to the northeastern Greek island of Lesbos, Sunday, Oct. 4 , 2015. The U.N. refugee agency is reporting a “noticeable drop” this week in arrivals of refugees by sea into Greece - as the total figure for the year nears the 400,000 mark. Overall, the UNHCR estimates 396,500 people have entered Greece via the Mediterranean this year with seventy percent of them are from war-torn Syria. (AP Photo/Muhammed Muheisen) This past June the United Nations High Commissioner for Refugees (UNHCR) released its annual report on global refugee trends, concluding that in 2014 more than 60 million people worldwide had been forced from their homes – the highest number ever recorded. The plight of refugees has become an increasing foreign policy issue, from the campaign trail in the United States to the political halls of Europe to the streets of South Africa. Most global statistics on refugees are published by UNHCR and are based on its own data and reporting by governments. Such reporting, however, is a slow and manually-intensive process. Could the world’s news media offer a new approach to tracking global refugee flows, potentially even in near-realtime? The research staff at Banco Bilbao Vizcaya Argentaria (BBVA)’s Cross-Country Emerging Markets Unit recently published a report “The Refugee Crisis: Challenges for Europe” that used global news media coverage from my GDELT Project to track the flow of refugees throughout North Africa, the Middle East, and Western Europe. The animation below, from their report, shows refugee inflows (orange) and outflows (red) reported in news coverage from January 14 to June 15, 2015. Media citations of refugee inflows (orange) and outflows (red) (Credit: BBVA Research) Turkey, which features prominently on the map above as a net inflow country, is home to 45% of all Syrian refugees in the region, while Iran plays host to one of the largest and longest-staying refugee populations in the world. Lebanon’s role in absorbing nearly 1.2 million Syrian refugees is visible as a stark orange outlier in a sea of red from neighboring Syria. The more complicated story in Sudan reflects both internal refugee camps housing inflows from other portions of the country and the significant outflows to neighboring countries. Yet, such a map paints only part of the picture – the more important question logistically is the network of transit corridors capturing the points of origin and destination for refugees as they move between countries. With the release of its June report, the UNHCR released a series of visualizations capturing the major origin-destination pathways of refugees in 2014. Those visualizations are based on the UNHCR’s own refugee processing and reports from governmental agencies. While they are likely to be highly accurate, producing them requires enormous data collection regimes and thus they are very slow to update and cannot easily be adapted to map other types of flows, like illegal wildlife products. Using my GDELT Project, which monitors local media around the world and live-translates it from 65 languages, along with Google ’s BigQuery system and Gephi’s visualization software, the network visualization below offers a glimpse through the more than 343,000 articles relating to refugees monitored by GDELT from February 19 to July 21 of this year. 
Each article was scanned for any mention of a geographic location, with countries connected together based on how many articles they were mentioned in together. The final visualization below shows the countries mentioned most frequently together in coverage about refugees, with the thickness of the lines between them reflecting how often each pair of countries were mentioned together in that context. Countries that appear more frequently with each other than with other countries are colored the same color, capturing the distinct clusters of potential transit corridors. Network diagram of countries appearing most frequently with each other in global news coverage of... [+] refugees (Credit: Kalev Leetaru) At the center of the network stands Syria, “the world’s single largest driver of displacement” as of 2014. Italy occupies a unique role in the network, with connections from Syria to other major European countries, but also a strong connection from Syria to Italy and then from Italy to the other European nations. This reflects that Italy acts as a central hub for refugees crossing the Mediterranean, before they transit to other European countries. Nigeria is tightly connected with Niger, Chad, and Cameroon, which together housed more than 200,000 Nigerian refugees as of May 2015. Australia forms its own cluster with South East Asia, reflecting the source of many of its refugees. It is also strongly connected to the small island of Nauru, which is home to a key immigration detention center. A smaller cluster of Malaysia, Bangladesh, Thailand, Indonesia, and Burma captures the “game of human ping pong” of boat migrants fleeing from or attempting to migrate between the countries. Though the network reflects only which countries are mentioned alongside each other, rather than any deeper semantic understanding of how they are connected, the diagram above seems to reflect fairly well the major known transit corridors. Rearranging the network above to place the countries geographically on a map, the regional grouping of refugee flows becomes clear. Compared with UNHCR’s network diagrams, the map below shows much stronger regional clustering. Latin America is not represented at all, other than a minor link between Mexico and the United States. UNHCR data also shows very little refugee flow to or from the continent, making it an outlier. Geographic network diagram of countries appearing most frequently with each other in global news... [+] coverage of refugees (Credit: Kalev Leetaru) While not a perfect match for the observational and government-reported data of UNHCR, this approach of mapping refugee flows through the co-occurrence of country names in media reporting of refugees nonetheless appears to capture many of the major known transit routes for refugees. Most importantly, its reliance on open news data, rather than government and NGO reporting data, means it can be updated in near-realtime and adapted to a variety of other topics like wildlife crime or human trafficking. I would like to thank Google for the use of Google Cloud resources including BigQuery and thank BBVA for their refugee map.
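For readers who want to experiment with the general technique, here is a minimal sketch of the country co-occurrence approach described above, assuming you have already extracted the set of countries mentioned in each refugee-related article. The sample articles, pair counts, and use of the networkx library are illustrative stand-ins, not the actual GDELT/BigQuery/Gephi pipeline behind the visualizations.

```python
# Minimal sketch of the country co-occurrence approach described above.
# Assumes each refugee-related article has already been reduced to the set of
# countries it mentions; the sample data below is illustrative, not real GDELT output.
from collections import Counter
from itertools import combinations

import networkx as nx

articles = [
    {"Syria", "Turkey", "Greece"},
    {"Syria", "Lebanon"},
    {"Syria", "Italy", "Germany"},
    {"Nigeria", "Niger", "Chad", "Cameroon"},
    {"Australia", "Nauru", "Indonesia"},
]

# Count how often each pair of countries is mentioned in the same article.
pair_counts = Counter()
for countries in articles:
    for a, b in combinations(sorted(countries), 2):
        pair_counts[(a, b)] += 1

# Build a weighted graph; edge weight = number of co-mentioning articles.
graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

# Rank the strongest country pairs, analogous to the thickest edges in the
# network diagrams referenced above.
for (a, b), weight in pair_counts.most_common(5):
    print(f"{a} -- {b}: {weight} articles")

# nx.write_gexf(graph, "refugee_countries.gexf")  # optional export for Gephi
```

In a graph built this way, edge weights map directly onto the line thicknesses in the network diagrams, and a community-detection pass over the same graph would yield the colored clusters of likely transit corridors.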
daac84ea54832321895d548e8062dc10
https://www.forbes.com/sites/kalevleetaru/2016/10/12/geofeedia-is-just-the-tip-of-the-iceberg-the-era-of-social-surveillence/
Geofeedia Is Just The Tip Of The Iceberg: The Era Of Social Surveillance
Geofeedia Is Just The Tip Of The Iceberg: The Era Of Social Surveillence Rio de Janeiro's Integrated Command and Control Centre (CICC), which combines social media... [+] monitoring, traffic cameras and other data streams in a glimpse of the digital future of realtime monitoring. (YASUYOSHI CHIBA/AFP/Getty Images) Headlines swirled yesterday with the release of an ACLU report that showed American law enforcement using social media monitoring firm Geofeedia’s services to monitor protest activity. In the ensuing aftermath, Facebook, Instagram and then Twitter all suspended Geofeedia’s access to their data feeds. Yet, lost in all this breathless tech press hyperbole is the fact that Geofeedia is just one of a myriad companies offering these services to law enforcement, military and intelligence agencies both in the United States and abroad. While Geofeedia happened to be the one whose contracts were surfaced by the ACLU, the use of social media monitoring, including for real-time protest response and profiling of individuals, is fully entrenched in the modern surveillance state. Perhaps the most intriguing, but little discussed, element of yesterday’s story is that both Facebook and Twitter issued public statements claiming to be completely and utterly unaware of what one of their licensees had been doing with their data over a period of several years. For example, Twitter issued a public statement announcing “Based on information in the @ACLU’s report, we are immediately suspending @Geofeedia’s commercial access to Twitter data,” however it did not respond to a request for further comment. A Facebook spokesperson responded by email with the statement “We terminated Geofeedia’s access to Instagram’s API and the Topic Feed API because it was using these APIs in ways that exceeded the purposes for which they were provided” and said “the Topic Feed API is designed specifically for media and brand purposes.” However, the company did not respond when asked to clarify what was meant by one of the Geofeedia emails surfaced by the ACLU that said “we recently entered a confidential legally binding agreement with Facebook. Over time, Facebook will be reactivating more and more data to Geofeedia throughout our partnership.” Both Twitter and Facebook did not respond to repeated requests for comment as to whether they would be terminating access to their data to all other companies that use their data to provide surveillance capabilities to law enforcement. Therein lies the most fascinating element of this story – it is difficult to imagine that neither Facebook nor Twitter had any idea of any kind that one of their licensees was using their data to provide surveillance capabilities to law enforcement. Geofeedia is far from a James Bond cloak-and-dagger defense contractor operating in the shadows – it is a widely known commercial company widely and openly touting its capabilities through numerous case studies and receiving considerable media coverage that specifically discussed its law enforcement clients, while the FBI openly issued an RFP for its services as part of its larger interest in social monitoring. In short, even the most basic of web searches easily turned up that law enforcement was a client of GeoFeedia’s and this thus raises the question of how Twitter and Facebook never noticed that a high-profile subscriber of their services, especially one that allegedly was signing additional special licensing agreements with Facebook, would never have appeared on their radars. 
Yet, it is Twitter and Facebook’s massive data ecosystems that make this scenario entirely possible. Both companies make their data available to a myriad of companies that offer brand, topic and other monitoring services to innumerous clients. While Twitter and Facebook made a big splash with their very public suspension of Geofeedia, this will have little impact on the increasing use of social media for surveillance, as law enforcement will simply switch to any of the myriad other companies that provide these services. While the suspension may temporarily inconvenience the police departments who contracted with Geofeedia and may make it more difficult for smaller police departments to access this technology, larger departments and the federal government already widely deploy such monitoring systems. Stopping law enforcement use of social media is simply impossible in that there are so many monitoring companies out there and the government has any number of myriad contractors and shell companies it can contract services through. In short, the ACLU was able to turn up Geofeedia’s contracts because police departments purchased its services directly – in future they are more likely to use third party contractors who in turn purchase monitoring services through yet other companies, providing a much more impenetrable shroud around their monitoring access. After all, the US Government has many ways of purchasing services that don’t include “US Government” on the signature line. Short of carefully auditing every single data request from every single user of every single authorized social media monitoring company and looking for patterns suggestive of surveillance, there is simply no way that social media platforms can stop their data from being used for these purposes. In fact, I’ve spoken with a number of entities whose access to a company's social data streams was curtailed for one reason or another and they simply switched to a different monitoring company that provided the same data and service. In short, trying to stop unauthorized use cases of social media data is simply a giant unwinnable game of whack-a-mole. Reinforcing this theme, there is not a big data meeting or contractor expo day I’ve attended in DC that has not included at least one company offering social media surveillance capabilities extremely similar to Geofeedia’s to the law enforcement, intelligence and military communities, with most of them specifically mentioning protest triaging and agitator profiling as key focal areas. The academic community has also focused extensively in this area, both via DOD directly and through other federal funds, including a lot of research on profiling individuals through social media, building psychological profiles or estimating sensitive attributes like sexual preference or political views. Many of these approaches are readily integrated back into the government either through university commercialization efforts or through contractors seeing a paper and marketing their own implementation. A quick web and literature search turns up countless monitoring firms and contractors publicly touting their social surveillance capabilities or their government contracts. Harvard faculty startup Crimson Hexagon, for example, is used by the State Department for counter-terrorism monitoring and its founder has been cited in the New York Times as presenting on the company’s tools at CIA headquarters, while last year the US Army issued an RFP for the company’s services. 
When asked whether it makes its services available to law enforcement, intelligence or military clients (including those of foreign governments), whether its tools are capable of the same surveillance services offered by Geofeedia, and whether the company had processes in place to prevent such use, a company spokesperson emailed that it was declining to respond. When asked about the US Army RFP, the company responded that it does not comment on “our customers and potential customers.” Putting this all together, the bottom line is that the tech press’s portrayal of Geofeedia as an isolated case of social monitoring gone wrong could not be further from the truth: the massive data ecosystems provided by the major social media platforms make it impossible for them to prevent this kind of social surveillance. Welcome to the brave new world of 1984.
15be69bbe8ae92211db3b6ad1ad52f5b
https://www.forbes.com/sites/kalevleetaru/2016/10/31/the-dyn-ddos-attack-and-the-changing-balance-of-online-cyber-power/
The Dyn DDOS Attack And The Changing Balance Of Online Cyber Power
The Dyn DDOS Attack And The Changing Balance Of Online Cyber Power A worker is silhouetted against a computer display showing a live visualization of the online... [+] phishing and fraudulent phone calls across China during the 4th China Internet Security Conference (ISC) in Beijing. (AP/Ng Han Guan) As the denial of service (DDOS) attack against Dyn shook the internet a little over a week ago, it brought to the public forefront the changing dynamics of power in the online world. In the kinetic world of the past, the nation state equivalent was all-powerful, since it alone could raise the funds necessary to support the massive military and police forces necessary to command societies. In the online world, however, the “armies” being commanded are increasingly used against their will, massive networks of infected drone machines formed into botnets. The cost of acquiring, powering, cooling, connecting and operating these virtual soldiers are borne by private individuals and corporations, with criminal enterprises able to co-opt them into massive attack botnets. What does this suggest is in store for the future of the online world? The notion of using large botnets to launch globally distributed DDOS attacks is by no means a new concept and in fact has become a hallmark of the modern web. Indeed, I remember as a freshman in college 16 years ago seeing a new Linux server installed where I worked one morning and seeing the same machine being carted off by the security staff that afternoon after it had been hacked and converted into a botnet drone just a few hours after being plugged in. What makes the attack against Dyn so interesting is the scale at which it occurred and its reliance on compromised Internet of Things devices, including DVRs and webcams, allowing it to command a vastly larger and more distributed range of IP addresses than typical attacks. Making the attack even more interesting is the fact that it appears to have relied on open sourced attack software that makes it possible for even basic script kiddies to launch incredibly powerful attacks with little knowledge of the underlying processes. This suggests an immense rebalancing in the digital era in which anyone anywhere in the world, all the way down to a skilled teenager in his or her parent’s basement in a rural village somewhere in a remote corner of the world, can take down some of the web’s most visible companies and wreak havoc on the online world. That preliminary assessments suggest that the attack was carried out by private actors rather than a nation state only reinforces this shift in online power. Warfare as a whole is shifting, with conflict transforming from nations attacking nations in clearly defined and declared geographic battlespaces to ephemeral flagless organizations waging endless global irregular warfare. In the cyber domain, as the battleground of the future increasingly places individuals and corporations in the cross hairs, this raises the fascinating question of how they can protect themselves? In particular, the attack against Dyn largely mirrored an attack against Brian Krebs’ Krebs on Security blog last month, which raises the specter of criminals and nations being able to increasingly silence their critics, extort businesses and wreak havoc on the online world, perhaps even at pivotal moments like during an election day. 
In the physical world, the nation state offers protection over the physical assets of companies operating in its territories, with military and police forces ensuring the sanctity of warehouses, office buildings and other tangible assets. However, in the digital world, state hackers from one country can easily compromise and knock offline the ecommerce sites of companies in other nations or leak their most vital secrets to the world. In the case of Brian Krebs’ site, his story thankfully has a happy ending, in which Alphabet's Jigsaw (formerly Google Ideas) took over hosting of his site under their Project Shield program. Project Shield leverages Google’s massive global infrastructure to provide free hosting for journalistic sites under sustained digital attack, protecting them from repressive governments and criminal enterprises attempting to silence their online voices. Looking to the future, what options do companies have to protect themselves in an increasingly hostile digital world? Programs such as the Project on Active Defense by George Washington University’s Center for Cyber & Homeland Security are exploring the gray space of proactive countering and highly active response to cyberattacks. For example, what legal and ethical rights does a company have to try and stop an incoming cyberattack? Can it “hack back” and disable key command and control machines in a botnet or take other active approaches to disrupt the incoming traffic? What happens if a company remotely hacks into a control machine to disable it and it turns out it is an infected internet-connected oven in someone’s house and in the process of disabling it, the oven malfunctions and turns to maximum heat and eventually catches fire and burns the house down? Is the company responsible for the damage and potential loss of life? What legal responsibilities and liabilities do device manufacturers have to develop a more secure Internet of Things? If a company in 2016 still sells devices with default administrative passwords and well-known vulnerabilities that make them easy prey for botnets, should the companies bear the same burden as any other consumer safety issue? As over-the-air remote security updates become more common, should legislation be passed to require all consumer devices have the ability to be remotely updated with security patches? As the modern web celebrates more than 20 years of existence, somewhere over those last two decades the web has gone from a utopia of sharing and construction of a brighter future to a dystopia of destruction and unbridled censorship. Will the web grow up and mature to a brighter security future or will it descend into chaos with internet users fleeing to a few walled gardens like Facebook that become the “safe” version of the web? Only time will tell.
c6c1636205623fe591478d7ff0edb79d
https://www.forbes.com/sites/kalevleetaru/2017/01/13/when-cybersecurity-meets-physical-security/
When Cybersecurity Meets Physical Security
When Cybersecurity Meets Physical Security A display at the Big Bang Data exhibition at Somerset House which captures wireless data from the... [+] surrounding environment and displays it. (Peter Macdiarmid/Getty Images for Somerset House) In a recent interview with CNN, the Director of the Secret Service noted that his organization is increasingly focusing on the cyber security of the physical facilities visited by the President of the United States as part of its duty to protect him. This raises the fascinating question of just how much cybersecurity will become part of the physical security conversation in 2017. As I wrote in 2015, the landscape of cyberwarfare is rapidly changing, with a growing emphasis on the targeting and disruption of physical civilian critical infrastructure like the power grid. The nation of Ukraine has already experienced firsthand the results of cyber-induced blackouts, proving these approaches have left the realm of speculation and are now entering the wild. To date most of these attacks have focused on national infrastructure as part of larger simmering conflicts and their use in surgical targeting of particular high-ranking individuals has been more limited. Yet, it is only a matter of time before we see such applications, as the Secret Service director’s comments reflect. Imagine a major head of state on an official visit to a foreign country or even a visit by the President of the United States to another part of the US. Security forces go to great lengths to construct a physical security cordon and maintain exclusive control over who is able to enter that controlled space. Yet, the growing Internet of Things means that more and more the various objects in that controlled space, from the light bulbs overhead to the elevators to the fire alarms to the traffic cameras are all remotely accessible. Imagine a foreign intelligence service that wanted to disrupt and embarrass a foreign head of state visiting another country. Today they might hack into the local police offices in the city being visited and monitor email accounts and document archives to locate official security plans and schedules for the visit to plant paid protesters holding large signs along the motorcade route. But, take this a step further and consider for a moment the new factor of the vast Internet of Things that envelopes that visit. Those hackers could monitor all of the traffic cameras in the area to watch the head of state’s movements in realtime and monitor his or her schedule second by second. As he enters a building, the local CCTV cameras throughout that building could be used to surveil his movements and compile an intelligence list of everyone he meets with. Yet, here’s where things get far more worrying. When he steps on the elevator to change floors, those hackers could disable the elevator system and trap him, disrupting his visit and generating media images of him being helplessly dragged up a ladder to safety. Or they could trigger the fire alarms or overheat a piece of equipment to cause a real fire and activate the sprinkler system, leading to images of a soaked and miserable leader cutting his visit short. Given that most modern office buildings have switched to electronic access controls, those hackers could simply deactivate all locks across the building, instantly rendering the entire facility unsecured, doors flapping in the breeze and causing mass panic among his guards. 
Or, they could move to paralyze the entire city, cutting power to every major building, while activating fire alarms across the city and manipulating traffic signals to cause massive traffic accidents and trap first responders helplessly across the city and preventing him from reaching his next appointments. Instead of a head of state, one could imagine the ultimate jewelry theft in which a thief uses building CCTV cameras to monitor the building for the right moment and then turns off the cameras, disables all locks across the building, disables the emergency generator and then cuts power to the entire building, plunging it into darkness. Once the theft is complete, the remote hacker could even trigger the fire alarm and sprinkler system, causing the building to empty into the streets and allowing the thief to simply blend in with the rest of the occupants evacuating into the nighttime streets below. Putting this all together, there is thankfully no record to date of a cyberattack targeting the physical infrastructure as part of an attack on a head of state or a sophisticated jewelry caper, but as the physical world increasingly becomes just a bunch of internet connected devices, we must start contemplating a future in which physical security becomes one with cyber security.
c212ba1da05c3299faa06a38fd6d8002
https://www.forbes.com/sites/kalevleetaru/2017/02/02/lies-damned-lies-and-statistics-how-bad-statistics-are-feeding-fake-news/
Lies, Damned Lies And Statistics: How Bad Statistics Are Feeding Fake News
Lies, Damned Lies And Statistics: How Bad Statistics Are Feeding Fake News A display of financial data. (Martin Leissl/Bloomberg) As Mark Twain famously popularized in the public consciousness, “There are three kinds of lies: lies, damned lies, and statistics.” Whether through malice, poor training or simple ignorance, “bad statistics” has a rich and storied legacy stretching back as long as humans have been counting things. Countless books, papers and blogs chronicle the myriad ways in which data and statistics are abused to lend false support to arguments in fields ranging from the academic world to public policy. As data-driven journalism is on the rise and calls continue to grow for increased evidence-based “fact checking,” it is worth stepping back to ask how much of the “fake news” that surrounds us today is based at least in part on bad statistics. Not a day goes by without a flurry of data-driven memes passing through my Facebook news feed, sailing by on my Twitter stream or landing as alerts in my email inbox that cite what appear to be reputable datasets and using them to offer surprising conclusions, typically wrapped up in a mesmerizing infographic. Yet, when I pick any of these memes at random and delve into it, I find that it is the rare meme indeed that stands up to statistical scrutiny. Some memes I come across are easy to discard as outright fabrications, citing non-existent datasets, non-existent authors, non-existent journals or citing real (typically very prominent) researchers and institutes in the field, but who when contacted say they’ve never heard of the research they are claimed to be the author of. Textual memes are the most common in this category, since it requires so little effort to send out a tweet along the lines of “A recent Gallup poll states that 80% of Americans believe that climate change is false.” Such memes can be made to look more authoritative by whipping up a quick graph in Excel. For such visual memes, sometimes just right-clicking on the graph in the Google Chrome browser and selecting “Search Google for image” will turn up fact checking sites or academic blogs who have researched the graph and confirmed it to be a fabrication. I’ve even seen a few memes that have taken a legitimate “science-y looking” graph from a paper in one field and use it as an illustration for a claim in a different field. Just recently I saw a meme go by in my Facebook feed that featured a graph of an exponential curve with all sorts of statistical measures in the background that was used to illustrate a claim about global warming trends over the past 50 years. The odd part is that X and Y axes were cut off and some of the annotations on the graph related to the medical field. In fact, after a bit of searching I was able to find that the author of the meme had apparently just grabbed a nice exponential-looking graph from a completely unrelated medical paper (perhaps found via a quick Google Scholar search). The rise of preprints, postprints and academic publishing through blogs has had a dangerous effect on scientific trust, accustoming the general public to seeing a news article discussing a new scientific advance that links to a preprint of the article on the faculty member’s personal blog, rather than on the journal’s website. 
This means that when a member of the public sees a meme that cites an academic paper supposedly published in the latest issue of Nature, but the link goes to a PDF on a random website that purports to be a Harvard professor’s personal blog, many readers won’t blink an eye and simply trust that the paper really is a preprint of a new Nature article by a Harvard professor. Muddying the waters even further, the rise of predatory publishers and fly-by-night journals means that a meme could link to a paper on an actual professional-looking journal website with a prestigious-sounding name and listing many prominent faculty on its editorial board (who may not even be aware their names are being used). Peer review standards are often essentially non-existent at such journals, meaning nearly any submission is accepted. It thus takes little more than a quick Google search these days to locate an academic paper published in a prestigious-sounding journal that makes any argument you want and claims to have the data, statistics and citations to support that argument rigorously and without question. To the average member of the public, "peer review" is an unknown concept and a paper published in Nature is not any more reputable than one published in The Journal Of Prestigious And World Changing Research. However, the greatest single contributor to data-driven “fake news” are the myriad statistical fallacies that so easily befall even academics in fields that do not emphasize rigorous statistical training (though even stats-heavy fields are not immune to statistical arguments). Beyond the obvious candidates like suggestions of correlation implying causation and improper use of statistical techniques, perhaps one of the greatest enablers of fake news in the memes I come across is sampling bias and selective definitions. As but one example, definitions of what precisely constitutes a “terror attack” are notoriously controversial. Was something a “mass shooting,” a “terrorist attack,” or an “act of mental illness?” I recently saw one meme that argued that there had never been another act of terrorism on US soil since 9/11 because all subsequent US attacks were the result of mentally ill individuals, rather than terrorism. Another recent meme I saw claimed that no American had been injured or killed by a foreign-born attacker on US soil and only in tiny print in a small footnote was there a statement limiting the time frame of analysis so as to not include the 9/11 attacks, the San Bernardino attack and other cases. One national poll I saw during the presidential campaign season made bold claims about national support for Clinton, but in its methodology revealed that more than 80% of its sample size were Democrats and Independents. This raises the critical question – would we label these as “fake news,” as “factually accurate but misleading” or as “absolutely true?” Therein lies one of the great challenges of the “fake news” debate – many of the data-driven memes (and news articles) swirling about are, on purely technical merits, factually accurate based on the carefully-constructed population sample they use. The question is whether something that is factually accurate can also be labeled as “fake news” when it comes to misleading the public, given that the results of even the best-run experiments are all too quickly separated from the myriad caveats that temper those conclusions. 
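To make the sampling-bias problem concrete, here is a minimal Python sketch, using invented party-support rates rather than data from any actual poll, of how a respondent pool dominated by one party can produce a headline number far from what a balanced sample would show.

    import random

    # Hypothetical candidate-support rates by party affiliation (illustrative only).
    SUPPORT_BY_PARTY = {"dem": 0.85, "rep": 0.10, "ind": 0.50}

    def run_poll(party_mix, n=1000, seed=42):
        """Simulate one poll whose respondents are drawn according to party_mix."""
        rng = random.Random(seed)
        parties = list(party_mix)
        weights = [party_mix[p] for p in parties]
        supporters = 0
        for _ in range(n):
            party = rng.choices(parties, weights=weights)[0]
            if rng.random() < SUPPORT_BY_PARTY[party]:
                supporters += 1
        return supporters / n

    # A roughly balanced sample versus one dominated by Democrats and Independents.
    balanced = run_poll({"dem": 0.33, "rep": 0.33, "ind": 0.34})
    skewed = run_poll({"dem": 0.55, "rep": 0.20, "ind": 0.25})
    print(f"balanced sample: {balanced:.0%} support")
    print(f"skewed sample:   {skewed:.0%} support")

Both runs are "factually accurate" summaries of their own respondents; only the second one's composition turns a roughly even race into an apparent landslide.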
A surprising poll that clearly indicates an overwhelming sampling bias towards Democrats is eventually transformed into a headline devoid of any mention of partisan skew. A claim that there has never been a terror attack on US soil since 9/11 spreads through social media and sheds its footnote clarifying that it refers to only a small portion of that 15-year period. How do we handle statistical fallacies in a world in which few citizens (and even academics) have even a basic understanding of statistics or data? Even more troubling, how do we handle factually true statements that utilize such a carefully constructed population sample that their argument is almost meaningless? They can’t technically be flagged as “fake news” since they are factually correct, but it is also likely that as they spread those footnotes will be lost. If a factoid is shared without its original caveats does that then make it false? If a meme simply states “There has never been a terror attack on US soil since 9/11” and the footnotes clarifying the time periods and definition of “terror attack” it refers to have long since been lost, does that make the meme false or is the meme still true since it is factually correct under the specific assumptions and population construction used by its original author? These are fascinating questions as we confront the duality of vastly increased access to data and a data-illiterate population that lacks the statistical training to understand how to properly use that data to draw conclusions. Adding to this volatile mix, social media ensures that even the most skewed factoid can be extracted from a dataset and go viral, quickly losing connection to the myriad definitional caveats that enabled it to cling to truthfulness. Even when using simple techniques like counts over time, issues like data normalization and the unique nuances of dataset construction are particularly perplexing even to those with deep statistical backgrounds, meaning that even seasoned data journalists regularly publish findings that are deeply flawed and lead to further false and misleading headlines and interpretations. Putting this all together, as I argued in December, we cannot begin to fight fake news until we focus on increasing society’s data and information literacy.
a5b9f8cf94f5feb287d4660a1e0b033b
https://www.forbes.com/sites/kalevleetaru/2017/02/10/what-wikipedias-daily-mail-ban-tells-us-about-the-future-of-online-censorship/
What Wikipedia's Daily Mail 'Ban' Tells Us About The Future Of Online Censorship
What Wikipedia's Daily Mail 'Ban' Tells Us About The Future Of Online Censorship Wikipedia's darkened front page protesting US anti-piracy laws in 2012. (PHILIPPE LOPEZ/AFP/Getty... [+] Images) Earlier this week The Guardian reported that Wikipedia editors had voted to “ban” the Daily Mail as a source for the online encyclopedia in all but exceptional circumstances and that the majority of current links to the news outlet would be replaced by links to other outlets. How was this decision made, what kind of data fed into this decision-making process and what does it tell us about the future of censorship and who decides what is “real” on the Internet, especially as social media platforms increasingly play the role of global censor? When I reached out to the Wikimedia Foundation for comment, they emphasized right at the beginning of their email that they did not agree with the Guardian’s use of the word “ban” to refer to the action, that instead links were merely “generally prohibited” except in rare circumstances. They repeated this several times in their correspondence, each time emphasizing that it was a “prohibition” instead of a “ban.” However, at the end of the day, if you tell someone they are “prohibited” from linking to something and that if they do, you’ll likely go back and delete that link, that is a “ban” or a “blacklist” in any other word. The Foundation’s initial response also emphasized that this was a decision exclusively made by “volunteer editors around the world” and pointed to the official Request for Comment (RFC) page that was used to collect recommendations and to make the final decision prohibiting the Mail as a source. Yet, the very first comment on that page, posted by the user who initiated the entire process, claims that Wikipedia Founder Jimmy Wales himself supported the argument that the Daily Mail is an unreliable source and specifically cites Wales’ status as the founder of Wikipedia in arguing for the linking ban. When I pointed out this out to the Wikimedia Foundation, that despite their claims that this was an entirely organic and democratic process with no influence from the Foundation, that the site’s founder was so prominently cited as supporting the argument behind the ban, they backtracked slightly and acknowledged the role and influence their founder’s views would have on such a conversation, stating “Jimmy’s voice carries weight within the Wikimedia community,” but argued that “he is one of many voices that go into how decisions are made on the site.” Indeed, Wales has not always seen eye to eye with the other editors and administrators of his creation. In 2010 these tensions burst into the public eye when he used his special administrator privileges to unilaterally delete dozens of images from Wikipedia’s Wikimedia Commons image archive that had been identified as potentially qualifying as child pornography, along with thousands of other pornographic images accessible to children. This prompted a fierce and at times vitriolic backlash from Wikipedia’s other editors and administrators, which ultimately forced Wales to forfeit his special extended administrator privileges that had enabled him to override their democratic consensus to keep the images. This incident bears especial relevance to the Daily Mail blacklist in that this was also a case where the broader Wikipedia editorial community had reached a defacto consensus that it was acceptable for Wikipedia to host and display these sexualized images, including those depicting children. 
Despite this also being a democratic decision, Wales overrode the status quo of the site's editors and took action to remove the content. In essence, he stood as a special extrajudicial check and balance to the danger of groupthink among the relatively homogenous group of Wikipedia editors. Reading through the more than 405 arguments (and several hundred additional statements) for the revocation of Wales’ special administrator privileges in the aftermath of his 2010 actions, a common sentiment that emerges is that the editors themselves must reign supreme in the content of the site and that “community consensus” is meant to refer to the small elite cadre of Wikipedia editors, rather than the views of the Internet public and Wikipedia’s own users at large. Indeed, even statements against the revocation refer to the will of the small editorial community, rather than the Wikipedia’s incredibly diverse user community and not once is there a single mention on the page of holding an open poll linked from Wikipedia’s homepage and article pages for the entirety of all Wikipedia users to weigh in. The Wikimedia Foundation did not respond to a request for comment on this case or how the broader Wikipedia user community could register concerns and come together to override Wikipedia editors in similar cases today. In particular, if the broader Wikipedia user community determined that the Daily Mail should be restored as an allowable source on the platform, how would they go about having their voices heard and override this decision? It is especially noteworthy that in the present case, a decision as important as “generally prohibiting” links to an entire news source was made by a small elite group of editors that do not necessarily resemble the average user of the site. Most notably, as much as 80 to 90% of Wikipedia’s editors are male and 94% of the most active editors are believed to be men, while there is little geographic diversity, including little representation from Africa or Central Asia. When I asked the Wikimedia Foundation how it ensures that the decisions of its editors reflect broader societal consensus, rather than an insular elite group of largely Western men, the Foundation did not directly respond, instead arguing that such decisions are democratic since they reflect the views of multiple editors, while avoiding the question of whether those views reflect the broader Wikipedia user community. Given the immense gender disparity among Wikipedia’s editors, I asked the Foundation what would stop those same editors from determining that feminist or female-oriented publications are similarly “unreliable” and enacting a similar prohibition on linking to any source determined to advocate for women’s rights or which supports feminist culture. While providing responses to my other questions, the Foundation notably did not respond to this query or a follow up to it. When asked why Wikipedia had focused so much effort on the Daily Mail, while countless other state-owned media outlets controlled and operated by repressive governments and which have been widely criticized for false reporting by the broader journalism community, are still permitted as sources with no restrictions of any kind, the Foundation responded only that a single editor had initiated the process to prohibit the Mail and that any editor could similarly initiate action to prohibit other sites, but that it would not comment further. 
This raises the question of just how precisely the Wikipedia editorial community came to the consensus view that the Daily Mail was not reliable enough of an outlet to be permitted as a general source for the encyclopedia. The Foundation responded that the entirety of the decision-making process was an internal editorial poll in which 75 Wikipedia editors participated, 50 of whom voted to blacklist the outlet and five administrators who collated the poll results and made the final decision to enact the ban. Out of the billions of Internet users who come into contact with Wikipedia content in some way shape or form, just 50 people voted to ban an entire news outlet from the platform. No public poll was taken, no public notice was granted, no communications of any kind were made to the outside world until everything was said and done and action was taken. This intensely insular nature of Wikipedia’s decision making process was reflected in how unprepared the Wikimedia Foundation was to the intense public discussion the action launched – responding that it had been completely inundated with media requests to the point of being unable to respond in a timely fashion and that it had not prepared any formal statements or FAQs on the matter. Ironically, Jimmy Wales himself wrote an opinion piece for The Guardian just last week in which he decried the notion of a small group of editors at Facebook and Twitter deciding what is reliable or not: “none of us is comfortable with the social media giants deciding what’s valid or not.” When I asked the Foundation for comment on this, they responded that the Foundation saw a difference between a small group of editors at a commercial company making a decision on behalf of its users and a small group of editors at a non-profit making decisions for its users. When pressed on just how they saw the two cases differing, the Foundation demurred, noting instead that users who objected to a policy could use the “Talk” page for an article to register their concerns, while sidestepping the fact that the average user of Wikipedia likely does not even know what a “Talk” page is, let alone spend their time wading through all of its commentary after reading an article. What then was the incontrovertible evidence that those 50 Wikipedia editors found so convincing as to apply a “general prohibition” on links to the Daily Mail?  Strangely, a review of the comments advocating for a prohibition of the Mail yields not a single data-driven analysis performed in the course of this discussion. In fact, the “fact checking” stage of the prohibition is perhaps best summed up by the user who proposed the prohibition in the first place: “A list of reasons why would be enormous, it doesn't need reiterating, the paper is trash, pure and simple.” Some comments point to specific cases where a Mail article was later corrected or retracted, while others note that all news outlets have experienced cases where they’ve had to retract articles. For a decision as weighty as placing a general prohibition on linking to an entire news outlet in Wikipedia, one might have expected to see at the very least a rough data analysis where someone had compiled a spreadsheet of a randomized sample of articles from the Mail homepage each day for a month and manually reviewed each to determine whether it was later corrected, retracted or rebutted by other press outlets. 
One might also have expected to see a similar control study performed for at least one other major British outlet such as The Guardian to create a comparison sample to see whether the Mail's numbers were out of the ordinary for the British press. Instead, not one single data-driven study was cited in the decision to place a prohibition on linking to the Mail. To put this into context: the absolute entirety of the body of evidence used to place a blanket prohibition on the Mail was that out of the billions of Internet users that come into contact with the platform's content, 50 people said anecdotally that they disliked the newspaper for unspecified reasons. We have endless arguments about Facebook and Twitter's control over what we see online, but at the end of the day if just 50 people can make a decision on behalf of all Wikipedia users worldwide based purely on their personal beliefs, without a single piece of hard data supporting that decision, how are we to ever again criticize how social media companies make their decisions regarding what is permitted on their platforms? Putting this all together, the Internet was supposed to bring the world together and give every citizen of the earth a voice – instead the same voices as before have simply become ever louder.
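For concreteness, the randomized review and control comparison described above could be summarized in a few lines of Python. The function below is a standard two-proportion z-test; the article counts are placeholders, not measurements of any real outlet.

    from math import sqrt

    def two_proportion_z(corrected_a, n_a, corrected_b, n_b):
        """Two-proportion z-test: do the two outlets' correction rates differ?"""
        p_a, p_b = corrected_a / n_a, corrected_b / n_b
        p_pool = (corrected_a + corrected_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Placeholder counts: imagine 300 randomly sampled articles per outlet, each
    # manually coded for whether it was later corrected or retracted.
    z = two_proportion_z(corrected_a=24, n_a=300, corrected_b=15, n_b=300)
    print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a real difference at the 5% level

Even this crude exercise would have put some evidence behind the decision; nothing of the sort appears in the editors' discussion.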
eef95468177c1164ca659ec9f4bf5698
https://www.forbes.com/sites/kalevleetaru/2017/02/17/did-facebooks-mark-zuckerberg-coin-the-phrase-fake-news/
Did Facebook's Mark Zuckerberg Coin The Phrase 'Fake News'?
Did Facebook's Mark Zuckerberg Coin The Phrase 'Fake News'? Facebook founder and CEO Mark Zuckerberg. (Credit: Pablo Porciuncula/AFP/Getty Images) Over the last few months the phrase “fake news” has rocketed into the global lexicon and even earned the title “word of the year” for 2016. This raises the question of who coined this iconic phrase, what its origins are and how it achieved such superstardom. While lavishing more than 6.5 million news articles on the topic over the last few months according to Google News, few in the media world have stepped back to ask where exactly this meme-ready term came from, other than comments like “the term hasn’t been around long” or that Google Trends data shows it entered the public lexicon sometime last October. The concept of false and misleading news coverage is as old as news itself and over the 20th Century became deeply enmeshed in the formalizing world of wartime propaganda. If we look at the Google NGrams Viewer, which tracks the popularity of words and phrases in books published over the last 200 years, we see that the literal phrase “fake news” took off at the start of World War I and reached its peak in the leadup to World War II, likely reflecting the rise of propaganda research and the impact that false information could have on societies. Popularity of the phrase “fake news” in books published over the last 200 years as captured in the... [+] Google Books NGram Viewer The term’s coincidence with popular and scholarly interest in wartime propaganda suggests that in this early incarnation, the phrase “fake news” was seen as a reference to deliberately falsified or misleading information used to attempt to manipulate the beliefs, emotions and views of the general public. This leaves unanswered how the term entered the modern popular consciousness. To explore these kinds of questions, this past December in collaboration with the Internet Archive’s Television News Archive I launched a new research tool called the GDELT Television Explorer that allows you to trace how a word or phrase has been used across nearly 2 million hours of American television news programming from January 2009 to present totaling more than 5.7 billion words of closed captioning from more than 150 distinct television stations across the nation. Using this tool we can instantly trace precisely when and how a given phrase entered the television news lexicon and the contexts in which it has appeared over the past 8 years. Using the Television Explorer, we see that the phrase “fake news” was nearly nonexistent on the national television networks Bloomberg, CNBC, CNN, FOX Business, FOX News and MSNBC from 2009 through Fall 2016 (though not all networks were monitored for the entire time period). In fact, other than a few brief spurts over that period, the phrase appears to have literally sprung into existence and become a viral sensation nearly overnight sometime in Fall 2016. Popularity of the phrase “fake news” on national American television news networks 2009-present as... [+] monitored by the Internet Archive’s Television News Archive and analyzed by the GDELT Television Explorer If we zoom into the far right of this timeline we see that the term appears to have burst into popularity November 11, 2016, as seen in the timeline below. Popularity of the phrase “fake news” on national American television news networks November... 
2016-present as monitored by the Internet Archive's Television News Archive and analyzed by the GDELT Television Explorer. That date is significant because it is the day after Facebook CEO Mark Zuckerberg famously proclaimed at the Techonomy conference that "Personally I think the idea that fake news on Facebook, of which it's a very small amount of the content, influenced the election in any way is a pretty crazy idea … I do think there is a certain profound lack of empathy in asserting that the only reason someone could have voted the way they did is they saw some fake news." The founder of the world's most popular social network pushing back on the assertion that false information on his platform played any meaningful role in the presidential election captivated the press and media scholars. Mentions of "fake news" on the major networks centered around Zuckerberg's comments for the following days, but died off until "Pizzagate," in which a shooter, motivated by reading false reports of a child sex ring at DC pizza restaurant Comet Ping Pong, arrived with an assault rifle to investigate the matter himself. This event suddenly made "fake news" real for the American public, and transformed it from a vague election conspiracy theory into a very real physical threat. Once again the phrase fell out of interest until President Trump's press conference on January 11, 2017 in which he labeled CNN "fake news." With this one statement the President-elect completed the transition of the phrase "fake news" from Zuckerberg's focus on maliciously and willfully false clickbait into a pejorative catch-all for journalistic error. Putting this all together, we find that while the phrase "fake news" appears to have been born and initially popularized in the leadups to World Wars I and II, it may well have been Facebook founder Mark Zuckerberg who entered it into our modern public lexicon. While last century "fake news" meant wartime propaganda designed to sway public opinion and demoralize enemy populations, Zuckerberg invoked the phrase to refer to this century's malicious clickbait content similarly designed to mislead and sway public opinion or simply generate ad revenue. Pizzagate cemented the phrase in the public consciousness while President-elect Trump helped weaponize it into an attack on the journalism world. Thus it is that while born in the world of propaganda and modernized to refer to maliciously false stories spread by social media, the term "fake news" has evolved through the help of the new presidential administration into an attack on journalism itself. Through the power of 5.7 billion words of television closed captioning we are able to visualize the real-time evolution of language at work.
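The underlying method is simple enough to reproduce on any timestamped text corpus. As a rough sketch, and assuming a hypothetical CSV export of dated caption text rather than the Television Explorer's actual interface, counting daily mentions of a phrase looks like this:

    import csv
    from collections import Counter

    PHRASE = "fake news"

    def daily_mentions(path):
        """Count mentions of PHRASE per day in a CSV with 'date' and 'caption_text' columns."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row["date"]] += row["caption_text"].lower().count(PHRASE)
        return counts

    # "captions.csv" is a hypothetical export of dated closed-caption text; the
    # analysis above drew on the Internet Archive's Television News Archive.
    for day, total in sorted(daily_mentions("captions.csv").items()):
        print(day, total)

Plotting those daily counts is what makes the November 11, 2016 spike, and the later Pizzagate and press-conference spikes, jump out so clearly.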
37e1a4542bacfb60677240b6ba5ee626
https://www.forbes.com/sites/kalevleetaru/2017/12/29/facebooks-deletion-of-ramzan-kadyrov-and-who-controls-the-web/
Facebook's Deletion of Ramzan Kadyrov And Who Controls The Web?
Facebook's Deletion of Ramzan Kadyrov And Who Controls The Web? Facebook logo superimposed on a laptop at its London headquarters. (DANIEL LEAL-OLIVAS/AFP/Getty... [+] Images) Last week Facebook quietly deleted the accounts of Chechen Republic leader Ramzan Kadyrov across its platforms, in the process both cutting him off from his 4 million followers and instantly removing all of his posts he’d made over time, effectively eliminating any trace of him ever existing in its walled garden. What do Facebook’s actions tell us about the future of our rights in the online world? Facebook’s official justification for its sudden removal of Mr. Kadyrov’s accounts was that he had been added to the US Treasury’s Office of Foreign Assets Control sanctions list and that “Facebook has a legal obligation to disable these accounts” for any individuals on the sanctions list. Yet, as the Times pointed out, many other individuals on the sanctions list remain in good standing on Facebook, raising the question of why, if Facebook believes it would be in legal jeopardy for not deleting the accounts of individuals on the list, it does not remove them all. In Kadyrov’s case in particular, the sudden deletion of his accounts plays perfectly into the Russian narrative of a hypocrisy of US attitudes towards freedom of information, allowing him to respond “where is your praised democracy and the right of citizens to receive information?” Perhaps most importantly, as a head of state, the deletion of Kadyrov’s accounts marked not the deletion of an ordinary citizen’s account, but the deletion of the official correspondence of a foreign head of state, illustrating that as countries increasingly turn to platforms owned by US companies, they are handing over control of their sovereign voice to the US. A handful of private citizens in the US now have absolute control over what every government on earth can say to their citizens on the walled gardens that are increasingly “the web” and with a click of a mouse can delete the entire history of communications by a head of state, eliminating any trace they ever existed there. The ability to delete an individual’s entire online existence from a major social media platform offers a stark reminder that what was once a global cooperative of interconnected computers spread across the world has devolved into a small set of walled gardens controlled by just a few US companies governed by US law. Indeed, the consolidation of control over the global web into the hands of a few people raises the question of whether we should even continue to call the internet the “web” given that as these social networks continue to grow, we spend our time increasingly in their confines, not the interconnection of independent websites we once freely roamed. When China blocks content about Tiananmen Square, it can enforce that censorship only within its own borders – a website in the US is free to publish as much as it likes about the 1989 event. As Facebook becomes the Internet itself, a single company now controls what the entire world can say on its platform and US law regulates all online content that traverses its electronic borders. While few might mourn Mr. Kadyrov’s deleted accounts, their removal raises the question of just how much control the US Government now wields over the online existence of all that oppose it. Could the White House simply “delete” any individual or organization it disagrees with? 
In fact, I raised this very scenario last year, asking whether the US Government could simply make Julian Assange and Wikileaks vanish by ordering Facebook to delete their accounts. Similarly, could it silence human rights groups like Amnesty International in retaliation for their stances towards the actions of the US or its allies, or eliminate all online criticism of its Jerusalem move, simply by adding every offending individual or organization to some form of sanctions list? In effect, if Facebook believes it faces an absolute legal requirement to delete the accounts of every individual on certain US sanctions lists, could the US expand its sanctions programs in such a way that they could silence any person or organization on earth with a single stroke of the pen adding their name to a blacklist? The company did not respond to a request for comment, including why it had only banned some of the people on the sanctions list and not others, and whether, if the US Government were to find a way to add Wikileaks or other organizations and individuals that have spoken against US policy to sanctions lists, it would immediately delete them from its platform as well. Putting this all together, Facebook's sudden deletion of Ramzan Kadyrov's accounts offers a powerful reminder of just how centralized the internet has become and that as their walled gardens become the web itself, a handful of US citizens now wield absolute control over the digital voices of a quarter of the earth's population and they alone decide which of the world's heads of state can speak and what they can say.
51a0357662240cf68b3626b0e6461e95
https://www.forbes.com/sites/kalevleetaru/2018/01/12/is-twitter-really-censoring-free-speech/?sh=47af99d265f5
Is Twitter Really Censoring Free Speech?
Is Twitter Really Censoring Free Speech? The Twitter logo. (Leon Neal/Getty Images) The past month has seen a flurry of high profile announcements chronicling just how all-powerful social media companies have become in their control over what we see online. From Twitter’s nonchalant reminder of its ability to ban world leaders and their posts, to Facebook’s actual deletion of a head of state, Silicon Valley has been on the move to remind the world that it and it alone decides what we are permitted to see in its walled gardens that define our modern web. As we take stock of a new year, what does 2017 teach us about what to expect in the coming year? The power of social media companies to determine what crosses their digital borders has been in the headlines this week with two major stories: Facebook’s redesigned News Feed that deemphasizes commercial and news content in favor of friends and family and a set of undercover videos by Project Veritas that claim to show several Twitter employees openly discussing how the platform limits or deletes posts or entire user accounts. Project Veritas is known for selectively editing its videos and the manner in which the most recent videos were filmed and edited makes it difficult to fully assess their contents and the veracity of the claims they appear to make. The broader question, however, is why such films received the attention they did in 2018. The short answer is that they address a topic that the social media platforms themselves have been immensely reluctant to discuss publicly: how they make the myriad decisions each day of who and what to delete or restrict on their platforms. When asked for comment on several specific claims made in the videos, a Twitter spokesperson issued a statement saying “The individuals depicted in this video were speaking in a personal capacity and do not represent or speak for Twitter” and that “Twitter is committed to enforcing our rules without bias and empowering every voice on our platform, in accordance with the Twitter Rules.”  However, when asked about the specific claims made in the video regarding content moderation, Twitter would provide comment only regarding “shadowbanning,” saying “Twitter does not shadowban accounts. We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still to click through and see these Tweets if they so choose” and referred to its Help Center regarding “Limiting Tweet visibility." When asked whether the company unilaterally denied the allegations of “unwritten rules” and political bias in its content review teams that determine what content is considered a violation of its rules, the company responded to several other questions, but did not issue a denial or any other comment regarding the bias statements beyond denying the existence of “shadowbanning,” nor provide further comment regarding the question of bias or reviewer composition. Unfortunately, this appears to be the standard practice today of Silicon Valley companies when confronted with questions of how they decide what is permissible speech online: simply remain silent and wait for the story to pass, rather than take the opportunity to provide their users with more detail about how the online world they call home works. In particular, when confronted with questions of bias, whether relating to political affiliation in the US or government pressure internationally, companies have remained largely silent, refusing to provide any significant detail as to their moderation policies. 
Why is it that in 2018 the platforms we use to communicate with each other operate as opaque black boxes into which we have absolutely no insight or voice and simply accept that a handful of people in Silicon Valley will decide what a third of the earth’s population have the right to talk about? When asked whether Twitter would consider releasing its full set of guides, manuals, documentation, tutorials, training materials and all other materials given to its reviewers or a justification for why it believes this material cannot be released, the company did not respond. In the past companies have claimed that releasing such material would enable bad actors to know just what they can get away with, but such arguments hold little merit in that every day bad actors post material looking to see just how close to the line they can tread without consequence. Setting aside the specifics of its training manuals, it is also noteworthy that the companies have similarly steadfastly refused to provide aggregate demographics regarding their content review staff. Releasing basic top-level statistics as to gender, race, languages spoken, countries they hail from, self-identified political, social, religious and other affiliations and other demographics would go a long way towards refuting the bias claims that regularly surface regarding the companies’ review staff. Understanding the composition of the review staff used by major social media platforms would help shed light on the languages and cultures that might be underrepresented and the kinds of hidden biases that can lurk unnoticed. After all, as I showed in 2016, Facebook’s News Feed was indeed extraordinarily biased, but in a way that others weren’t talking about: geographically. Twitter also declined to respond when asked whether the company would be open to convening an external panel of academics and other experts from outside the company, providing them a large dataset of tweets and accounts it has limited, deleted or otherwise taken action on, and allowing them to produce a summary report for public distribution that would assess Twitter's accuracy and biases. In the end, even if the company felt that releasing any information regarding the aggregate demographics of its reviewers or any detail of its review process would harm its operational security, it is unclear why the company will not commit to allowing an independent external panel to assess its work. After all, if a blue ribbon panel of top scholars and data scientists from across the world were granted unrestricted access to its review materials and the actual records of what it has and has not taken action on to analyze them, it would go a long ways towards either confirming or finally refuting once and for all the myriad questions of bias that naturally occur when companies operate in strict secrecy. Even simple questions like the percent of Twitter's accounts that are bots and how much of its content is automatically generated are complete unknowns. External groups routinely provide their own assessments, but beyond vague statements, the company has to date declined to provide firm hard numbers on just how much of its content and viewership is made of carbon rather than silicon. When asked whether the company would permit external auditing of its numbers, a spokesperson said they had nothing to add beyond the company’s previous statements. 
Twitter’s opaque moderation policies also make it more difficult for the company to fight “fake news” and misinformation campaigns that leverage the anonymity of their platform. Parody accounts look and act very similar to the real accounts they satirize, but if they look too close to the real thing and aren’t clear enough about their satirical role, it can be difficult for users to tell the difference. For example, in the leadup to last year’s election, the Russian government is believed to have operated a troll account designed to look like the Tennessee State GOP. The Twitter account used the State Seal as its logo, “@TEN_GOP” as its handle and “Tennessee GOP” as its title, with its bio saying “I love God, I Love my Country.” Only later did it change its bio to “Unofficial Twitter of Tennessee Republications,” which could still easily leave unsuspecting users thinking it was operated in some fashion by or with the knowledge of the state GOP. Despite 11 months of the real Tennessee GOP formally complaining to Twitter about the impersonating account, the company took no action, removing the account only after the company faced substantial scrutiny for the role its platform played in Russian influence campaigns. When asked for comment a spokesperson referred to its impersonation policy, which requires that such accounts “clearly stat[e] it is not affiliated with or connected to any similarly-named individuals.” In this case, the @TEN_GOP account did not include such language and in fact could easily have been confused for the real account, but Twitter did not respond to a further request for comment, including the use of the Tennessee State Seal as the account’s avatar. This of course played out again last year with the rise of the “rogue” US Government agency accounts, from NASA to the National Park Service. As the accounts began attacking each other and even launching fundraisers, there was little for the average person to be able to know just which accounts, if any, were actually run by current or former US Government employees from the agencies they claimed to support. Moreover, this lack of transparency can have very real consequences. When the official Twitter account of the President of the United States was briefly deactivated last fall, Twitter released precious little detail about how a single contractor could allegedly shut down the Twitter account of a head of state. In response, a Twitter spokesperson offered by email only “We won't have a comment on a former employee. We have taken a number of steps to keep an incident like this from happening again. In order to protect our internal security measures we don't have further details to share at this time.” Yet, once again we are left in the dark as to how much has really changed. It is a remarkable turn of events that the company that once congratulated itself as “the free speech wing of the free speech party and famously informed Congress it would not stop alleged terrorists from leveraging its services has evolved to slowly and steadily distance itself from its free speech ethos. With each update of its terms of service, the company has moved a bit further towards prioritizing commercial reality over the anything-goes mentality upon which it was founded. However, the ideal of unfettered free speech still factors prominently into the company’s public ethos. 
In an interview last year, the company’s Head of Global Public Policy Communications offered “I am passionate about protecting and empowering all people to freely express themselves, even when that can lead to challenging conversations, because ultimately it is only through connecting with others that we can learn, grow, and evolve. Twitter has revolutionized the way people communicate and learn from one another, and I am proud to represent a company that both has such a strong commitment to free speech and approaches this issue with such care and thoughtfulness.” Putting this all together, is Twitter actively censoring certain political views or are Project Veritas’ videos the result of selective editing and employee bravado? We will simply never know. Yet, the fact that we will never know is the problem. Whether it is claims of conservatives being censored on Twitter, allegations of government influence on Facebook, or very real geographic biases, the platforms we communicate through are no longer neutral. Here in the United States, your cellular provider doesn’t actively monitor your phone calls and mute out the topics they don’t like. The USPS doesn’t inspect every letter for adverse political views. Your ISP doesn’t block access to sensitive topics like Tiananmen Square. In other countries such censorship is a routine and accepted part of daily life, but here in the US we have grown accustomed to neutral communications mediums that connect us to others without actively moderating what we are permitted to say. In their place, the online world that was supposed to bring us together and tear down the last bastions of censorship has instead created the greatest censorship and surveillance infrastructure the world could ever imagine.
6d8e85404b0b9da330a618b2a47440f4
https://www.forbes.com/sites/kalevleetaru/2018/03/21/from-mastermind-to-misuse-in-four-years-who-owns-our-data/
From MasterMind To Misuse In Four Years: Who Owns Our Data?
From MasterMind To Misuse In Four Years: Who Owns Our Data? Facebook CEO Mark Zuckerberg. (AP Photo/Marcio Jose Sanchez, File) One of the most intriguing elements of the Cambridge Analytica story is just how much public sentiment has turned against social media and especially social media analytics in just four years and just who "owns" our digital personas. From breathless headlines just a few years ago of “masterminds,” “dream team” and “groundbreaking,” social media data miners are now described using adjectives like “exploited,” “misuse” and “surveillance.” Does this represent the beginning of something bigger, perhaps the end of social media analytics? Unlikely. In all of the media coverage, policymaker statements and public conversation of the Cambridge Analytica situation of the past few days, there seem to be four major allegations of concern emerging: 1) millions of users had their information harvested who did not consent to it being collected, 2) an academic transferred data collected for academic use over to a commercial enterprise in violation of Facebook’s rules, 3) the company did not delete the data when asked to do so by Facebook and 4) that profiles compiled from that data were so powerful that they changed the outcome of the election of the most powerful person on earth. As I detailed on Monday, it was actually the Obama campaign that pioneered the mass harvesting of data from unwitting users on Facebook at a national scale. As the Post summarizes, the campaign “built a database of every American voter using the same Facebook developer tool used by Cambridge, known as the social graph API. Any time people used Facebook’s log-in button to sign on to the campaign’s website, the Obama data scientists were able to access their profile as well as their friends’ information. That allowed them to chart the closeness of people’s relationships and make estimates about which people would be most likely to influence other people in their network to vote.” Or, as one of the campaign’s data leads put it, “We ingested the entire U.S. social graph … We would ask permission to basically scrape your profile, and also scrape your friends, basically anything that was available to scrape. We scraped it all.” This was coupled with a wealth of offline data, including what individuals were watching in the privacy of their own living rooms. Thus, when it comes to the concern that millions or tens of millions of users had their data unwittingly and potentially unwillingly harvested in bulk simply because someone they were connected to on Facebook granted permission, there is nothing here that is any different than 2012 or indeed different from any of the countless companies that relied on this feature to promote their apps. 
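The scale of that friend-permission fan-out is easy to see with back-of-the-envelope arithmetic. All of the numbers in the sketch below are assumptions chosen for illustration, not figures reported by Facebook or either campaign.

    # Back-of-the-envelope fan-out from app installs to harvested friend profiles.
    # Every number below is an assumption chosen for illustration, not a reported figure.
    consenting_users = 270_000   # people assumed to have installed a quiz-style app
    avg_friends = 340            # assumed average friend count per consenting user
    duplicate_share = 0.55       # assumed fraction of friend profiles counted more than once

    reachable = consenting_users * avg_friends * (1 - duplicate_share)
    print(f"~{reachable / 1e6:.0f} million profiles reachable without their own consent")

The point of the arithmetic is simply that a few hundred thousand consenting users could plausibly expose tens of millions of people who never agreed to anything.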
In fact, when Facebook rolled out platform changes in 2014 that removed the ability of apps to harvest friend data, media reports of the time noted the especial impact it would have on campaigns, with Yahoo originally titling one article “Facebook slams the door on political campaigns.” In its announcement of the changes, Facebook itself acknowledged user concerns over how their data was being shared without their knowledge, conceding that “we've heard from people that they're often surprised when a friend shares their information with an app.” In the case of Cambridge Analytica, Facebook claims that the company acquired Facebook data from an academic who had collected it for academic research, while the Obama campaign has been careful to emphasize that the campaign itself collected the data, rather than receiving it from a third party. There is a lot here to unpack and most of it hinges on the specific license text that users agreed to when handing over their data, one’s definition of “academic research” and just what kind of data was handed over (if any) to Cambridge Analytica. Herein lies one of the most frightening aspects of the whole story: Facebook’s suspension of Cambridge Analytica is, according to Facebook, not because of the existence of the dataset of 50 million users or what was done with it, but rather that under its terms of service, the company was required to have collected the data itself, rather than buy it. In short, if you collect it yourself, its fair game – you just can’t pay someone else to acquire the data for you. If Cambridge Analytica had run the original personality app itself, there would have been absolutely nothing Facebook could have done to halt its use and there would have been no story here at all. It is truly astonishing that data on 50 million people could be compiled and the only thing Facebook says is wrong is that they acquired the data from someone else and then didn’t delete it when asked, rather than collecting it themselves. The Obama campaign’s use of a very similar workflow to harvest data on millions of unwitting friends was, according to media reports of the time, both known by Facebook and fully approved by it. What does this mean for the future of analytics? The vast “social media analytics” industry exists solely to mine all of this rich social data to help companies and governments make decisions. Most large social analytics firms offer some kind of Facebook analysis, typically through Facebook’s own anonymized data offering or geotagged user-level feeds like that formerly used by Geofeedia. Facebook itself computes a wealth of indicators about its users and purchases reams of commercially available information to complete its picture of each user. Countless companies offer similar psychological profiling tools of social media users, typically based on Twitter data, but academics have focused extensively on estimating extraordinarily sensitive (and in some countries extremely dangerous) attributes about unwilling and unwitting users on Facebook, while the company itself experiments on its users like lab rats. Moreover, by creative use of ad targeting, very high resolution maps can be constructed of highly sensitive demographics without ever needing to download anything from Facebook or other platforms. By Facebook’s reckoning, the only additional thing Cambridge Analytica did wrong was not deleting the data it had acquired when it promised Facebook it would (the company strongly denies this). 
This raises the crucial unspoken question that no one seems to be talking about: what about the myriad other repositories of Facebook data that have been collected by academics, governments and companies over the years, many of which are openly advertised for download? When I asked Facebook for comment on whether it would be reaching out to demand that other large archives of Facebook data delete their holdings (including the remnants of the Obama campaign) or restrict access to them, the company did not respond. (Shortly afterward, however, Zuckerberg did indicate that the company would be examining "suspicious" apps that had used its bulk harvesting capabilities, though he made no mention of bulk scraping or non-API harvesting.) Academia in particular is filled to the brim with data collected from Facebook - some collected through formal informed consent, but much of it scraped at will without any notification to the users whose information has been archived. To put it another way, an incomplete and unknowably large cross-sectional archive of Facebook exists scattered across the file servers, databases, cloud accounts and personal laptops of university researchers all across the world. Zuckerberg makes no mention of all this data in his statement. Indeed, just days before the Cambridge Analytica announcement, researchers on one prominent academic mailing list were lamenting Facebook's improved privacy settings that limited the amount of data they could harvest at once and discussing workarounds. While Twitter data still reigns supreme as the dominant social dataset for data mining due to its machine-friendly API, Facebook data is widely used, especially for more sensitive kinds of demographic and psychological profiling due to its more intimate place in our lives. When users communicate with friends and family on Facebook, they see it as a private living room in which they can have intimate and personal conversations amongst loved ones. The reality is that that living room has a one-way mirror, with millions of data scientists across the world observing all of that behavior and using it to model, target and even manipulate us based on activities we never truly understood were being observed. In short, when you stare down at your phone using Facebook, behind your screen are millions of data scientists staring back up at you, clipboards in hand, recording everything you do, building rich psychological profiles of you that you have no right to control or even view and using all of that data to exert invisible control over your life in an infinite number of unseeable ways. Yet, all of this focus on Facebook is misplaced. Our personal, private and sometimes highly sensitive daily life activity is already bought and sold in bulk by a nearly uncountable number of companies, and Facebook itself partners with some of the largest to integrate their wealth of holdings on the parts of our offline lives that occur beyond Facebook's reach. Facebook is just one tiny piece of this massive ecosystem of data that commercializes us into products to be bought and sold. After all, you can buy a database of 4.7 million "Suffering Seniors" over the age of 55 with medical ailments for pennies per name, or a list of 10 million diabetes sufferers, complete with a free price quote - no need for Facebook. Why is it then that we are so concerned about Facebook data being used for voter targeting when campaigns and companies have long bought a myriad of other highly intimate data points about us?
This is perhaps the most interesting question and likely revolves around just how little consumers understand about all of the data that is bought and sold about them every day. For all the talk of “#deletefacebook,” withdrawing entirely from the social media world will do nothing to stem the flow of data that is commercialized about each of you each day. Moreover, even those who genuinely want to give up social media entirely will find it difficult at best. Government agencies, local businesses, schools, clubs, public transportation and public safety agencies and nearly every other organization you interact with today likely will at some point tell you to visit their Facebook page for some piece of information. Going dark from social media today is like saying you are going to give up using the web: it’s a feel-good thought to toss around, but the way our modern world is designed, it is becoming more and more difficult to fully free yourself from the tentacles of social media and even those that do are still tracked around the web via embedded tools from the services, while your data is being compiled by data brokers and sold to companies like Facebook even if you never touch a computer or smartphone. Assuming that Facebook data did indeed play some role in Cambridge Analytica’s work for the 2016 campaign, it is important to remember that Ted Cruz’ campaign used the same data and it didn’t end well for him. While I personally have never seen any data or products from Cambridge Analytica, I have been subjected to myriad product pitches from an uncountable number of firms offering the same kind of psychological profiling, influence and targeting products and the results I’ve seen have been nothing short of worthless in terms of indicators and accuracy. Indeed, it is important to remember that Cambridge Analytica is just one small firm in a vast world of companies that offer highly sophisticated user profiling and influence services to companies and governments every day. If we are truly concerned about privacy and ownership of our digital selves, we would do well to widen our conversation far beyond one single company. Of course, Friday was not the first time Cambridge Analytica and its alleged use of Facebook data has been mentioned – those headlines came and went with little fanfare during campaign season. Just four years ago the use of Facebook data for voter modeling and manipulation was heralded as a heroic modernization of the campaign world. Why then are we suddenly talking about all of this in the same language used to describe cybersecurity incidents? Perhaps, as Wired put it, “the data leakage hadn’t helped Unilever sell mayonnaise. It appeared to have helped Donald Trump sell a political vision of division and antipathy.” Could it be that the public is suddenly worried about Facebook’s influence because they see it as the only possible explanation for an election they cannot otherwise understand? Given that our obsession with “fake news” and “filter bubbles” sprung into being as direct outcomes of how the press and public digested the election outcome, that is certainly a possibility and would suggest that the current maelstrom of negative coverage will soon pass with little long term impact. Indeed, as Bloomberg notes, Facebook has weathered countless privacy furors over its tenure and each time the public eventually accepted the new order of things. After all, there is little to this story that Facebook can offer concrete solutions to. 
The company itself has acknowledged that the initial harvesting of unwitting users’ data from the platform was entirely permissible at the time and that it was well aware of and approved the activity, both for the dataset under discussion and the work of the Obama campaign. Facebook has emphasized that this bulk export capability was disabled years ago and so can never happen again; its defense, in effect, has been not to worry, since this was a one-time issue that can’t be repeated. The ability of a single researcher to box up data on 50 million users and hand it over to a commercial company that allegedly failed to delete it is a harder problem for Facebook to solve. Even at the time the activity allegedly occurred, Facebook already required developers to sign legal agreements prohibiting such actions. The sheer number of developers that build apps with access to Facebook user data is so vast that the company could not possibly audit them all (the company promises to audit “suspicious” apps without giving further detail as to what warrants that label). Moreover, despite the original harvesting mechanism being disabled, researchers, academic and corporate, all across the world are harvesting material from Facebook at this very moment using any number of different tools, making it all but impossible for Facebook to stem the flow of information from its platforms.

As I’ve recounted again and again and again, academia is a particular vulnerability for Facebook in that university researchers tend not to understand or adhere to licensing and copyright law when it comes to accessing and sharing data, while the institutions that employ them have largely codified policies that exempt them from the precise kinds of legal agreements that govern the use of Facebook’s data. The increasingly blurred boundaries between academic and commercial research at US institutions mean that in the future a Cambridge Analytica could simply hire a university to conduct its work using data it had collected for academic research and be entirely within Facebook’s terms of use (since no data is being “transferred”). Moreover, as the academic world is increasingly vocal about demanding access to Facebook data and the company begins to work more closely with academia, it is likely that the number of repositories floating around will only increase, rather than decrease.

Putting this all together, the narrative that has emerged is that of an academic researcher who legitimately gathered data on 50 million people with the full knowledge and consent of Facebook, but who the company claims went rogue and handed the data to a commercial firm that didn’t delete it on demand (though Cambridge Analytica denies this). Indeed, Zuckerberg’s first statement on the matter largely passes the blame to Cambridge Analytica, though it at least acknowledges just how many other vast archives of Facebook data may be out there. What’s notably missing, however, is what precisely will count as “suspicious” activity that the company will audit and what precisely it will do about the unknown ocean of data collected and distributed by the academic world – the very practice that is at the heart of the Cambridge Analytica story. At the end of the day, the public and press focus has cast this as an isolated incident by a few bad actors (though Zuckerberg’s statement acknowledges that there could be many more groups), rather than an indictment of the broader field of social media analytics.
Moreover, even the coverage that has acknowledged the broader commercialization of our data has largely focused on Silicon Valley, rather than the vastly larger ecosystem of companies that buy and sell our most intimate data every day, assembling profiles that even Facebook itself purchases to track us offline and ensuring that even those who “#deletefacebook” will still be tracked without mercy. Congratulations, Mark Zuckerberg: George Orwell would be proud.
65487b61e3f290188c5d7eaa90c0dd50
https://www.forbes.com/sites/kalevleetaru/2018/04/02/how-data-brokers-and-pharmacies-commercialize-our-medical-data/
How Data Brokers And Pharmacies Commercialize Our Medical Data
How Data Brokers And Pharmacies Commercialize Our Medical Data Shutterstock Lost in all of the Facebook / Cambridge Analytica hyperbole of the past few weeks is any kind of broader privacy conversation about the vast world of data brokers that buy and sell our most intimate information. Even if you’ve never set foot in Facebook’s walled garden or know anyone who has, there are myriad companies out there that are buying and selling your information every day that you likely had no idea existed. For example, did you know that if you fill a drug prescription at Walgreens, the company has the right to commercialize your medical information by charging clinical research companies and pharmaceutical manufacturers to send mailers to you on their behalf based on your medical condition? Or that there is an entire industry of companies that sell mailing lists of people with every imaginable disease, all available for pennies per name, monetizing your misery?

When you fill a doctor’s prescription at your local pharmacy in the US, you likely believe that medical privacy laws prohibit it from commercializing your medical misfortune. Instead, if you used a Walgreens pharmacy and there is a medical research or pharmaceutical company interested in your condition, you might start receiving mailers asking you to sign up for their clinical trials and it is all completely legal. A Walgreens spokesperson confirmed that the company uses a provision of HIPAA to “work with third-party clinical research companies as well as with pharmaceutical manufacturers on clinical trial studies” to send mailers on their behalf to its customers who have filled prescriptions at its pharmacies. The company emphasized that privacy and protection of customer medical information was paramount and that the third party companies themselves never receive any patient information; they simply pay a fee to Walgreens and ask it to send mailers on their behalf to all of its customers who have filled prescriptions for certain drugs at its stores. Even though the companies don’t receive your name and address from Walgreens, they are able to use Walgreens as essentially a targeted medical advertising platform for clinical trials.

When asked whether the company clearly states to each customer that their prescription information can be used by it to send them mailings targeting their disease on behalf of other companies and whether it collects informed consent for the mailings, the company responded that “recruitment letters for certain, approved clinical research studies are allowed under HIPAA provisions. These communications typically don’t require patient authorization” and pointed to HHS’s HIPAA guidance and its own FAQ. When asked why Walgreens does not explicitly inform customers at purchase time that their prescription may be used to target them for medical trials and offer them the ability to opt out of having their private medical information used in such a manner, the company did not respond, instead offering that once a customer receives a targeted mailing, the letter will include a toll-free number they can use to opt out of future mailings. The company also did not respond to requests for comment regarding the number of mailers it has sent out, the number of recipients or the total payment it has received for this service.
It also did not respond to a request for comment on whether, in light of the Facebook / Cambridge Analytica story and the broader societal reckoning it has spawned regarding privacy, it would reconsider the program or switch to an opt-in program that requires customers to explicitly decide to participate, rather than silently enrolling every customer.

Until last week, Facebook actually purchased data from a number of third party data brokers, including those that offer data points for targeting advertisements to specific medical conditions. Facebook’s official policies have historically prohibited advertisers from targeting specific medical or mental health conditions. When asked how it reconciled its official guidelines with leaked reports of secret internal research that would permit advertisers to target young children suffering from depression, suicidal thoughts and other mental health conditions, the company declined to comment. Perhaps most tellingly, when asked last year whether Facebook would commit as a matter of corporate principle that it would never permit advertisers to begin targeting its two billion users based on their medical and mental health histories, the company declined to do so. I reached out to one vendor that uses data from some of the same brokers as Facebook and offers a list of 4.5 million names of “donors to charitable causes that have self-reported ailment concerns, purchased related health products and/or subscribed to related publications” and asked how it ensures the accuracy of its products and what specific restrictions it places on those that purchase its list, for example whether it explicitly prohibits companies from purchasing its lists to screen their employees or prospective hires to identify those who may have particular ailments. The company refused to provide an on-the-record response to any of the questions I posed, reflecting just how hard it can be to learn anything about the secretive industry.

The central thesis of the Cambridge Analytica story was that our Facebook data so strongly reflects our psychological characteristics that it can be used to develop models that can sway us towards or against a certain viewpoint. It is at best unclear to what degree Facebook data is actually that powerful. At the same time, far more sensitive and intrusive medical information is not necessarily as well reflected on Facebook. If you wake up one morning with a rash on your buttocks, your first reaction is probably not going to be to post a photograph of it on Facebook and describe its appearance and symptoms in nauseating detail to all your friends and followers. Facebook tends to be the place we present a highly curated utopian portrait of our perfect selves. On the other hand, you’re likely to research that rash, fill a prescription for it, subscribe to mailing lists and magazines if it is a symptom of a chronic disease and take other actions that the data brokerage industry will capture and commercialize without you ever being aware. Putting this all together, Facebook knows the perfect airbrushed you that exists only in fantasy – it is the data brokers and data repurposers like Walgreens that know the real you behind closed doors. Yet, they have been largely absent from the societal conversation of the last few weeks about the privacy of our data and our rights (or lack thereof) to control what companies are allowed to access and do with our information.
In a world in which our most intimate medical ailments are merely commodity data points to be bought and sold about us, making money for everyone but us, is the Facebook story really the most important one or should we be talking about just how many companies today "own" our digitized selves?
bfd0ede85eb2487f4219917e39192291
https://www.forbes.com/sites/kalevleetaru/2018/04/12/is-facebooks-new-academic-initiative-even-more-frightening-than-its-own-research/
Is Facebook's New Academic Initiative Even More Frightening Than Its Own Research?
Is Facebook's New Academic Initiative Even More Frightening Than Its Own Research? An art exhibition at the Big Bang Data exhibition at Somerset House in 2015 focusing on privacy and... [+] data. (Peter Macdiarmid/Getty Images for Somerset House) Despite enjoying absolute rule over the informational lives of its two billion citizens, controlling what they see, what they say and what they understand of the world around them, Facebook has remained steadfastly a black box over its 15 years of existence, refusing to provide true substantive detail over nearly any of its operations and offering transparency in word only. On nearly every question, from its role in the distribution of “fake news” to its influence over national election outcomes, there is precious little external information to evaluate the company’s claims - the world must simply blindly trust the company’s official statements. Thus, it was with great excitement that Facebook announced this past Monday a new academic research initiative that will grant external independent academic researchers access to Facebook’s data to study its role in elections. However, while on the surface the new initiative offers tremendous potential, the reality of its implementation is frightening beyond anything Facebook’s own researchers have devised to date and has profoundly dangerous implications for the future of privacy and democracy globally. When I first read Facebook’s announcement and the corresponding academic whitepaper on Monday, I was struck by just how little actual detail was available on the specifics of the program. While corporate press releases do tend to focus on hype rather than detail, academic whitepapers outlining new data programs are typically lengthy treatises that cover all the most common questions and concerns and offer sufficient detail to fully evaluate the program’s merits and ethical considerations and concerns. Yet, notably in a moment of societal reckoning around privacy and control over our information and for a company that finds itself in Congressional crosshairs precisely over questions about its undue influence over elections, it was quite simply shocking to see how little concrete detail regarding these issues was available. Indeed, the whitepaper itself read more like a corporate press release than an actual academic overview. To learn more about the specifics of the program, I first reached out to Facebook for comment on the program. Specifically, I asked the company whether academic researchers would be granted access to private user communications, including intimate private photographs, videos, Messenger chats and posts that have strict permissions limiting their access to friends only. Given the initiative’s emphasis on replication datasets, I also asked whether Facebook would now be permanently archiving content that users delete, eliminating the ability of users to truly delete content from the platform. Finally, I noted that the language of the new initiative suggested that it would be conceivable for American academics to actively manipulate Facebook’s interface and algorithms during a national election in a foreign country in ways that could influence its outcome and asked what protections Facebook planned to enforce to mitigate this scenario. Predictably, the company did not respond to multiple requests for comment, which seems to be its standard practice when it comes to any questions regarding its stance on user privacy protections and research. 
Yet, at the same time, the company’s reluctance to comment on the new initiative, even as it responded to requests for comment regarding other initiatives, raises concern regarding its transparency around an initiative that is itself designed to promote transparency. Most concerningly, given today's extreme sensitivity to user privacy, user control over their data and foreign manipulation of elections, it would seem Facebook should have had ready answers to these questions to reassure its two billion users of the safety of this new program. Responding with silence to the question of whether academic researchers will be able to access two billion people’s private communications is not the best start towards ensuring transparency and trust of the new program. The new initiative will be stewarded by the Social Science Research Council (SSRC), yet the Council’s announcements relating to the initiative similarly lacked substantive detail. Only one FAQ question offers any detail on data protection, stating that with respect to “what kind of data will be shared with researchers” the answer is “only anonymized Facebook data will be shared, although the specific types of data are to be determined.” The lack of any explicit mention that research will be limited only to public posts would seem to reinforce that researchers may be granted access to users’ private communications that they would never have dreamed would be handed over to academic researchers all over the world for access. To gain a more definitive understanding of Facebook’s program, I reached out to SSRC President Alondra Nelson for comment. Her overarching response to all of my questions? That “the short answer to most of them is TBD. The next step in the process is to convene the independent steering committee of scholars that will explore and decide how to move forward on the kinds of questions and issues you raise.” Throughout Zuckerberg’s Congressional testimony this week one of the most common undercurrents was that of a company whose philosophy to privacy and trust was to rush wildly ahead and deal with issues only when they caused a public outcry. Indeed, it seems that attitude has been mirrored in this new academic research initiative. Yet, the fact that some of the most critical overarching issues have been left open for debate and evolution over time, rather than closed from the start as a matter of ethics and policy poses grave concern regarding just how seriously the initiative takes user privacy and especially its approach to balancing the rights of researchers ahead of those of Facebook’s two billion users. The “replication crisis” has become a rallying call in the academic world for improved documentation and data access standards to ensure that data-driven research can be rigorously and robustly replicated by other scholars. Thus, it is no surprise to see replication considerations mentioned prominently in the new initiative’s whitepaper. Yet, this raises the question of just how replication is achieved in practice, rather than in theory, for a platform in which users can readily delete content? Imagine that a year from now an academic study is published in a prominent journal to much fanfare and press coverage, that was based exclusively on mining user’s private Messenger communications to automatically identify individuals in repressive regimes that are likely to take to the streets to protest election fraud. 
These highly vulnerable activists and their contacts might rightfully realize that, armed with this new insight, their government might take legal action to access those same private communications and apply the same algorithm to identify potential activists and arrest, torture and execute them before they can pose a problem. In reaction, activists across that country might mass delete all of their Messenger communications relating to the election, systematically eliminating at scale much of the data underlying the original study.

In terms of study replication, this means Facebook has only two options. The first is to accept that replication datasets will be incomplete and that high profile and controversial studies, the very ones for which replication access is the most important, will have the greatest potential of being rendered unreplicable by systematic content deletion. The second option is for Facebook to permanently archive all deleted content in the name of replication, meaning that even if you delete a private post, if a researcher used your post in their study, Facebook would “delete” the post from user access, but retain a copy in its academic replication archives, potentially requiring it to change its terms of service and placing it in potential violation of data protection laws. Even without systematic deletion, the rise of the EU’s GDPR regulations and other privacy efforts throughout the world, coupled with increasingly intrusive government surveillance of the online sphere, could lead citizens in certain countries to be more aggressive about deleting content. In short, given that Facebook is not an append-only platform, any efforts to enable replication will necessarily grapple with how to handle deleted content. When asked how SSRC intended to address the deletion question with respect to replication, President Nelson answered that it was “TBD, the next step in the process is to convene the independent steering committee of scholars that will investigate and decide on those kinds of questions and issues.”

Setting aside replication, will researchers be restricted to accessing only public posts? Or will they be granted access to study all of our private content, including photographs, videos, Messenger chats, posts, etc.? SSRC’s response was the same as above, that it was TBD and that its steering committee would decide whether private content could be researched. When asked whether access to private user content would be restricted to so-called “non-consumptive” analysis in which only computer algorithms are permitted to process private content, with technical safeguards ensuring that at no time can any human see the restricted content, or whether researchers will be permitted to freely access private user content themselves, the answer again was TBD. It is bad enough that Facebook’s own employees conduct research on our private posts and communications, but the idea that academics across the world could now be granted access to our most private and intimate communications on Facebook, including reading our private correspondence, browsing photographs of our children, reading our friends-only posts, even in anonymized form, stretches belief. Moreover, as a commercial company, Facebook is subject to certain restrictions and is constrained in its actions by the knowledge it can be sued or subject to sanctions for particularly egregious misuse of user data.
Many universities, at least in the US, enjoy so-called “sovereign immunity” protections and other special legal treatment that, as a number of institutions have emphasized to me in my previous coverage of the state of data-driven research ethics, afford them far greater latitude in the kinds of ethically questionable research they can perform and immunity for many kinds of activities that Facebook’s counsel would likely prohibit due to legal jeopardy. In short, while it might on the surface seem OK to trust university researchers across the world with your private data, Facebook is actually subject to far greater restrictions on just how badly it can misuse your data compared with academia.

Indeed, it's worth pointing out that the entire Cambridge Analytica story that prompted our current privacy reckoning was born of an academic research project and the notion that our data can be used for research for which we did not offer informed consent. After all, users may have legally consented to their data and their friends’ data being harvested, but the crux of our current societal conversation is that legal agreement does not equate to informed ethical consent. If your private Facebook data can now be used by academics all over the world for their own research without you ever knowing your data has been used or having any opportunity to opt out of that research, what hope do we have left for privacy? Moreover, by institutionalizing consent-free and notification-free use of users’ private data, we normalize the current cavalier attitude towards private data ownership, control and consent. While legally the new initiative is certain to find loopholes and arcane legal interpretations of the EU’s GDPR and other evolving privacy standards to sidestep the laws’ privacy protections, as the spirit of these laws is repeatedly violated, it does raise the question of whether future regulation might explicitly codify consent protections that would constrain the current initiative.

Reading through the new initiative it is remarkable to see its zero-notification, zero-consent, zero-control approach to private data in comparison to the absolute control, absolute consent, absolute notification model of Estonia’s genomic initiative, in which users must explicitly authorize any given research use of their data and can see if the authorized project eventually did use their data. Indeed, the word “consent” makes not even a single lonely appearance in the project’s whitepaper, while "permission" appears just once - with respect to Facebook's rights. When I’ve asked Facebook in the past whether it would consider allowing its two billion users to explicitly opt out of having research performed on their personal data, the company has not responded. Each time I’ve asked it to state for the record that it embraces certain vision statements regarding user privacy (for example that it will not change its policies to permit the use of private medical information to sell ads in the future), the company has declined to do so. It is ironic indeed that the same academic community that criticizes Facebook for its opt-out privacy standards and informed consent-free research practices has developed its own research initiative that quite literally copies Facebook’s approach and institutionalizes it under the imprimatur of SSRC, top academics and funding agencies. Of course, SSRC is careful to emphasize that all access to user data will be in “anonymized” form. But, what precisely does “anonymized” mean?
Merely with the username redacted? Reidentification is increasingly difficult to entirely prevent in today’s mosaicked world and many forms of anonymization can be readily defeated. Given the incredibly intimate and revealing portraits of ourselves we reveal in our private Facebook communications, what exactly would reasonable best-effort anonymization look like? Facial blurring in photos and videos and voice distortion in audio channels? Redaction and rewriting of user posts to eliminate PII? After all, a long and detailed private message could easily provide sufficient unique detail to immediately recognize the users involved even with all other information removed, while a photograph could contain sufficient background information to uniquely identify the place and time it was taken and the identities of the individuals in it, even with their faces blurred. When asked about this, SSRC said it would provide comment, but had not done so by publication time. Passive examination of user data, even private user data, is at least limited to post-mortem analysis of historical events. Studying an election after the fact through the eyes of citizens at the time can yield fascinating insights into contemporary communicative patterns, but ultimately is limited by the inability to ascertain the actual impact of specific behaviors or information. Towards this end, the SSRC initiative does not appear to explicitly limit itself strictly to such passive historical analysis. Indeed, it offers that it “will be entirely prospective … on upcoming elections – including in Mexico, Brazil, India and the United States” and does not seem to exclude active mass modification and manipulation of Facebook during a national election to tease apart how specific changes affect voting and electoral behavior. The notion of active manipulation of Facebook’s algorithms and interface with direct and real impact on voting behavior is not an idle concept: it was actually done eight years ago. A Facebook collaboration with academic researchers claimed to have directly led to an additional 340,000 people going to the polls in the 2010 US congressional elections. With no explicit ban from the beginning on active manipulation, coupled with the immense interest in active live societal-scale experimentation among the social sciences, it is inconceivable that the new initiative will not be inundated with active manipulation proposals. From algorithmic tweaks to active suppression or overexposure to selected news or views to interface changes, I’ve personally heard myriad proposals from academic researchers as to experiments they would love to run on Facebook relating to elections and social movements. Even subtle changes, like selectively slowing certain users’ connections to make the site run slower for them and discourage their contribution or consumption of content during certain periods or targeting certain demographics or political leanings can have huge impacts, yet be difficult for users to detect and are methodologies proposed by academics and repressive regimes and things we are likely to see proposed here. The notion of foreign researchers actively interfering in another nation’s election outcome by manipulating the Facebook environment of a nation’s voters in systematic ways with the intent of testing whether specific changes could change the election results is frightening beyond belief. 
Imagine an academic study conducted by Kremlin-connected Russian scholars during the 2020 US election campaign with the stated intent of assessing whether certain algorithmic changes to Facebook could increase the likelihood that voters reelect Donald Trump – one could imagine the public and policymaker outcry. Moreover, consider that the very allegation of election manipulation using Facebook data by Cambridge Analytica (though the company has steadfastly refuted all allegations) has led to public outcry and multiple governmental and legal inquiries. Now imagine myriad academic projects all over the world seeking to perform similar active manipulation on every upcoming election globally. No election would escape scrutiny, potentially with dozens of teams conducting experiments in parallel attempting to influence every conceivable element. Without the legal jeopardy and scrutiny of being a US or UK commercial entity, and protected by the sovereign immunity and taxpayer-provided legal representation of many universities, researchers are unlikely to face any limit on the kinds of electoral interventions they propose. Could we ever have free elections again? When asked about these questions and specifically whether it would be prohibiting active manipulation proposals, SSRC again noted that they were all TBD, with the steering committee deciding what kinds of active interference in elections will be permitted, meaning those rules are likely to evolve over time to allow greater and greater flexibility in the kinds of manipulation permitted.

Any research initiative that involves academics in one country, especially the United States, studying and potentially interfering with the political and social affairs of another is bound to evoke discussion of Project Camelot. While the following half century of academic-military social science funding and collaborations has largely normalized the concept of direct military funding of academic research into the social systems of foreign countries of direct immediate tactical interest to the US, few such projects have ever contemplated the data access and manipulation capabilities envisioned under Facebook’s new initiative. While initially all projects under the new initiative will be directly funded by a pool of independent funding agencies, this leaves open the question of governmental influence in shaping projects of particular interest to the US Government and prioritizing those for submission to the initiative. Given that many academics likely to submit to this program have either received funding from the US Department of Defense directly (including DARPA/IARPA/ARL/NRL grants) or indirectly (through the myriad contractor agencies used to facilitate DOD funding to DOD-adverse areas of the social sciences) or actively collaborate with those who do, there is an extremely high likelihood of US Government priorities finding their way into the proposals submitted to Facebook’s initiative.

For example, take a US university research lab that has received DOD funding to build computer models to forecast elections in Latin America, which already makes use of public Facebook posts in its forecasting model and has identified that private Facebook posts would greatly increase the accuracy of its model. It is not hard to imagine that lab submitting a proposal to Facebook’s new initiative to boost the accuracy of its models by using the wealth of private Facebook posts not currently accessible to them.
While the project would be entirely independent of DOD and funded by one of the initiative’s funding agencies with no contact or discussion with DOD, the actual goal of the research would have been seeded by DOD's specific interests in the previous work, and the specific experimental design, inputs and output indicators would be aligned with the lab’s current research focus, which has itself been shaped by DOD's interests. In short, academics tend to build on their successes and frequently submit new grants that build on specific elements of previous work, meaning a single large funded project can shape a lab’s research focus for quite some time – that’s just the reality of academic funding. Given the reality of the ongoing dialog between DOD funding program officers and the academic labs they fund projects in, it is difficult to imagine that there won’t be at least some level of overt discussion of projects DOD would like to see submitted to the initiative, potentially even with the suggestion of future DOD funding in the case of successful outcomes. Looking globally, it is nearly a certainty that every country’s intelligence services will be discussing projects of interest with academics in their country to submit to the new initiative.

Without an explicit ban or extraordinarily forward-looking set of policies, it is nearly impossible for the new initiative to not include proposals aligned with US or other government interests in the electoral affairs of other countries. Without a ban on active manipulation techniques, governments could easily exploit Facebook’s initiative to essentially live pilot test new electoral manipulation techniques on the actual production Facebook platform, all laundered through the veneer of academic research. As with nearly every other question, SSRC’s response was that all such concerns and any policies relating to them were TBD. It was especially notable that SSRC had not considered, in sufficient detail to have a prepared response, such obvious concerns as the possibility that governments could exploit the initiative to interfere in or gather intelligence on foreign elections by virtue of their funding of, and collaborative interaction with, many of the academic labs likely to submit proposals to it.

There is also the question of just how much independence the initiative will have to accept research proposals that could portray Facebook in a negative light. One of the criticisms of Twitter’s historical approach to users of its commercial firehose and decahose products has been that all use of its commercial products had to be preapproved prior to work commencing and that Twitter had absolute authority to reject any use of its data. Under the Facebook initiative, once a project is approved, Facebook will not be afforded prereview or rejection authority, meaning research outputs will be published even if they are highly critical of Facebook’s practices or impact. However, the initiative does provide for Facebook to review projects for security, safety and privacy concerns before they are approved. This leaves open the possibility that the SSRC-appointed independent committee could approve a research project that Facebook then rejects, officially on security or privacy grounds, but in reality because company engineers conducted an initial test of the project’s hypothesis and found that it would generate negative publicity for the company. How would such disputes be resolved, especially in cases where the committee disputed Facebook’s rejection?
SSRC said it would provide a response, but had not done so by publication time.

Underlying the above concerns, there is the critical question of just how the ethical and moral oversight of the new initiative will be enforced. As I’ve written extensively, the current academic model of the Institutional Review Board (IRB) that the new initiative plans to rely upon is deeply flawed. The overwhelming majority of data-driven research today is never subjected to actual ethical review, being exempted by IRBs under public use or other exemptions. At the same time, even institutions with some of the most rigorous and restrictive ethical oversight processes do not appear to enforce them evenly in practice, waiving requirements for privacy reviews and legal oversight or permitting the use of stolen personal data, even while publicly touting the extensive and rigorous nature of their review processes on paper. In short, the paper rules are not always the actual rules. Even top journals in fields that have pioneered modern ethical consent have accepted papers using Facebook data for highly sensitive purposes targeting vulnerable populations without any notification to users or informed consent for the repurposing of their information, firmly entrenching the notion that data-driven research should be evaluated against vastly more relaxed standards than other forms of research and that public understanding of IRB approvals can be very different from the reality of how universities actually implement those ethical review processes when it comes to data analytics.

When asked whether SSRC believed its new initiative would address these concerns, perhaps through the institution of a new kind of ethical review process designed to update the IRB for the digital era and move ethical review back to the forefront of research design, rather than leaving it as a last-stage obstacle to be overcome or waived, the organization once again offered only that everything regarding ethical considerations of research under the initiative was TBD. It did add that “SSRC-appointed review committees will actively engage with technologists, advocates, and ethicists to develop 21st century academic standards for anonymized digital data use.” However, given the sad record of our current IRB model and the strong push in the data-driven social sciences to relax IRB protections even further, coupled with the general consensus among the institutions and researchers I’ve spoken with that the current extremely lax exemption-heavy IRBs still have far too much control, it is unlikely that any outcome will yield greater safety and privacy for users – precisely the opposite.

Notably, every response SSRC offered deferred to the future judgment of its “independent steering committee of scholars.” This small group of individuals apparently will decide the digital fates of two billion people and the right of the world’s democracies to conduct their elections free of external interference. When asked just who would be selecting the membership of the committee and what commitments SSRC had made to ensure demographic, geographic and other diversity, the organization said it would provide comment, but had not done so by publication time. For such an immensely powerful group of individuals that will have such an outsized influence over the privacy and rights of two billion people and the countries they live in, it is remarkable that the organization did not have a statement ready regarding how it would pick those members and their required qualifications.
In short, nearly every element of the new initiative, bearing on some of the most sensitive issues of societal concern today, from foreign interference in elections to loss of control of our data and lack of informed consent or ability to stop what we consider misuse of our most intimate information, is “TBD.” None of these questions are unexpected or relate to obscure arcane issues of little relevance to the initiative. On the contrary, they each focus on the most central and critical issues facing modern society surrounding how our data is used and the ways in which legitimate research can veer into the realm of misuse and manipulation. Putting this all together, for an initiative publicly touted by Facebook as a way to address concerns of its platform being used to influence foreign elections, it is beyond belief that Facebook did not expressly codify in the opening charter statement launching the initiative that active manipulation would be prohibited, that private user information would be banned from consideration and that users would have control and consent over their information. Has Facebook learned nothing from the past few weeks? It was bad enough being Facebook’s lab rats, but now all two billion of us are being made into lab rats for hire for academics all over the world, while those same researchers have now been granted open season to manipulate our elections and digital lives, all in the name of grant funding and publications. For those who were worried about their digital privacy and the sanctity of their democratic processes before, you haven’t begun to see what will happen in Facebook’s new academically-powered Orwellian world.
11f95f11baddf6667318292dce1c95a6
https://www.forbes.com/sites/kalevleetaru/2018/08/09/academics-continue-their-attacks-on-facebooks-new-privacy-rules/
Academics Continue Their Attacks On Facebook's New Privacy Rules
Academics Continue Their Attacks On Facebook's New Privacy Rules Facebook logo. (Jaap Arriens/NurPhoto via Getty Images) As I’ve chronicled over the past few years, it has been remarkable to watch how the ethics rules and privacy protections governing academic research, expanded and strengthened over the decades to protect the public from rogue academic researchers, have been torn down in the space of just a few years as the new “big data” era has proved too tempting for academics to allow something as trivial as ethics or privacy to get in the way of all the incredible ways they can manipulate and exploit the public. It is all the more remarkable how quickly that same academic community that fiercely and steadfastly condemned the commercial world’s data ethics standards suddenly dropped all of its concerns and rushed to adopt those same standards when offered the opportunity to join the commercial world in its exploitation of private personal data without informed consent or the ability to opt out of research. An open letter from the academic and journalism community earlier this week continues the field’s assault on the few remaining obstacles to the unfettered ability to exploit and manipulate the public.

After demanding for years that Facebook adopt more stringent data ethics and privacy protection standards, the academic community rose up in unison to condemn the company when it actually followed their advice earlier this year. In a joint letter this past April, a wide-ranging community of academics fiercely attacked Facebook’s new efforts to protect user privacy, arguing that any attempts to protect user privacy would interfere with their ability to mass harvest personal information without consent and against the will of users. Naturally, the academic community fought back against Facebook’s new technological and legal efforts to restrict mass harvesting of data and covert manipulation of users and has worked hard to develop workarounds to enable its members to continue their work regardless of Facebook's rules.

The community’s latest salvo is an open letter demanding that Facebook formally amend its terms of service to exempt academic researchers from the company’s efforts to protect its users, arguing that academic research takes priority over users’ desires to protect their personal information from being unwillingly harvested or having themselves manipulated against their wishes. The proposed “safe harbor” concept would for all intents and purposes exempt journalists and “research projects” from Facebook’s prohibitions on mass harvesting of user data and the creation of false accounts and posting of false content to the platform. The only restriction is that projects must involve “matters of public concern,” but the definition of what constitutes such projects is left to the researcher or journalist themselves to decide. The letter notes that it offers only a template of what such an exemption might look like, but it is notable how broad an exemption the authors seek, covering both the mass harvesting of personal information and the creation of false accounts and false content to actively manipulate the service. The letter even permits researchers to effectively dox private individuals, so long as the researcher believes that the person has “engaged in serious unlawful activity” (with the researcher being permitted to make their own determination of such, rather than requiring them to receive external verification first).
The proposed amendment would prohibit researchers from selling, licensing or transferring the data they harvest to select entities, including data brokers, but does not prohibit simply giving away the data to other commercial entities, including faculty startups. Commercially funded academic research, in which a private company funds research at a university, including applied research with immediate commercial value, is also permitted under the proposed rules. A Facebook spokesperson declined to comment further beyond pointing to the company’s previous statement that the company had received the letter but would not immediately commit to adopting any of its recommendations. Of course, even with these modified terms of service, would it really matter? Lest we forget, previous Facebook rules that strictly prohibited academic researchers from bulk transferring their legally harvested data to outside commercial entities didn’t stop an academic from shipping his archive of Facebook data to Cambridge Analytica. More to the point, do terms of service even matter anymore and does the academic community even view them as legally or ethically binding? Certainly, the prevailing view in academia has turned against honoring such legal contracts. As a member of Social Science One’s Civic Engagement Committee put it recently “I have articulated the argument that ToS are not, and should not be considered, ironclad rules binding the activities of academic researchers. … I don't think researchers should reasonably be expected to adhere to such conditions, especially at a time when officially sanctioned options for collecting social media data are disappearing left and right.” In short, as social media companies react to the public’s calls for greater privacy by taking steps to deter mass harvesting, academia has responded by simply ignoring those restrictions. This raises the key question: if Facebook adopted the suggested safe harbor exemption recommended in this latest letter, which would permit academics to mass harvest Facebook data in exchange for promising not to resell it, why would they bother complying with the rules against reselling the data they harvest if they don’t view the terms of service as applying to academics? Put another way, the proposed changes to the terms of service would permit academics to legally mass harvest Facebook data so long as they promise not to resell it. However, if prominent academic leaders serving on Social Science One’s own committees argue that those terms of service don’t apply to academics in the first place, then what’s to stop academics from reselling all of the data they harvest if Facebook starts explicitly permitting legal mass harvesting? In the end, it is both remarkable and frightening how fast nearly 40 years of modern research ethics guidelines have been thrown away by the academic community in their rush to cash in on the modern data gold rush. The very academic community that had relentlessly condemned the commercial world for years over its data practices now wholeheartedly embraces those very practices with abandon while condemning the companies’ new attempts to actually provide user privacy protections and enforce ethical standards on data use. It seems that perhaps one of the greatest hidden impacts of the big data era is that it marked the end of academia’s interest in research ethics, informed consent, the ability to opt out of research and the right of ordinary people to privacy.
c3fd0240fed9dc64ed16443705e95175
https://www.forbes.com/sites/kalevleetaru/2018/09/10/why-facebooks-new-user-trustworthiness-scores-are-so-frightening/
Why Facebook's New User Trustworthiness Scores Are So Frightening
Why Facebook's New User Trustworthiness Scores Are So Frightening Last month the Washington Post reported that Facebook is assigning “trustworthiness” scores that rank the reputation of each of its users. Among other applications, the company acknowledged using the scores to determine whether posts flagged by users as false should be forwarded to fact checkers for review or whether the user’s concerns should simply be ignored. In contrast to its public pledges of transparency, the company is keeping its user rankings shrouded in absolute secrecy, acknowledging their existence, but refusing to comment on how they are determined or how they are used and declining to permit external evaluation for concerns of racial, sexual or other demographic and cultural bias. What happens as Facebook’s new ranking system becomes normalized and perhaps even exported as a service for other companies and governments?

Facebook’s new efforts are remarkably similar in scope and focus to China’s social credit system. As with China’s efforts, Facebook seeks to use the vast array of behavioral and other indicators that it has long used to profile users for advertisements, but now to explicitly utilize those profiles to rank and score its users and grant them different privileges and rights depending on those scores. While alluding to other uses, Facebook discussed only the application of its scores to its efforts to combat “fake news.” In its current instantiation, the company acknowledged using the scores to determine whether to ignore actions taken by a user to flag posts they believe to be false. A user with a high trust rating will have their flagged posts and news articles forwarded to third party fact checkers for review, while those with low trust ratings will be ignored. Given the prevalence of “opinion checking” and circular referencing among the major fact checking sites, this creates considerable potential for reinforcement bias. However, it is the applications beyond simple fact checking that yield the greatest concern over Facebook’s new user ratings.

Governments and companies throughout the world have always looked for ways to rate their citizens. Some countries, like China, have invested heavily in creating a state-of-the-art technological dystopia, while others defer to more basic measures such as interactions with the criminal justice system, credit ratings and other basic metrics. It is not hard to imagine Facebook’s new ratings catching the attention of governments across the world. Why build your own massive surveillance and rating system when Facebook is doing all of the work for you and has access to data your government can only dream of acquiring? Facebook has already been exploring the idea of harvesting its massive archive of facial data for its two billion users and performing realtime facial recognition using surveillance cameras to recognize users as they walk about the offline world. It even went a step further and envisioned offering a “trust” score to retail stores that would flag which individuals walking around their stores could be trusted with high value merchandise and which ones security should follow closely. When asked whether the company would commit on the record as a matter of principle never to offer such mass facial recognition as a commercial service to other companies and governments for surveillance, intelligence and military applications, the company declined to do so.
The company also did not deny that foreign governments have used court orders to demand lists of their citizens that the company’s algorithms have determined fall into specific categories, such as being likely homosexual or interested in anti-government topics, including in repressive regimes where such labels could yield the death penalty. As the company offered, the interests of advertisers to precision target individuals outweighs the right of those individuals to be protected from categories that could cause them to be arrested, tortured or killed. Unlike China’s country-specific system that is limited to its own citizens, Facebook has the unique ability to apply its ratings to more than a quarter of the population of earth. It is almost a guarantee that governments will be knocking with court orders to receive those scores for their citizens to augment all of the personality and behavioral indicators they are also likely to request. While companies have long calculated “influencer” and other related scores for social media users, those scores were used primarily by marketers to identify accounts to pitch their products or ad campaigns to. A poor score would yield at most a lack of commercial sponsorship offers. Facebook’s scores, on the other hand, explicitly penalize users, cutting low scoring users off from the rights and privileges afforded others. This raises the critical question of what kinds of hidden biases may be lurking in Facebook’s new ratings. Silicon Valley’s workforce is extremely narrow both demographically and experientially and has been historically blind to its inability to see beyond its own biases. For years the company refused to release any aggregate demographic statistics regarding its content moderators, including how many moderators spoke each language. Instead, the company kept assuring the public and policymakers that it had the necessary number of reviewers for each language and culture and that they should just trust it. When the company’s failure to adequately moderate content in Burma helped fan violence there, the company’s predictable response was that it thought it had plenty of reviewers and that no one could have imagined or predicted that its handful of reviewers wasn’t enough. Similarly, after continually asserting that its Trending Topics module was entirely unbiased and that the company was fully representing the geographic diversity of its user base, when the company was finally forced under immense pressure to release its media source list, unsurprisingly the continent of Africa was almost entirely absent. It wasn’t that Facebook purposely biased its system against Africa, it was that its Trending Topics staff and the management overseeing them were unable to see past their own implicit biases to recognize there was a problem. As with any kind of user “trust” rating, Facebook’s new scoring system is vulnerable to myriad possible biases, especially given the unknowns regarding the full set of signals used to calculate them. The company said that it could not comment on the signals it uses to compute its scores because doing so would allow users to game the system. Yet, that itself raises serious questions about the robustness of Facebook’s system if it can be so easily gamed. 
A system that accurately and holistically assesses a person’s actions across the entire spectrum of their interactions with Facebook’s properties would not be so easily gamed, given that even changing a large number of behaviors would not affect the totality and momentum of their score. Moreover, even if Facebook was legitimately concerned that its algorithms were in fact much weaker than it wanted the public to know and was afraid that they could be so easily gamed, it could at the very least bring in an outside team of experts to evaluate its algorithmic inputs for bias. Such an expert panel would not help malicious actors in any way learn ways to game the system and would offer the public and policymakers at least some basic assurances that the algorithms were not so biased as to directly penalize specific races, genders or cultures. Yet, when asked whether the company would at least commit to such an outside review of its rating algorithm, the company unsurprisingly declined to do so. In the end, we see in Facebook a company in the midst of a transition from a passive data archive that hoovered up and stored everything it could acquire about us into an active profiling services company that is mining all of that data to build profiles of us that can be used for far more than just advertising. From offline mass surveillance facial recognition to “trust” ratings, we see a company that puts even China’s mass surveillance and societal-scale profiling ambitions to shame.
054b641096f68809d86f578e4691a00f
https://www.forbes.com/sites/kalevleetaru/2018/10/09/can-we-finally-stop-terrorists-from-exploiting-social-media/
Can We Finally Stop Terrorists From Exploiting Social Media?
Can We Finally Stop Terrorists From Exploiting Social Media? Photo ilustration. (Jaap Arriens/NurPhoto via Getty Images) Social media has a terrorism problem. From Twitter’s infamous 2015 proclamation that it would never censor a terrorist to Facebook’s long delay in adopting signature-based content blacklisting, social media has become a critical inadvertent ally in helping terror organizations throughout the globe recruit, communicate and promote. The platforms themselves have been slow to respond, initially rejecting calls to remove terrorists from their walled gardens, before reversing and aggressively embracing the idea of purging violent users. However, for all their public discourse, the platforms have taken little concrete action, reflecting both the economic realities that they have little incentive to invest in content moderation and the real-world complication that deleting terrorist content requires understanding context, not blindly deleting any post with a given keyword. Over the past few years the major social platforms have rapidly evolved from defending free speech at all costs towards recognizing that the public and policymakers do not take kindly to their systems being used to help encourage, support and direct violence. Towards that end, most platforms have centralized on a two-pronged approach to countering terroristic use of their tools: human review and automated blacklisting of content that has been previously deleted by a human reviewer. Platforms are also experimenting with a third category of automated scanning that can, in theory, flag novel posts that are likely encouraging or promoting terrorism and route those for human review, but for the moment those are largely limited to a few platforms and focused nearly exclusively on textual content. Human review is the gold standard of content moderation, but has many drawbacks, chief among them the very limited scalability of using even tens of thousands of humans to attempt to moderate billions of daily posts. The difficulty of staffing sufficiently large teams of moderators that represent all of the languages and cultures using social platforms makes comprehensive human moderation an all but impossible task. As Facebook discovered in Burma, a handful of moderators cannot begin to cope with the firehose of content from any given location on earth. To effectively understand the context and symbolism of the content they are reviewing, moderators must also have strong cultural roots in the geographic areas they are assigned. Assigning an American who has never left the country and learned basic comprehensibility of a language in school to be a moderator for that country’s posts has little chance of success. There is also a very real psychological cost in using humans to review horrific content. Historically, most companies relied on a post-review takedown approach, meaning that if 1,000 people all upload the same terrorist propaganda image, all 1,000 copies of the image would have to be individually reviewed by different moderators and deleted one by one. The same user could then simply reupload the image seconds later and it would have to be flagged and reviewed once again. When asked previously why Facebook did not use signature-based blacklisting to prevent reuploads of content deleted by moderators, the company would simply decline to comment. 
Growing public and governmental pressure finally forced the major social platforms to take concrete action, leading to a shared initiative in which the companies agreed to jointly implement signature-based blacklisting and share a joint list of image signatures. Under this model, when a moderator deletes an image, a unique signature or “digital hash” of the image is entered into a central database and used to prevent the image from being reuploaded in the future both on that social platform and on any others that are members of the consortium. Signature-based content removal has now become the de facto standard in Silicon Valley for removing terrorist content. On paper, this kind of blind content blacklisting offers a computationally cheap and trivial solution that removes a considerable cross-section of terroristic content from circulation. Most importantly, it allows tech companies to argue that they are taking real steps towards curbing terroristic use of their systems. Unfortunately, blind signature-based content removal is more public relations ploy than effective counter-terrorism tool and in fact can do more harm than good in restricting public discourse about terrorism, from news media coverage of attacks to victims documenting the atrocities they endured to civil society groups launching countering violent extremism (CVE) initiatives. Tech companies can rightfully claim they are preventing millions of reuploads of previously identified terrorist content, while sidestepping the far more important question of how to identify all of the new content being generated each day. For its part, Facebook now claims that 99.5% of the terrorist content it removes is caught through this signature-based removal and its other automated filters, reflecting that the majority of its success has come from removing preexisting known content, rather than new content being created every day. The signature database underlying these efforts is almost exclusively focused on ISIS and Al Qaeda content and consists of fewer than a hundred thousand pieces of content, a portion of it duplicate copies that have been slightly modified. Most importantly, signature-based blacklists fail to take into account the context in which a given post is shared. Imagine a new ISIS image that depicts the use of GPS-controlled drones to drop modified grenades without human intervention. The image would likely be initially shared by ISIS sympathizer accounts lauding the weapons’ lethality and thus flagged by human content moderators and added to the signature database. Subsequently, news outlets might use the image to discuss ISIS’ technological evolution and ways the weapons could be subverted. Victims of the drone attacks might use the images to illustrate what happened to them. Civil society groups might use the images to condemn ISIS’ barbarity. Yet, all of these secondary uses would likely be blocked by blind signature blacklists that simply remove every subsequent use of the image, regardless of purpose. Context is everything when it comes to the meaning of a given piece of content. Signature-based approaches cannot distinguish between a post lauding a terror attack and one condemning it. Yet, Facebook has previously clarified that it is relying nearly exclusively on signature-based removal for the imagery, video and audio content that forms much of the propaganda output of today’s social-media-savvy terrorist organizations. 
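To make the mechanics concrete, here is a minimal sketch of how signature-based blacklisting works in principle. The consortium's actual signature scheme is not public and is generally described as relying on perceptual hashes that survive re-encoding and minor edits; a plain SHA-256 digest stands in for it here purely for illustration.

```python
import hashlib

# Shared signature database: once one copy of an image is removed by a human
# reviewer, every byte-identical re-upload is blocked on every member platform.
blacklist = set()

def signature(image_bytes: bytes) -> str:
    """Compute a content signature ("digital hash") for an uploaded image.
    Real systems use perceptual hashes; SHA-256 is used here for illustration."""
    return hashlib.sha256(image_bytes).hexdigest()

def moderator_deletes(image_bytes: bytes) -> None:
    """A human reviewer removes an image; its signature joins the shared database."""
    blacklist.add(signature(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose signature matches previously deleted content,
    with no look at who is posting it or why."""
    return signature(image_bytes) not in blacklist

propaganda_image = b"...previously identified propaganda image bytes..."
moderator_deletes(propaganda_image)
print(allow_upload(propaganda_image))  # False, whether posted by a sympathizer,
                                       # a news outlet or a victim documenting abuse
```

The context blindness the article describes is visible in the last line: the check knows nothing about the purpose of the upload, only that the bytes match.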
While Facebook has offered that it is using machine learning approaches to attempt to flag novel textual content, it has declined to comment on the accuracy of those tools, especially their false positive rates, and has declined to commit to allowing external review of their accuracy. This has created a landscape in which much of the counter-terrorism work of the major social media platforms is based on simply preventing users from reuploading a small database of previously identified image and video content, along with machine learning experiments of unknown accuracy and efficacy. Signature-based removal traces its roots to the automated systems designed originally to flag unauthorized reproduction of copyrighted content and the removal of child pornography. In the latter case, there are no legal contexts under which such content can be shared for any purpose, including condemnation, making it easy to use context-free signature removal. Things are far more complex when it comes to subjects like terrorism, where it is not the content itself that is illegal or objectionable, but rather the context in which it is used. Instead of blindly blocking all uploads of an image, platforms must look at the context of each of those posts. From my own experience processing half a billion images through Google’s Cloud Vision API and testing its Cloud Speech API’s ability to generate usable audio transcripts of ISIS videos with multiple users screaming in Arabic over gunfire and explosions, we have the tools today to sufficiently annotate novel imagery and video content to flag material that contains depictions of violence and to generate machine-readable categorizations and transcripts that can be used to identify terrorism-related content. Tools like Google’s Vision, Speech and Video APIs and customized AutoML models can take an image and recognize the presence of terrorist organization logos, OCR text in 50+ languages to render any subtitles, textual overlays and background text searchable, identify the specific make and model of weapons in the scene, recognize specific uniforms and insignia affiliated with terror groups, convert audio narratives into searchable transcripts and even identify the presence of violence, catching a small group of blood droplets in the corner or distinguishing a gun sitting on a table from one being pointed at a person. Such tools are completely automated and extremely efficient, able to process an image in a fraction of a second and scale to billions of pieces of content a day. Moreover, Google’s Vision API is able to perform a reverse Google Images search on each image, identifying all of the locations it has seen the image on the open web in the past, along with the captions used in each of those cases. This means that even if an innocuous image is uploaded that does not on the surface appear to depict anything terrorism-related, Vision API can flag that when the image has appeared elsewhere on the web it has always been described using captions that mention ISIS and terrorism. Thus, an ISIS recruiting video that features smiling people, but which never actually mentions the group by name, can be readily identified entirely automatically. Similarly, Google’s Vision API is able to recognize similar images, including images from which sections have been cropped to generate the image in question. Such tools can be readily expanded using image presegmentation to identify novel image composites built from cropped sections of previous images. 
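As a rough sketch of what such an annotation pass can look like, the snippet below strings together several of the Vision API features mentioned above using the google-cloud-vision Python client. The detection calls reflect the public client library (exact class and field names vary somewhat between library versions), while the idea of bundling the results for reviewer routing is an illustrative assumption, not any platform's actual moderation pipeline.

```python
# pip install google-cloud-vision  (requires Google Cloud credentials)
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def annotate(image_bytes: bytes) -> dict:
    """Collect logo, OCR, SafeSearch and web-context annotations for one image."""
    image = vision.Image(content=image_bytes)

    logos = client.logo_detection(image=image).logo_annotations
    texts = client.text_detection(image=image).text_annotations
    safe = client.safe_search_detection(image=image).safe_search_annotation
    web = client.web_detection(image=image).web_detection

    return {
        # Organization logos detected in the frame (e.g. a group's flag or emblem).
        "logos": [logo.description for logo in logos],
        # Full OCR text: subtitles, overlays and background text.
        "ocr_text": texts[0].description if texts else "",
        # Coarse violence likelihood from SafeSearch (an enum from VERY_UNLIKELY
        # to VERY_LIKELY).
        "violence_likelihood": safe.violence,
        # Captions and pages where the same image has appeared on the open web.
        "web_guesses": [label.label for label in web.best_guess_labels],
        "seen_on_pages": [page.url for page in web.pages_with_matching_images],
    }
```

A routing rule built on top of this might send anything whose web context or OCR text mentions a known group, or whose violence likelihood is high, to a human reviewer rather than deleting it outright.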
Such composites will not typically be recognized by today’s signature tools, even if each source image is in the database. Recognizing that an image contains an ISIS flag or that a video mentions joining ISIS in the narrative allows tools to move far beyond current signature-based approaches towards recognizing novel content, but still does not solve the context problem. Whitelisting recognized news outlets could help with the problem of an evening news broadcast including a clip of an ISIS video or a news article that includes an ISIS image as an illustration, but does not address the broader problem of how to separate posts that laud violence from those that condemn it. One promising approach is to combine statistical sentiment assessments with neural categorization models to yield a hybrid score for each piece of content that assesses both its focus on the violence or recruitment elements of terrorism and whether it uses a voice that appears to promote rather than condemn or clinically document. Again, the ultimate arbiter should always be a human with a deep cultural background and understanding, but such tools could help flag large volumes of novel content that today remains beyond the reach of current approaches. Of course, even human reviewers do not solve the problem of deciding what constitutes “terroristic” speech. While most countries recognize ISIS and Al Qaeda as terrorist organizations, looking globally the problem is far more complex. Many independence groups have contested classifications, recognized by some nations as freedom fighters that are actively supported with funding, weapons and training, while targeted by others as violent terrorist organizations subject to sanctions and covert operations against their leaders. From personal experience running human classification projects in the past, getting a room full of subject experts to agree on whether a given post represents “terrorism” or a “human rights abuse” or “torture” can be nearly impossible, much as it might seem to the public and policymakers that such labels should be fairly straightforward to apply. Today it is the US that decides which groups are terrorist groups and thus should be removed, but will other countries demand a say in the future? For example, should all posts relating to Hamas be deleted globally from all social platforms due to its classification as a terror organization by some countries? Repressive regimes throughout the world are also increasingly exploring the use of counter-terrorism legislation to classify government criticism as terroristic speech, which they could then use legal mechanisms to compel the major social platforms to remove, much as the DMCA has been misused in the past as a tool for stifling criticism. Unfortunately, one of the reasons that Silicon Valley has so quickly latched onto signature-based blacklisting is that it is ill-equipped to take more serious steps towards more robust content removal. The data sciences groups at most companies tend to draw heavily from a handful of countries, operate primarily in English and have little experience with the languages and cultures they are tasked with examining. I’ve seen counter-terrorism data science groups at major social platforms that were composed nearly exclusively of English-speaking Americans with strong statistical and programming backgrounds, but nary a single field-experienced counterterrorism expert nor even a single person who can actually read a word of Arabic. 
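A minimal sketch of that hybrid scoring idea follows. The two underlying models (one for topical relevance, one for promotional stance) are assumed to exist and are passed in as plain callables; the combination rule is an illustration of the approach, not a production system, and the final decision would still belong to a culturally fluent human reviewer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridScore:
    topic_relevance: float   # 0..1: how strongly the post is about violence/recruitment
    promotion_stance: float  # -1 (condemns or documents) .. +1 (lauds or promotes)
    review_priority: float   # combined score used only to order the human review queue

def score_post(text: str,
               topic_model: Callable[[str], float],
               stance_model: Callable[[str], float]) -> HybridScore:
    """Combine a categorization score with a stance/sentiment score so that
    only posts that are both on-topic and promotional rise to the top of the
    review queue; condemnation and clinical news coverage score low."""
    topic = topic_model(text)
    stance = stance_model(text)
    priority = topic * max(stance, 0.0)
    return HybridScore(topic, stance, priority)
```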
Validating models often consists of testing how well they fit against existing training datasets, with machine translation and limited outside SME experts brought in for spot checks. Even those groups lucky enough to have Arabic speakers on staff tend to draw from non-native speakers who learned Arabic in school, rather than recognizing that the Arabic speaking world is an incredibly diverse and culturally rich place. Just as one would not grab a random American and say they can speak for the views of the entire US, Silicon Valley must recognize that the web does not consist of “English speakers” and “the rest of the world.” Often, when meeting the data scientists assigned to build counter-terrorism tools, the first question one asks is how on earth their company expects such a group to meaningfully contribute to removing terroristic speech given their utter lack of qualifications. There is also little interaction at the data sciences level with the governmental field personnel who are on the front lines of identifying and analyzing terroristic speech and which can provide the most valuable insights into symbology and narratives and the trends they are observing. Cultural immersion is absolutely critical to understanding whether an innocent-seeming post actually says far more than it seems at first glance. Much as understanding Iranian political discourse years ago required not only an understanding of Farsi and advanced knowledge of Iranian poetry, but also a deep cultural immersion and understanding of the deeper meaning of each line, so too does effectively identifying and removing terroristic speech require more than an American who learned a few words of Arabic in school and has never left the US. Looking globally, ISIS and Al Qaeda are far from the most impactful terror groups in many parts of the world. Combatting terrorism speech globally requires working across nearly every language in use on the social platforms and deep cultural expertise spanning the globe. It requires studying all recognized terror groups and building libraries of their symbology and narratives across all of those languages. Instead of relying on a small central ISIS and Al Qaeda-focused database of images curated by the major social platforms, what if we created a new globally cooperative database of content contributed by governments, NGOs, civil society groups, academic and independent researchers and even ordinary civilians, documenting the visual, audio (such as music) and textual narratives used by each group and evolving patterns in their material? Such a database would pose unique oversight and curation concerns, but even if initially limited to major NGOs, researchers and civil society groups, could go a long way towards internationalizing terrorist content removal across terror groups, languages and narratives. It could also help expand filtering beyond public posts towards the vast archives of content the groups share through alternative channels. Much of the focus of current efforts has been on the removal of content from social media platforms. The question of encrypted communication channels has largely been relegated to the encryption debate and whether companies should be forced to provide backdoors to their products for law enforcement and intelligence use. 
Yet, the growing ability of even low-power mobile devices to run complex deep learning models entirely on-device raises the possibility of blacklisting terrorist speech even in encrypted communications, by blocking it from being shared in the first place. One could imagine common encrypted communications tools building in basic filtering models that examine every message or file and prevent those relating to terrorism from being shared at all. Of course, bad actors will always find a way around such safeguards, but the more difficult you make it for terrorists or criminals to use such platforms, the more risk you introduce to their communications and the more they have to codeswitch between platforms and codes, making it more difficult to effectively conduct operational planning and recruitment over secure channels. Of course, self-censoring communications channels are a repressive government’s dream and asking major chat apps to build in models that scan all communications and block those deemed unacceptable would almost immediately bring calls from governments to extend the models to block calls to action for legal protesting or criticism of government. Similarly, as we get better about removing terrorist content and expand beyond ISIS and Al Qaeda, critical questions are raised about who decides which groups to silence and which to allow to speak. An even bigger question is whether we should be silencing terrorists at all. Allowing them to communicate in the open, while targeting their more secure channels, yields a greater exposure surface through which to observe their activities, influencers and narratives. Pushing this activity underground makes it more difficult to track. Moreover, as China has taught us, it is often far more effective to encourage self-censorship by drowning out speech than by playing whack-a-mole deleting it. Perhaps most effective of all is to take a page from the world of counterintelligence, creating a “wilderness of mirrors” in which terrorists and their followers and would-be recruits are no longer able to know who or what to trust, disrupting their information environment at a far deeper level that has lasting impacts on the ability of the organization to harness the digital world. In the end, there is little incentive for Silicon Valley to invest in removing terroristic speech. Such content comprises only a microscopically small fraction of the total volume of billions of social posts per day, while the economic costs of hiring enough human reviewers and building enough machine learning models to filter those billions of posts to find the small number of terrorist posts are staggering. Short of government regulation there is little reason for the companies to do more than public relations ploys like signature-based filtering. Fundamentally rethinking social media CVE, expanding beyond ISIS and Al Qaeda, looking across all the world’s languages and cultures and, most importantly, evaluating the context of each post, is simply not cost effective enough for the companies to invest the necessary resources in today. After all, for companies that can recognize a single person’s face out of billions in an instant, record every link we click and every post we look at, run ad auctions involving millions of variables billions of times a day and create digital dossiers that know us better than we know ourselves, it sure seems they could do a whole lot more than merely block reposts of a few tens of thousands of previously identified images and videos. 
Putting this all together, we have the tools today to do far more in countering terroristic use of social media and the broader web, especially harnessing deep learning approaches to identify novel content and its context both in a given post and across the web itself. The question is when will Silicon Valley finally decide to bring its immense capabilities to bear on its terrorism problem.
f5e2ba2bb18732ede723f5c806e93437
https://www.forbes.com/sites/kalevleetaru/2019/02/03/facebooks-continued-growth-reminds-us-its-now-too-big-to-deletefacebook/
Facebook's Continued Growth Reminds Us: It's Now Too Big To #DeleteFacebook
Facebook's Continued Growth Reminds Us: It's Now Too Big To #DeleteFacebook Facebook logo. (Jaap Arriens/NurPhoto via Getty Images) Getty As Facebook suffered privacy scandal after privacy scandal last year, press and pundits across the world prognosticated the company’s rapid downfall, its days numbered as the #DeleteFacebook campaign would result in the whole world abandoning the platform over privacy fears. In contrast, others like myself argued that Facebook has become so integrated into our lives, so intertwined with how we keep in touch, follow the news, get business and governmental updates and conduct our lives, that it has passed the point of no return: we simply cannot leave it no matter how much we would like. In turn, Facebook has taught us to rationalize that our digital privacy and safety are obsolete notions. As the company’s earnings for the end of 2018 remind us, it seems the company’s messaging is working and we simply no longer care about privacy. From its roots as what amounted to a college dating site into today’s 2-billion-user behemoth, Facebook has grown over the past decade and a half into an indispensable part of our lives. Even governments themselves increasingly use Facebook as a primary communications mechanism both to publish policy and operational information for their citizens and to hear back from those they represent. In some countries Facebook has become the internet itself, its walled gardens effectively defining the limits of access. As Facebook’s grip on our lives has tightened and it reaches ever more deeply and intimately into our online and offline selves, the company has become immensely profitable by monetizing our behaviors for advertising, selling access to advertisers and developers. Security and safety took a backseat to relentless growth, the company more concerned about maximizing access to user data than ensuring that developers did not misuse the nearly limitless access they enjoyed. As privacy breach after privacy breach was followed by security issue after security issue in 2018, the company weathered a never-ending stream of negative press. Yet, as Google Trends shows, even at its peak, US-based search volume about Cambridge Analytica last year was just slightly higher than September 2015 searches about Facebook privacy revolving around the rights of EU citizens and a fake copy-paste hoax. In fact, web searches about Facebook privacy have been trending steadily downwards since mid-2011, mirroring an overall decrease in search interest about the company as it has reached a saturation point. It has been just under a decade since Mark Zuckerberg famously proclaimed that privacy was no longer a “social norm” and that as a society we “have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people.” It seems the young Facebook CEO’s words could not have been more prophetic, as society seems to increasingly accept that the concept of privacy and the right to keep one’s most intimate information private are relics of the pre-digital era. It is a remarkable commentary on the state of society that the world’s reaction to the Cambridge Analytica story was a collective yawn. For all of the table pounding policymakers, protesting press and prognosticating pundits, after a brief flurry of news coverage, life went back to normal. There were no mass defections from Facebook, no noticeable boycotts, no new legislation, no corporate blacklisting, no public renouncements, no corporate bankruptcy. 
There was simply nothing. Within a month of the Cambridge Analytica story breaking, the story was nearly through and by two months after, it was all over. Worldwide media coverage mentioning both Facebook and “privacy” is back to its pre-Cambridge Analytica level, showing that despite nearly a year of privacy scandals, we’re back to where we were before. Even the media haven’t begun paying more attention to privacy. Most importantly, as Facebook’s earnings show us, the company’s profitability continues to climb and it continues to add users. Rather than the predicted user hemorrhage, Facebook has actually been steadily adding users after a year of privacy scandals. Where does this leave us? Putting all of this together, a year’s worth of scandals has yielded merely silence. That the media and public have moved on tells us that Facebook has succeeded in normalizing the idea of a privacy-free web. It tells us that as Mark Zuckerberg assured us 9 years ago, privacy is truly dead. It tells us that Facebook has become so integral to the very fabric of society that it can survive even the most catastrophic privacy scandals with ease. It tells us that Facebook is now so important to our lives that even an entire year of privacy failures and security breaches and having our data stolen outright from the platform by hackers isn’t enough to make us #DeleteFacebook. It tells us that Facebook’s ability to weather all of this without so much as a single new piece of legislation or economic consequence means it really has no reason to give privacy even the most fleeting of thoughts moving forward. After all, if all of 2018’s catastrophes combined yielded no harm, why should it worry at all about privacy in 2019? Most importantly, it tells us that we simply don’t care about our privacy anymore. In the end, now that Facebook has learned that all of the events of 2018 combined did not yield a single shred of lasting damage beyond a few weeks of bad press, it is likely that the company will move aggressively forward in the coming years with ever more intrusive applications, its hands now fully unshackled from any worries about bad privacy press. Facebook is now so important that we no longer have a choice to leave it or ask it to change its ways. In the end, in retrospect, 2018 wasn’t the Year Privacy Prevailed. It was the Year Privacy Died.
b0a99454773fd516f7c5e4c7601dd150
https://www.forbes.com/sites/kalevleetaru/2019/02/16/social-media-analytics-is-a-disaster-why-cant-we-fix-it/
Social Media Analytics Is A Disaster: Why Can't We Fix It?
Social Media Analytics Is A Disaster: Why Can't We Fix It? It is a truly sobering thought that the totality of our understanding of social media and the insights we draw from it are based on data, algorithms and analytics platforms we know absolutely nothing about. Social media platforms have created a reality distortion field that promotes them as the very definition of “big data” while in reality the entire archive of every link shared on Facebook and every tweet ever sent is vastly smaller than we might ever have imagined. In fact, just a fraction of our daily journalistic output is almost as large as many of the datasets we work with. In turn, the social media analytics platforms we turn to to make sense of social media are black boxes from which we blindly report without ever asking whether any of the trends they give us are real. Is it time to just give up on social media analytics? One of the most dumbfounding aspects of the social media analytics industry is just how little visibility there is into how any of these platforms work. Customers plot volume timelines, chart sentiment, compile author and link histograms, map user clusters, identify influencers and drill into demographics, all without having the slightest insight into whether any of those results are real. Over the past year I’ve had the misfortune of having to use the results of a number of social media analytics platforms and my experiences have been truly eye opening regarding just how bad the social media analytics space has become. Even some of the biggest players make heavy use of sampling, use algorithms that have been widely demonstrated not to work accurately on tweets, apply sampling even in places their documentation states explicitly are not sampled or make incorrect claims regarding the accuracy of the algorithms, data and methodologies they use. Most concerning, few platforms are up front regarding the consequences of their myriad methodological and algorithmic decisions on the findings that their customers draw from their tools. Their slick web interfaces make no mention that results are sampled or that there was a breaking change in a key algorithm that will cause a massive change in results. In some cases, both their interfaces and documentation explicitly state that results are not sampled, but after being confronted with incontrovertible evidence, a platform will quietly acknowledge that they do actually extrapolate results and thus results may be significantly off or even entirely wrong for certain displays. In one analysis I used a major social analytics platform in an attempt to look at the popularity of a specific topic that uses a shared English language hashtag across all languages. The major platform I used makes it trivial to filter by language and assures its users that it uses state of the art language detection algorithms purpose built for Twitter. The resulting linguistic timeline captured a number of fascinating results that were both noteworthy and transformative for how to think about communicating that topic to the public in terms of the languages it was now attracting attention in and the languages that no longer prominently mentioned the topic. Adding confidence to the results was the fact that similar trends had been reported by the data science groups at several companies and governmental agencies I had spoken with previously that had used the same platform for other topics. A random spot-check of a few hundred tweets checked out as well. 
However, it is worth noting that spot checks are of limited utility for verifying social media trends, since they make it easy to check for false positives, but platforms provide few tools to systematically and statistically validate false negative rates. In other words, it is easy to see whether the returned tweets are not correct but not easy to verify how many tweets that should have been returned were incorrectly missed by the algorithm. I grew concerned when the trend curves reported by the tool did not match other sources of information. For example, a number of the language curves showed a steep decline in tweets about the topic in a particular language just as news and NGO reporting at the time suggested tweeting about the topic in that language was sharply increasing or vice versa. Nowhere in the company’s slick and user-friendly dashboard was there any mention or warning that there had been any data or algorithmic changes. Making matters worse, the languages all had very different trend curves, meaning it wasn’t an obvious case of the company swapping out their language detection algorithm for a different one on one particular date. A rapid search of their documentation and help materials didn’t turn up any obvious notices either. Finally, at long last, after skimming almost every page of their documentation, I stumbled across a brief one-line mention buried deep within their help archives that they had originally assumed that they could just use the language setting in a user’s Twitter app and assign it as the estimated language of all of that user's tweets. After belatedly realizing several years later that this doesn’t work well on Twitter, the company finally decided to use a language detection algorithm. However, due to the high computational requirements of language detection, they decided not to go back and reprocess all of the historical material with their algorithm. Investigating further, it appears that some languages had a better alignment between tweet language and user language setting and thus the switch to algorithmic language detection had less of an impact. However, other languages did not experience massive volume changes until long after the documentation states the company had implemented their language detector. Unsurprisingly, the company was unwilling to offer much detail as to why this might be, other than noting that they had repeatedly upgraded their algorithm over time and that they do not go back and reprocess past tweets when they make algorithmic changes. From an analytic standpoint, knowing that the platform’s language metadata has changed repeatedly in breaking ways and without any documentation of those change points means that for all intents and purposes those filters are not usable for longitudinal analysis because users cannot differentiate between a genuine change and an algorithmic change. This is actually a common characteristic of many platforms. As companies upgrade their algorithms over time, not all companies go back and apply the updated algorithm to their entire historical backfile with the option for users to use either the original results (for backwards compatibility with past analyses) or the results from the new algorithm. Instead, users are left wondering whether any given result is an actual finding or merely an algorithmic artifact. One platform’s tweet mapping capability looked extremely useful to identify geographic clusters of interest in my topic. 
However, when the resulting maps looked extremely off and I started reviewing tweets it had assigned to each country and city, I realized the company was making a lot of assumptions about the geography of Twitter that my own work back in 2012 showed did not hold for the social network. Looking to the imputed demographics many platforms offer, the results are often comically nonsensical. One platform provided nearly the same demographic breakdown for every search I ran, always returning that the vast majority of Twitter users over the past decade were in their 60s and that Twitter had almost no users in their 20s or 30s. Another platform suggested that Twitter over the last five years has consisted almost exclusively of high schoolers and 40-year-olds, with no-one in between. Gender breakdowns also varied heavily by platform for the same searches, from more than 70% male to more than 70% female. Given the total absence of any documentation of how platforms compute these demographics and the wildly different estimates provided by different platforms, I ended up ultimately excluding from my analyses everything other than simple counts of how many tweets per day matched each of my searches. Unfortunately, these estimated demographic fields are highly reported insights used by many companies and policymakers to drive real decisions around communications strategy and policy. They might be better served by random guessing. It was a truly eye-opening experience to see just how wildly varied the results from different platforms can be for the same searches. This suggests that much of the insight we receive from social media analytics platforms may depend more on the algorithmic artifacts of that platform than the genuine trends of Twitter behavior. Perhaps most serious, however, is the way in which many companies base their results on statistical sampling. The entire point of using a professional social media analytics platform is to get away from using the sampled data of the 1% and Decahose streams and to instead search the full firehose directly. However, the trillion-post size of Twitter means that many social media analytics companies, including some of the largest platforms, rely on sampling to generate their results. Many platforms rely on sampling for their geographic, demographic and histogram displays. Some prominently display warnings at the top of sampled results indicating that the results were sampled and reporting the sample size used. Some even permit the user to increase the sample size slightly for more accurate results. It turns out that some platforms use sampling even for their most basic volume timelines. One major product clearly states on its volume timeline that the results represent absolute precise counts and do not use any form of sampling. Clicking on the help tab again notes that no results on that page are sampled in any way. Both the documentation page and help page for the volume display also explicitly state that results represent absolute counts and are not sampled. However, after noticing that adding a Boolean “AND” operator to a query could return higher result counts than the same query without the AND operator, which should be impossible (a query narrowed with an AND operator must return either the same number of results as the original query or fewer), I reached out to the company’s customer service. 
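That sanity check is easy to automate. The sketch below assumes only some function that returns a platform's reported volume count for a query string; the function and the query syntax are hypothetical stand-ins for whatever API a given product exposes.

```python
def check_and_monotonicity(count_results, base_query: str, extra_term: str) -> bool:
    """Verify that narrowing a query with AND never increases the reported count.
    `count_results` is a stand-in for a platform's volume endpoint."""
    base = count_results(base_query)
    narrowed = count_results(f"({base_query}) AND {extra_term}")
    if narrowed > base:
        print(f"Inconsistent counts: {narrowed} > {base}. "
              "The platform is almost certainly estimating, not counting.")
        return False
    return True
```

Running a check like this across a handful of queries and added terms is enough to expose silent sampling in a display that claims to report absolute counts.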
The company’s technical support specialists kept pointing me back to the documentation that stated that volume counts are not estimated and repeatedly assured me that volume counts represented precise absolute counts and that they are not sampled in any way. After dozens of emails back and forth where I repeatedly asked how I could be seeing these incorrect results if their volume counts were not estimated, I was ultimately escalated to senior management where the company ultimately conceded that it does actually quietly estimate its volume counts in certain cases because it would be too computationally expensive for the company to report precise volume counts. When I asked why the company felt it was acceptable to claim in its user interface, documentation and help pages that results are not estimated, to append notices to their graphs that results are not estimated and to have their customer service staff assure customers that results are not estimated, given that they are in reality estimating results, the company did not have an answer other than to protest that it would require more computing power to report absolute numbers. As a data scientist, the idea that an analytics platform would explicitly state that results represent precise absolute counts, but then secretly use only estimated results, is simply beyond any belief. Yet, it was the company’s response when I raised concerns about this practice that really best summarize the state of social media analytics today. Rather than view its secret use of estimated results as a serious breach of trust or a severe problem for those attempting to compare different searches, the company’s glib response was that this is simply what people expect from social media analytics. That social media datasets are so large that it would be simply impractical and overly costly for any company to offer “real” results. More tellingly, the company argued that users simply don’t care. They noted that the vast majority of their users were marketers and communications staffers who just wanted to create quick reports that showed some basic information regarding how their search is being communicated on social media and thus accuracy isn’t of any importance. After all, no-one makes life or death decisions based on the results they get from a social media platform, so the thinking went. At every turn, rather than view these as extremely serious methodological issues, the companies I spoke with dismissed my concerns outright, arguing that users of social analytics platforms aren’t concerned about accuracy and accept that the results they receive will be haphazard and potentially more wrong than right. That users don’t come to analytics platforms for accuracy, they come for pretty graphs and ease of use, even if those pretty and easy to create graphs don’t bear the most remote resemblance to reality. That users like myself that need to have at least some trust in the accuracy of our graphs should not be using analytics platforms and should instead be working directly with the raw commercial Twitter data streams. If that is the case, what’s the point of using analytics platforms at all? Putting this all together, for all their slick interfaces and hyperbolic marketing materials touting precision analytics, the sad reality is that many of the social media analytics platforms out there today yield results that are questionable at best and outright comical at worst. 
In the end, instead of focusing on packing every imaginable feature in their systems and believing accuracy comes secondary to speed, for social media analytics platforms to mature, they need to focus more on putting accuracy first, even if that means spending a bit more on their computing infrastructures and reducing the number of features they offer. After all, a pretty graph isn’t worth much if the story it tells is completely wrong.
426363fd906eb783ddaa5e2eb1905dd3
https://www.forbes.com/sites/kalevleetaru/2019/02/17/the-big-data-revolution-will-be-sampled-how-big-data-has-come-to-mean-small-sampled-data/
The Big Data Revolution Will Be Sampled: How 'Big Data' Has Come To Mean 'Small Sampled Data'
The Big Data Revolution Will Be Sampled: How 'Big Data' Has Come To Mean 'Small Sampled Data' One of the great ironies of the “big data” revolution is the way in which so much of the insight we draw from these massive datasets actually comes from small samples not much larger than the datasets we have always used. A social media analysis might begin with a trillion tweets, use a keyword search to reduce that number to a hundred million tweets and then use a random sample of just 1,000 tweets to generate the final result presented to the user. As our datasets get ever larger, the algorithms and computing environments we use to analyze them have not grown accordingly, leaving our results to be less and less representative even as we have more and more data at our fingertips. What does this mean for the future of “big data?” Stepping back from all of the hype and hyperbole, there is considerable truth to the statement that we live in an era in which data is valued sufficiently that we believe it worth the time and expense to collect, store and analyze it at scales that significantly exceed those of the past. It is just as true that many of our beliefs regarding the size of the datasets we use are absolutely wrong. In particular, many of the vanguards of the big data era that we hold up as benchmarks of just what it means to work with “big data” like Facebook and Twitter, are actually vastly smaller than we have been led to believe. In many ways, much of the size of the “big data” revolution exists only in our imaginations, aided by the reality distortion field of the big web companies that tout their enormous scale without actually releasing the hard numbers that might lead us to call those claims into question. Most troublesome of all, however, is the way in which we analyze the data we have. Computing power continues to increase and in today’s world of GPUs, TPUs, FPGA’s and all other manner of accelerators, we have no shortage of hardware on which to run our analyses. The problem is that despite nearly unfathomable amounts of hardware humming away in the data centers of the big cloud companies, we are still just as hardware constrained as we have always been. We may have vast amounts of hardware, but the datasets we wish to analyze are even larger. It is a truth of the modern computing era that it is far cheaper to collect and store data than it is to analyze it. In many ways this is an ironic reversal from even just a decade ago in which many large scientific computing codes would recompute intermediate results from scratch rather than store them to disk due to it being faster to compute the result again than load it from a large direct attached HPC storage fabric. Where CPU once outpaced disk, today it is the opposite. Storage is now so cheap that keeping multiple copies of a petabyte across multiple geographically distributed datacenters for maximum redundancy costs less than $10,000 a month in the cloud. Storing a single copy in an on-premises JBOD costs less than $25,000 worth of NAS-grade drives and prices continue to fall. The ratio between the size of our data and the computing power we have to process that data isn’t remarkably changed. My IBM PS/2 Model 55SX desktop in 1990 had a 16Mhz CPU, 2MB of RAM and a 30MB hard drive. That’s roughly one Hz of processor power for every 1.87 bytes of hard drive space. Fast forward almost 30 years to 2019. 
A typical commercial cloud VM “core” is a hyperthread of a 2.5GHz to 3.5GHz (turbo) physical processor core with 2GB-14GB RAM per core and effectively infinite storage via cloud storage, though local disk might be limited to 64TB or less. Using the same CPU-to-disk ratio as 1990, that 3.5GHz turbo hyperthread would be paired to 6.6GB of disk (actually, 3.3GB of disk, taking into account the fact that a hyperthread is really only half of a physical core). Such a comparison isn’t quite fair given the capabilities of newer CPUs and the vastly improved speeds of modern disk systems, but it is worth pointing out just how far the ratio between CPU power and disk storage has slipped over the past three decades. Even when it comes to data transfer rates, little has changed in those 30 years. The disk transfer rate on the 1989 IBM PS/2 Model 55SX was rated at 7.5Mbit/s (around 0.94MB/s). Dividing the CPU’s 16MHz clock speed by that 0.94MB/s works out to around 17 CPU cycles for every byte per second of disk transfer. Fast forward to today and a typical cloud VM might max out at around 180MB/s maximum sustained read from standard persistent disk. Dividing a 3.5GHz turbo hyperthread by 180MB/s works out to around 19 cycles for every byte per second, roughly the same disk transfer rate relative to CPU speed almost 30 years later. Of course, there are many other factors that influence the speed with which data from disk can actually be utilized by a CPU in practice, but no matter how you slice it, in terms of the ratio between the performance of our storage systems and the performance of our CPU systems, we haven’t really undergone a transformative shift in the decades since I sat at my first desktop. Moreover, the virtualization of the modern cloud exacts a steep cost that slows it even further. Modern clouds address this differential through data parallelism, sharding data across multiple virtual cores. Yet, to maintain the same ratio of CPU to disk, processing a petabyte with 3.5GHz virtual cores in the modern cloud would require a cluster of 152,381 VM cores, assuming perfect linear scalability. A 10PB dataset would require 1.5 million VM cores. In reality, communications overhead and the limits of hardware scaling mean the number of cores required to achieve theoretical scaling results would be considerably higher. Amazingly, the CPU cost for all those cores would only be a few hundred dollars to a few thousand dollars per hour of analysis and a few thousand dollars to store the petabyte for a week (not counting the time it takes to upload or ship). Moreover, platforms like Google’s cloud make it trivial to scale across large datasets, with BigQuery able to table scan a full petabyte in under 3.7 minutes with one line of SQL, no programming or manual data sharding required. We obviously aren’t lacking for the hardware to process petabytes or even tens or hundreds of petabytes. In fact, BigQuery even offers fixed-rate pricing starting at $40,000 per 2,000 slots that allows companies to perform unlimited queries over unlimited volumes of data for the same fixed monthly cost. A company’s data science division could query multi-petabyte datasets non-stop 24 hours a day for the same fixed cost, removing cost as a limiting factor in performing absolute population-scale analyses. 
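Those back-of-the-envelope ratios can be reproduced directly from the figures above; nothing here goes beyond the numbers already quoted.

```python
# 1990 IBM PS/2 Model 55SX
cpu_1990_hz = 16e6            # 16 MHz CPU
disk_1990_bytes = 30e6        # 30 MB hard drive
xfer_1990_bytes_s = 7.5e6 / 8 # 7.5 Mbit/s transfer rate, ~0.94 MB/s

print(disk_1990_bytes / cpu_1990_hz)     # ~1.9 bytes of disk per Hz
print(cpu_1990_hz / xfer_1990_bytes_s)   # ~17 CPU cycles per byte/s of transfer

# Typical cloud virtual core today
cpu_now_hz = 3.5e9            # 3.5 GHz turbo hyperthread
xfer_now_bytes_s = 180e6      # ~180 MB/s sustained read from persistent disk

print(cpu_now_hz / xfer_now_bytes_s)     # ~19 cycles per byte/s, barely changed

# Cores needed to hold a petabyte at the 1990 disk-per-Hz ratio
petabyte = 1e15
cores = petabyte / (cpu_now_hz * (disk_1990_bytes / cpu_1990_hz))
print(round(cores))                      # ~152,381 VM cores
```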
The problem is that when it comes to big data analyses there seems to be a tremendous gulf between the companies performing population-scale analyses using tools like BigQuery that analyze the totality of their datasets and return absolute results and the rest of the “big data” world in which estimations and random samples seem to dominate. Estimations are particularly prevalent in spaces like social media and behavioral analysis. In a world in which all it takes is a single line of SQL to analyze tens of petabytes with absolute accuracy, why is estimation so popular? Partially the answer is our preoccupation with speed over accuracy. Why wait minutes for petascale analyses when you can wait seconds for a random sample that may or may not bear even the slightest resemblance to reality? In turn, our ability to tolerate immense error in our “big data” results often comes from the fact that the consequences for bad results are minimal in many “big data” domains. An analysis of the most influential Twitter users by month for a given topic over the last few years doesn’t really have a “right” answer against which an estimation can be compared. Moreover, even if the results are entirely wrong, the consequences are minimal. Partially the answer is that our more complex algorithms have not kept pace with the size of the datasets we work with today. Many of the analytic algorithms of greatest interest to data scientists were born in the era of small data and have yet to be modernized for the size of data we wish to apply them to today. Few mapping platforms can perform spatial clustering on billions of points, while graph layout algorithms struggle to scale beyond millions of edges. In short, the volume of data available to us has outpaced the ability of our algorithms to make sense of it. In the past we had low volumes of very rich data. Today we have high volumes of very poor data, meaning we have to process much more data to achieve the same results. When our algorithms aren’t scalable, we are left processing the same amount of data as before, but that sample is less and less representative of the whole. A classical SNA graph analysis of the past might involve a network with a few thousand or tens of thousands of edges, which could be visualized in its entirety even by the software of the day. Today we can easily obtain graphs in the billions of edges, but our most common graph visualization tools struggle to produce useable results in reasonable time with anything more than a few hundred thousand edges. That means that while a visualization of the past might have reflected the entire graph, today that graph is more likely to represent just a minuscule fraction of the full dataset. A 10,000-edge graph visualized in Gephi shows the totality of its structure. A 10 billion-edge graph sampled down to 10,000 edges represents just 0.0001% of the total graph. It isn’t just a question of speed. Many of our most heavily used algorithms were never designed to work with large datasets even when speed is not of concern. Few graph layout algorithms can do much to extract the structure of a dense trillion-edge graph even if given limitless computing power and as much time as they need to complete. In short, we are confronted with the paradox that the more data we have, the less representative our findings are due to the need for sampling. Platforms like BigQuery are slowly changing the algorithmic side of that equation. 
As they move beyond their reporting roots towards providing high level analyses like geospatial analytics, they are beginning to externalize the kind of massive algorithmic scalability that Google uses internally to bring more and more algorithms into the scalable cloud era. As these trends progress, algorithmic scalability will steadily become less of a limiting factor for common use cases. That leaves the mindset factor. We need to move beyond the idea that it is acceptable to perform the “sampling in secret” used by some social media analytics platforms in which certain displays quietly rely on sampling even while they tell their users they perform absolute counts. Sampling can be useful in certain circumstances. However, we need to understand precisely when and where it is used, the size of the sample and how that sample was selected. Understanding that a visualization was based on a random sample of just 1,000 out of 100 million matching tweets might give pause to how representative its results truly are. Moreover, if that “random sample” is actually based on randomly selecting 100 dates and taking the first 10 tweets from each date in chronological order, that may result in very different findings than sampling at the tweet level. Having visibility into these methodological decisions is absolutely critical. Perhaps the biggest issue is that we need to stop treating “big data” as a marketing gimmick. As companies have begun to market themselves in terms of the size of the datasets they hold and analyze, we’ve let go of the idea of actually understanding anything about the data and algorithms we’re using. Even the most rigorous data scientists freely accept the idea of reporting trends from Twitter without having any idea just what the full trillion-tweet archive from which they are working actually looks like. In fact, few data scientists that work with Twitter even know there have been just over one trillion tweets sent. Understanding your denominator used to be something that was considered sacrosanct to data science. Somehow, we’ve reached a point where we just accept reporting findings from analyses where we no longer have a denominator. Once we start treating “big data” analysis as a methodologically, algorithmically and statistically rigorous process based on well understood data, we recognize the need for population-scale analyses with guaranteed correctness. We turn to platforms like BigQuery and its ilk to run our analyses and focus on accuracy and completeness. We modernize the algorithms we need. We relentlessly test our results. Most importantly, we restore the denominator to our workflows. Putting this all together, it is ironic that in a world drowning in data, we have increasingly turned to sampling to render our “big data” small again. As our datasets have grown in size, we have sampled them ever more aggressively to keep the actual amount of data fed to our analyses and algorithms roughly the same. Whereas in years past our analyses might have been based on the totality of a dataset, today that same analysis might consider just one ten thousandth of one percent of that dataset. As we have ever more data, our results are becoming ever less representative. In short, our vaunted “big data” revolution has actually resulted in less understanding of our world through less representative data than ever before. 
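A small synthetic simulation makes the sampling point concrete: the same nominal 1,000-tweet sample gives very different answers depending on whether it is drawn tweet by tweet or as the first 10 tweets from each of 100 random dates. The corpus below is invented purely to show the mechanics; no real Twitter data is involved.

```python
import random

random.seed(42)

DAYS = 365
TWEETS_PER_DAY = 2_000

def has_hashtag(position_in_day: int) -> bool:
    # Pretend the behavior of interest is far more common late in the day,
    # e.g. after an evening broadcast: 30% of late tweets vs 5% of early ones.
    late = position_in_day >= TWEETS_PER_DAY // 2
    return random.random() < (0.30 if late else 0.05)

corpus = [[has_hashtag(i) for i in range(TWEETS_PER_DAY)] for _ in range(DAYS)]
true_rate = sum(map(sum, corpus)) / (DAYS * TWEETS_PER_DAY)

# (a) simple random sample of 1,000 tweets drawn across the whole corpus
flat = [t for day in corpus for t in day]
srs = random.sample(flat, 1_000)

# (b) 100 random dates, first 10 tweets of each in chronological order
block_days = random.sample(range(DAYS), 100)
block = [t for d in block_days for t in corpus[d][:10]]

print(f"true rate:           {true_rate:.3f}")                 # ~0.175
print(f"random-tweet sample: {sum(srs) / len(srs):.3f}")       # close to the truth
print(f"date-block sample:   {sum(block) / len(block):.3f}")   # biased toward ~0.05
```

The date-block design only ever sees the earliest tweets of each day, so it systematically misses the late-day behavior; exactly the kind of methodological detail a user needs visibility into before trusting a chart.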
In the end, we need to stop treating “big data” as a marketing slogan and return to an era in which we actually care about the quality of our data analyses, before we undermine the public's trust in the power of data.
9952718a0e4d583a53ec5920cbdb9550
https://www.forbes.com/sites/kalevleetaru/2019/02/24/why-data-visualization-is-equal-parts-data-art-and-data-science/
Why Data Visualization Is Equal Parts Data Art And Data Science
Why Data Visualization Is Equal Parts Data Art And Data Science One of the most powerful ways through which we convey the results of data science is visualization, from simple Excel graphs through advanced displays like network diagrams and bespoke visuals. What most outside the data science community don’t realize is just how much artistry is involved in the creation of some of those visualizations, from the impact of color schemes on perception in geographic mapping to the layout algorithms and data filtering used in network visualizations. Given the rising use of networks to understand everything from social media to semantic graphs, just how much of an impact do our layout algorithms and filtering decisions have on the final images we see? Network visualizations are at once beautiful and informative, helping us make sense of the macro through micro patterns in the vast connected ecosystems that define the world around us. Yet, like any form of data visualization, network visualization does not capture the sum total reality of our data so much as it constructs one possible reality. When we think of scientific visualization, we think that the images we see present to us the one single “truth” of a dataset, without realizing that any given dataset can tell many different stories depending on the questions we ask of it and the filters we apply to answer those questions. The myriad possible filters we apply to a graph to reduce its dimensionality, the layout algorithm that places the nodes in space, the clustering algorithms like modularity that group nodes by “similarity,” the definition of “similarity” that we hand to those clustering algorithms, the node sizing algorithms like PageRank, the color scheme and the random seeds used by many algorithms that ensure each run yields a very different image: all of these conspire to ensure that a single dataset can yield a nearly infinite number of possible visualizations. How does this process play out in a real world visualization task? In April 2016 my open data GDELT Project began recording the list of hyperlinks found in the body of each worldwide online news article it monitors. Not all news articles contain links, but many link to external websites such as the homepages of organizations being mentioned in the article or other news outlets from which specific story elements were sourced. These external sources of information provide powerful insights into which websites each news outlet considers worthy of mentioning, in much the same way that the references in an academic paper offer insights into the works believed most relevant and reputable by each field. As of last month, GDELT’s link database had recorded more than 1.78 billion outlinks from more than 304 million articles. Collapsing each URL to its root domain and connecting each news outlet with the unique list of external domains it has linked to over the last three years and the number of days it published at least one article linking to that domain, the final dataset is composed of just over 30 million distinct pairings of news outlets and external websites, including links to other news outlets. This link dataset is a classic network graph that can be readily visualized using off the shelf visualization packages like the open source Gephi. However, its size and density mean that the graph must be filtered to reduce it down to a specific subset of greatest methodological interest, while the edge count must be lowered to reduce the graph to its most “significant” edges. 
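As a rough sketch of how such an edge list can be assembled (an illustration under simplifying assumptions, not GDELT's actual code), the Python below collapses each linked URL to its hostname and counts, for every pairing of news outlet and linked domain, the number of distinct days on which at least one linking article appeared. The toy records and the naive "strip www." domain handling are stand-ins; a production pipeline would use a public-suffix list to find true root domains.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Toy records: (publishing outlet domain, publication date, URL linked in the article body).
articles = [
    ("example-news.com", "2018-05-01", "https://www.un.org/en/report"),
    ("example-news.com", "2018-05-01", "https://www.nytimes.com/2018/05/01/world/story.html"),
    ("example-news.com", "2018-05-02", "https://www.nytimes.com/2018/05/02/world/other.html"),
    ("other-paper.net",  "2018-05-02", "https://www.nytimes.com/2018/05/02/world/other.html"),
]

def root_domain(url):
    """Collapse a URL to its hostname, dropping a leading 'www.' (a simplification;
    a real pipeline would use a public-suffix list to get true root domains)."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# For each (outlet, linked domain) pair, collect the distinct days with at least one link.
days_linked = defaultdict(set)
for outlet, day, url in articles:
    days_linked[(outlet, root_domain(url))].add(day)

# Final weighted edge list: outlet -> domain, weight = number of distinct linking days.
edges = [(src, dst, len(days)) for (src, dst), days in days_linked.items()]
for src, dst, weight in sorted(edges):
    print(f"{src} -> {dst}  ({weight} day(s))")
```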
Instead of focusing on which sites a given news outlet links to, a far more interesting question in light of the current interest in combating “fake news” is to restrict the analysis to only links between news outlets and to compile a list of the top news outlets that link to a given other news outlet. In other words, for a news outlet like CNN, what are the top other news outlets around the world that link most heavily to CNN as a source in their own reporting? Much like academic citation networks convey authority, the linking behavior of news outlets can similarly convey a proxy of “news authoritativeness.” Thus, the 30 million edge graph was methodologically inverted and only edges connecting news outlets were retained. A link from a CNN article to a UN report would be discarded, but a link from a CNN article to a New York Times article would be preserved. As an initial visualization, the top 30 news outlets linking to each news outlet on at least 30 days or more were extracted and a random subset of 10,000 edges were used to form a new graph. The nodes were positioned using the OpenOrd layout algorithm and colored by Blondel et al's modularity, with coloration selected by Gephi’s built-in palette generator. The final image is seen below. GDELT GKG 2016-2018 Outlink Graph (Top 30 / Random 10,000) White Background Kalev Leetaru OpenOrd, like many layout algorithms, utilizes random seeds meaning it will yield a slightly different result each time it is run. This is an important distinction that is lost to many unfamiliar with network visualizations: there is no single “truth” to the visualized structure of a graph. Every rendering of a graph will present it in a slightly different way. Researchers frequently run a layout algorithm multiple times until they find a presentation that either looks the "best" or does the best job of visually separating the clusters of greatest importance to their analysis. What happens if we change the background color from white to black, while otherwise leaving the graph exactly as-is? GDELT GKG 2016-2018 Outlink Graph (Top 30 / Random 10,000) Black Background Kalev Leetaru Despite the graph being exactly the same, the darker background subtly changes our perception of the graph, making the spaces between nodes clearer and drawing a sharper contrast between clusters. Our eye is also drawn more naturally to the graph's diffuse structure. As the two images make clear, even the selection of the background color of a graph can have an impact on our perception of it. What if we adjust the thickness of each connection based on edge strength? In other words, the line between two outlets that were linked on 200 different days would be thicker than the line between two outlets linked on just the minimum 30 days. GDELT GKG 2016-2018 Outlink Graph (Top 30 / Random 10,000) With Edge Weighting Kalev Leetaru Rather than a diffuse mess of lines we begin to see macro level structure. The lower center orange cluster becomes especially vivid. What if we instead filter the graph to retain the top 30 news outlets linking to each outlet where the linked-to domain is also in the top 30 list of each of those inlinking domains? In other words, filtering to each outlet's top 30 reciprocal edges. In addition, instead of displaying 10,000 randomly selected edges, the top 30,000 strongest edges are retained. 
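A minimal sketch of the first of those filters, keeping each target outlet's strongest inlinking sources subject to a minimum number of linking days, along with the two ways of thinning the surviving edges (a random subset versus the strongest edges). The data is synthetic, the cut-offs are shrunk for the toy example, and the reciprocal variant described above is omitted for brevity; the helper names are invented for the illustration.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy weighted edge list between news outlets: (source, target, days_linked).
# At this stage only outlet-to-outlet edges would remain in the real dataset.
outlets = [f"outlet{i}.com" for i in range(40)]
edges = [(s, t, random.randint(1, 200))
         for s in outlets for t in outlets if s != t and random.random() < 0.3]

def top_inlinks(edges, per_target=30, min_days=30):
    """For each target outlet, keep its strongest inlinking sources
    (at least `min_days` distinct linking days, at most `per_target` sources)."""
    by_target = defaultdict(list)
    for s, t, w in edges:
        if w >= min_days:
            by_target[t].append((s, t, w))
    kept = []
    for t, lst in by_target.items():
        kept.extend(sorted(lst, key=lambda e: e[2], reverse=True)[:per_target])
    return kept

filtered = top_inlinks(edges, per_target=30, min_days=30)

# Option A: a random subset of the filtered edges (as in the first rendering above).
random_subset = random.sample(filtered, min(100, len(filtered)))

# Option B: the strongest edges overall (as in the later renderings).
strongest = sorted(filtered, key=lambda e: e[2], reverse=True)[:100]

print(len(filtered), "edges after top-inlink filtering")
print(len(random_subset), "edges in the random subset,", len(strongest), "in the strongest-edge subset")
```

Which of those two thinning strategies is chosen is itself a methodological decision that, as the images that follow show, can reshape the apparent structure of the graph.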
GDELT GKG 2016-2018 Outlink Graph (Top 30 Reciprocal / Top 30,000) Kalev Leetaru This graph shows a far more centralized structure, with a center core of tightly connected outlets around which the rest of the media ecosystem revolves. This paints a very different picture of our global media structure, from the earlier diffuse dense collective to a galaxy-like mass of small clusters orbiting around a central core of international stature outlets. Much of this comes from our use of strongest edges rather than random edges, reminding us of the critical impact our sampling decisions have on the final structure we see. Adjusting the thickness of each edge based on edge strength makes the central core less prominent and instead emphasizes the isolated nature of the myriad smaller clusters around the periphery. GDELT GKG 2016-2018 Outlink Graph (Top 30 Reciprocal / Top 30,000) With Edge Weighting Kalev Leetaru How much of an impact does the layout algorithm have on our understanding of the structure of a graph? Here we reduce the graph to the top five inlink outlets by news outlet and display the top 50,000 strongest connections using the same OpenOrd algorithm used in all of the above graphs. GDELT GKG 2016-2018 Outlink Graph (Top 5 / Top 50,000) Using OpenOrd Kalev Leetaru The result is a very diffuse structure like the earlier renderings, showing complex structure with multiple cores, a complex interconnected structure on the left and numerous other clusters. In contrast, the image below shows the results of running the exact same graph through the Force Atlas 2 algorithm instead. This image looks like an entirely different graph, with the entire network extending outwards from a central core. GDELT GKG 2016-2018 Outlink Graph (Top 5 / Top 50,000) Using Force Atlas 2 Kalev Leetaru Here’s another comparison, this time limiting to just those news outlets indexed in Google News circa mid-2017 and similarly limiting to the top 5 inlink domains by outlet and displaying the top 50,000 strongest connections. The OpenOrd layout shows a diffuse structure. GDELT GKG 2016-2018 Outlink Graph (Google News / Top 5 / Top 50,000) Using OpenOrd Kalev Leetaru The Force Atlas 2 layout, on the other hand, once again centralizes the graph structure. GDELT GKG 2016-2018 Outlink Graph (Google News / Top 5 / Top 50,000) Using Force Atlas 2 Kalev Leetaru Sometimes the centralized perspective of Force Atlas 2 can be helpful in drawing attention to the centralized clustering of a graph. Here the same graph as above, reduced from the top 50,000 strongest connections down to the top 10,000. The OpenOrd layout predictably shows a fairly diffuse layout, though helpfully captures the graph’s dual center. GDELT GKG 2016-2018 Outlink Graph (Google News / Top 5 / Top 10,000) Using OpenOrd Kalev Leetaru The Force Atlas 2 version collapses this center but makes it more apparent that the entire graph revolves around a complex center. GDELT GKG 2016-2018 Outlink Graph (Google News / Top 5 / Top 10,000) Using Force Atlas 2 Kalev Leetaru Graphs are most commonly displayed as edge visualizations in which the connections between nodes are the focal point of the image. This tends to be the norm in domains where it is the connectivity structure that is of greatest interest. In other domains it is the nodes themselves that are of primary importance, with the graph structure used only to position them in space according to their relatedness. 
The image below shows a traditional OpenOrd layout of the graph of news outlets based in the United States, using their top 10 reciprocal edges and limiting to the top 30,000 strongest edges. GDELT GKG 2016-2018 Outlink Graph (US News Outlets / Top 10 Reciprocal / Top 30,000) Using Edge... [+] Weights And OpenOrd With Edges Kalev Leetaru The image below shows the exact same graph, but with the edges hidden to show only the nodes. GDELT GKG 2016-2018 Outlink Graph (US News Outlets / Top 10 Reciprocal / Top 30,000) Using Edge... [+] Weights And OpenOrd With Nodes Only Kalev Leetaru This actually makes many of the peripheral clusters clearer and draws the macro structure of the graph into starker focus. For high density graphs like this, node visualizations can often make it easier to understand graph structure without the burden of tens of thousands of distracting spaghetti lines crisscrossing the image. Putting this all together, we see just how much of an impact our algorithmic and methodological decisions have on the final visual representation we receive of a given dataset. Every visualization on this page displays the very same dataset but offers different perspectives by filtering it in different ways and using different algorithmic and visual selections. In the end, perhaps the biggest takeaway is the reminder that the incredible imagery that emerges from our vast datasets are equal parts data art and data science, constructing rather than reflecting reality.
7bf547f7781281c9200551cebd069431
https://www.forbes.com/sites/kalevleetaru/2019/02/27/whatever-happened-to-the-denominator-why-we-need-to-normalize-social-media/
Whatever Happened To The Denominator? Why We Need To Normalize Social Media
Whatever Happened To The Denominator? Why We Need To Normalize Social Media One of the most important but least told stories of the “big data” revolution is the way in which the denominator has all but vanished from data science. Today we tally tweets, inventory Instagram posts and survey searches, reporting them all as raw volume counts without any understanding of whether the trends they depict are real or merely statistical artifacts of opaque datasets. We no longer even know how big our datasets are nor how fast or slowly they are growing. The companies behind those datasets refuse to offer even the most basic of details to help us add a denominator back into our equations to normalize and contextualize our findings. In short, we have traded more data for less understanding of what is in that data. How might this complicate our understanding? One of the most interesting findings of yesterday’s analysis of the speed of Twitter, beyond the fact that Twitter doesn’t actually regularly “beat the news,” was just how much of the Twitter activity about both scheduled and unexpected news was retweets. Whether a preplanned press conference heavily advertised in advance or an out-of-the-blue natural disaster, the vast majority of the signal received from Twitter was in the form of retweeting other people’s posts. Rather than on-the-ground witnesses and participants live chronicling what they are experiencing in realtime, it seems that at least when it comes to government press conferences and natural disasters, Twitter is about forwarding from afar. In other words, Twitter is a behavioral dataset, filled with the equivalent of “likes” and “forwards,” rather than a content-based platform filled with new perspectives, contexts and details to help us understand breaking events from those involved. This is a huge finding that calls into question the utility of Twitter as a data source for understanding the very kinds of breaking news events it has been most promoted for. Implicit in this argument is the idea that heavy retweeting is unique to breaking news events. What if this instead merely reflected a broader Twitter-wide trend towards retweeting? If the totality of Twitter was shifting from novel commentary to mere retweeting, this finding would no longer suggest that the way in which we are handling breaking news is changing. In other words, while its findings would hold, its significance as something distinct and special to how we understand breaking news would change. One way of examining this would be to look over a longer period of time at a societally polarizing topic that involves both breaking events and steady background conversation. If heavy retweeting is distinct to breaking news events, we should see a fairly steady horizontal line with sharp temporary surges around major breaking climatic events, but otherwise nearly flat. If the high level of retweeting instead reflected a Twitter-wide trend towards less and less original content, we would expect to see a steady downward trendline in the percent of non-retweets over the years. The timeline below shows the final results, plotting the percent of global warming and climate change tweets that were not retweets, from July 2009 to present, using data from Crimson Hexagon.
Percent of global warming and climate change tweets that were not retweets 2009-present Kalev Leetaru From nearly 90% of climatic tweets that were original content in July 2009 to less than 25% today, we can see that at least for these two topics, Twitter has trended steadily and surely over the past decade from a content service into a retweeting service. In fact, plotting a wide range of different topics we see exactly the same curve for every one of them, suggesting this trend is not distinct to climatic or even scientific content, but rather something more existential to changing trends in how we use Twitter. Comparing the trendlines above to Twitter as a whole, we find they are correlated at r=0.90 and using a 7-day rolling average to smooth the data that correlation rises to r=0.96. In short, that significant finding about breaking news being defined by retweets is not a finding about breaking news at all, but rather merely a reflection of Twitter as a whole. What about the heavy linking activity found in climatic tweets? For both press conferences and breaking news events, a large percentage of the tweets included URLs, either links to external websites or embedded images. Looking longitudinally at the percent of global warming and climate change tweets that included a URL 2009-present, we see that this time there are two distinct trendlines that emerge: the relatively steady 80% linking of climate change and the sharp decrease/increase/steady curve of global warming. Looking closely, climate change appears to curve slightly at the same time as global warming, but not nearly as significantly. The two trendlines are not highly correlated, at just r=0.36. Percent of global warming and climate change tweets that included a URL 2009-present Kalev Leetaru With two very different trendlines, which is the most significant finding? From a visual and statistical standpoint, it might seem that the greater variance of the global warming trend would be the most interesting, since it has changed significantly over time, whereas climate change has remained relatively unchanged. Indeed, based purely on comparing these two graphs, most data analyses would likely select the high variance of global warming linking activity as the most significant finding. However, truly determining which trendline is the most significant requires normalization: comparing both against the underlying trends of the dataset as a whole to see which deviates furthest from Twitter-wide behavior. In this case, the steady state of climate change link tweets is correlated at just r=0.22 with Twitter as a whole, while global warming tweets are correlated at r=0.70. Thus, from a significance standpoint, global warming’s ebbing and flowing merely mirrors the broader evolution of how people have used Twitter for sharing links, whereas climate change’s steady flow of links has deviated significantly from the rest of Twitter, suggesting there is something distinct about that community. Putting this all together, we find that without normalization, it is impossible to tell which of the findings we derive from social media are significant and which merely reflect the background changes of the platforms we are measuring. For their part, the companies themselves don’t tout normalization. When asked about changes in tweeting and retweeting behavior, two different Twitter spokespersons declined to comment and one also declined to comment on why the company doesn't release such numbers. 
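The normalization step used above, comparing a topic's trend against the platform-wide baseline, optionally after rolling-average smoothing, can be sketched in a few lines of pandas. The series below are synthetic stand-ins; the correlations reported in this article come from the Crimson Hexagon data, not from this code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic daily series: a platform-wide baseline (e.g. percent of tweets that are
# original content) drifting downward, one topic that tracks it, and one that does not.
days = pd.date_range("2009-07-01", "2019-02-01", freq="D")
baseline = pd.Series(np.linspace(90, 25, len(days)) + rng.normal(0, 3, len(days)), index=days)
topic_a = baseline + rng.normal(0, 4, len(days))                     # mirrors the platform trend
topic_b = pd.Series(80 + rng.normal(0, 3, len(days)), index=days)    # stays flat regardless

def corr_with_baseline(topic, baseline, smooth=None):
    """Pearson correlation of a topic trend with the platform-wide trend,
    optionally after a rolling-mean smoothing window (e.g. 7 days)."""
    if smooth:
        topic, baseline = topic.rolling(smooth).mean(), baseline.rolling(smooth).mean()
    return topic.corr(baseline)

print("topic A vs platform, raw   :", round(corr_with_baseline(topic_a, baseline), 2))
print("topic A vs platform, 7-day :", round(corr_with_baseline(topic_a, baseline, 7), 2))
print("topic B vs platform, raw   :", round(corr_with_baseline(topic_b, baseline), 2))
```

A topic that correlates strongly with the platform baseline is, by this logic, telling us more about the platform than about the topic; it is the low-correlation series that deserves attention.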
The result is a strange world in which we apply highly sophisticated algorithms to large volumes of data without any understanding of whether the results we receive are actually meaningful in any way to the questions we asked. We tout the accuracy and precision of our algorithms without acknowledging that all of that accuracy is for naught when we can’t distinguish what is real and what is an artifact of our opaque source data. It is true that retweeting and link forwarding are defining characteristics of how Twitter users cover climatic press conferences and breaking news events. Yet, the significance of that finding is muddied by the fact that this is simply how Twitter as a whole is used, rather than a statement about breaking news. In many ways this is no different than a finding that climate-related tweets are rare in areas of the world where there is no electricity and no connectivity. While technically a true statement, the meaningfulness of that finding is decreased by the fact that this is a reflection of Twitter as a whole, rather than something specific to climatic change. Looking across all of the social media analyses performed each day, it is the rare analysis indeed that actually normalizes its findings to determine their significance. Instead, the idea of denominators and normalization has been relegated to a quaint memory lost to the foggy mists of history. Social media is not alone in this regard. Media analyses have been plagued for decades by their failure to normalize, which has meant that many of their conclusions have been wrong. In the end, rather than increase our accuracy, the big data revolution has led us to trade accuracy for size. It is truly remarkable for a field so steeped in statistics that we see nothing wrong with reporting raw counts from opaque black boxes that are changing right before our eyes in entirely unknown ways. Perhaps it is finally time to restore the denominator to data science.
0be74ae42bfbbf37765696aa46dd5bf8
https://www.forbes.com/sites/kalevleetaru/2019/03/04/visualizing-seven-years-of-twitters-evolution-2012-2018/
Visualizing Seven Years Of Twitter's Evolution: 2012-2018
Visualizing Seven Years Of Twitter's Evolution: 2012-2018 All geotagged tweets in Twitter's 1% stream January 2012 to October 2018 colored by the most common... [+] language of tweets sent from that location Kalev Leetaru In 2012 I published one of the first in-depth explorations of the geography of Twitter, examining where it was that we were talking from and about in Twitter’s early days of explosive growth. At the time Twitter was spreading rapidly across the world and was highly correlated with historical NASA Night Lights imagery, leading to my conclusion that “where there is electricity there is Twitter.” I followed that in 2015 with a look back at three years of Twitter as its growth had leveled off, it was entrenching rather than expanding and the number of daily tweets and tweeting users had remained constant since July 2013. Four years later it is time to look back now over seven years of Twitter’s growth to explore the evolution of one of the world’s most influential social networks. During its heyday of exponential growth, Twitter used to provide detailed growth statistics at regular intervals chronicling its latest milestones of tweet counts and unique users per day and the latest geographies and languages it had added. The company largely halted these updates as usage leveled off and the platform stopped expanding, leaving an information vacuum around even the most basic statistics like how many tweets per day are sent or how many of those tweets are retweets, include geotags or contain links. Since its growth leveled off, Twitter has provided few insights into the state of its platform. While a spokesperson did confirm that geotagged tweets have consistently averaged around 1-2% of total tweet volume since 2012, the company declined to comment when asked about broader platform metrics like how many total tweets had been sent or how retweeting behavior was changing. It seems that any question that might shed light into Twitter’s growth trajectory is met with silence. Thankfully, Twitter makes a free realtime stream of 1% of all tweets available, known as its Spritzer or Sample stream. This stream is nearly perfectly correlated with the full firehose, making it an ideal proxy through which to study the longitudinal evolution of the platform. To explore Twitter’s development over the last seven years, the Twitter 1% stream from January 1, 2012 through October 31, 2018 was examined. Due to technical issues on the monitoring side, some days have incomplete data and would otherwise skew the results. To remove them, the total number of tweets sent each day was compared to the average of the two days before and two days after. Any day that had a tweet volume 10% higher or lower than the surrounding average was dropped from analysis. This yielded a final 1% dataset that is nearly perfectly correlated with the daily volume of the firehose at r=0.987 and has remained at nearly a perfect 1% sample over the entire seven years. After removal of partial days there were around 7.9 billion tweets in the sample, with around 437 million distinct tweeting users, of which around 0.07% of users were “verified” accounts. Twitter exhibits a strong bias towards a small number of users accounting for a disproportionate amount of total tweet volume. The top 1% most prolific users sent 28% of all tweets in the sample during the period, with the top 5% accounting for 57% of tweets, the top 10% accounting for 71% of tweets and the top 15% of users accounting for 80% of all tweets. 
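Those concentration figures are straightforward to compute once per-account tweet totals are in hand. The sketch below uses a synthetic heavy-tailed distribution of tweets per user purely to illustrate the calculation; the 28%, 57%, 71% and 80% figures above come from the actual 1% sample, not this code.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy per-user tweet counts with a heavy-tailed (Pareto-like) distribution,
# standing in for the real per-account totals from the 1% stream.
tweets_per_user = rng.pareto(1.2, size=100_000) + 1

def share_of_top(counts, fraction):
    """Share of all tweets sent by the most prolific `fraction` of users."""
    counts = np.sort(counts)[::-1]                 # most active users first
    top_n = max(1, int(len(counts) * fraction))
    return counts[:top_n].sum() / counts.sum()

for pct in (0.01, 0.05, 0.10, 0.15):
    print(f"top {pct:>4.0%} of users -> {share_of_top(tweets_per_user, pct):.0%} of tweets")
```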
These ratios do not appear to have changed appreciably over the seven years, suggesting they are defining characteristics of such a digital public forum in which a small central set of elites drive the overall conversation. The timeline below shows the daily tweet volume of the 1% sample multiplied by 100 to reflect the estimated total daily tweet volume of Twitter 2012-2018. Here we can see the explosive growth period of my original 2012 study, the steady state period of my 2015 study, a sharp decrease in late 2015 and a slow steady decline through the end of October 2018. The two gaps in late 2014 and early 2015 represent periods where technical issues on the monitoring side precluded monitoring the complete 1% stream and to avoid skewing the results with partial data these were filtered out during the outlier filtering above. Estimated total daily volume of tweets projected from Twitter’s 1% stream Kalev Leetaru By the end of October 2018, the total daily tweet volume of the 1% stream was roughly the same as it was in June 2012, more than six years prior. Projecting from these trends we can reasonably assume that around 1.1 to 1.2 trillion tweets have been sent since the service’s founding in 2006. As Twitter volume has edged downward, has the active user base decreased with it? A lower volume of tweets could suggest that there are the same number of users as always, but they are simply saying less than they used to. Alternatively, Twitter users could be saying just as much as they always have, but there could simply be fewer users tweeting each day. Dividing the number of daily tweets by the number of daily distinct users sending those tweets in the 1% stream yields a ratio that has remained fairly consistent over time, suggesting that the platform is not further centralizing towards a small number of power users and instead is simply being used less overall. Indeed, daily tweet volume and the number of distinct users tweeting in the 1% stream is correlated at r=0.96. Avg tweets/user in the Twitter 1% stream Kalev Leetaru Dividing the estimated daily firehose volume from above by the daily tweet/user ratio above, we get the following estimate of the total number of distinct users in Twitter as a whole sending one or more tweets per day. Estimated distinct Twitter users sending one or more tweets per day projected from Twitter’s 1%... [+] stream Kalev Leetaru These numbers are significantly lower than Twitter’s official estimates of “monthly active users.” The reason is that the timeline above estimates only users who actually published one or more tweets each day, whereas Twitter counts active users as “users who logged in or were otherwise authenticated and accessed Twitter through our website, mobile website, desktop or mobile applications, SMS or registered third-party applications or websites.” In other words, Twitter counts all users who log into Twitter, whereas the graph above estimates users who actually contribute to the platform by posting public tweets. In information science parlance we might say that Twitter counts both contributors and lurkers, while the graph above counts only contributors. While lurkers are important from the standpoint of monetization, since they can receive ads, it is contributors that are the fuel of a platform, generating the content that makes people return for more. What about verified users? 
Some social networks elsewhere in the world have experienced a centralization effect as they aged, where the platforms increasingly revolved around a set of elite users posting and the rest of the platform reacting or sharing those posts, rather than publishing their own content. The timeline below shows the percent of daily tweets in the 1% stream and the percent of daily distinct tweeting users in the 1% stream that were verified. The percentage appears to have largely leveled off and has barely risen above half a percent over the last seven years. Looking more closely, the impact of Twitter’s November 2017 pause to its verification program can be clearly seen. Percent of verified tweets and distinct verified Twitter users sending one or more tweets per day in... [+] Twitter’s 1% stream Kalev Leetaru One limitation of the graph above is that it reflects only the percentage of tweets sent by verified users themselves. In contrast, the behavior seen in some other platforms is that the daily output of elite users remains the same, but the rest of the user community increasingly shifts to merely sharing those elite posts, rather than posting their own original content. To explore the degree to which this is occurring in Twitter, the timeline below of the 1% stream combines tweets sent by verified users with retweets of verified users’ tweets sent by ordinary users. While the graph above only included tweets sent directly by verified users, the graph below captures the far more common scenario: a verified user’s tweet going viral and being retweeted by the rest of Twitter. Percent of all tweets in Twitter 1% stream that were either a tweet by a verified user or a retweet... [+] or a verified user’s tweet Kalev Leetaru Immediately clear from this timeline is that Twitter is following in the footsteps of previous social networks in gradually centralizing around a core of elites whose views are then amplified and forwarded by the masses. From less than 1% of Twitter seven years ago to almost 11% of total tweet volume today, verified users are rapidly taking over the social conversation, though there appears to have been a slowdown of late. Of course, these graphs only examine verified users. It is likely that an even greater fraction of Twitter volume consists of non-verified but high-follower-count users being similarly retweeted. This raises the question of whether Twitter is becoming less of a place to share original content and more of a place to merely retweet the content of others. In other words, is Twitter transitioning from a content platform to a behavioral platform? The timeline below would appear to support this hypothesis. It shows the percentage of all daily tweets in the 1% stream that are retweets. From around 20% of Twitter consisting of retweets in January 2012 to around 53% by October 2018, retweeting has increased nearly linearly over time. After removing hyperlinks, user mentions and the capitalized phrase “RT” by itself with a space after it, there were only around 5.4 billion unique tweet messages in the 1% stream over the seven years. In short, only around 65% of the total 1% stream tweet volume over this period was actual unique mineable textual contents. For social analytics companies, this means they can achieve a significant computational savings by hashing the contents of each tweet and only processing new tweets. 
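A minimal sketch of that deduplication idea: normalize each tweet by stripping links, @-mentions and a leading "RT", hash the result, and only pass previously unseen hashes to downstream text mining. The regular expressions here are simplifications for illustration rather than a production cleaning step.

```python
import hashlib
import re

LINK = re.compile(r"https?://\S+")
MENTION = re.compile(r"@\w+:?")        # strip @-mentions (and the colon left by retweet syntax)
RT_PREFIX = re.compile(r"^RT\s+")

def normalize(text):
    """Strip hyperlinks, @-mentions and a leading 'RT ' so that a retweet and its
    original collapse to the same string (a rough approximation of the idea above)."""
    text = RT_PREFIX.sub("", text)
    text = LINK.sub("", text)
    text = MENTION.sub("", text)
    return " ".join(text.split()).lower()

def unique_texts(tweets):
    """Hash each normalized tweet and keep only the first occurrence of each hash,
    so downstream text mining runs once per distinct message rather than per tweet."""
    seen, fresh = set(), []
    for t in tweets:
        digest = hashlib.sha1(normalize(t).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            fresh.append(t)
    return fresh

sample = [
    "Climate report out today https://example.com/r1",
    "RT @someaccount: Climate report out today https://example.com/r1",
    "A completely different message",
]
print(len(unique_texts(sample)), "of", len(sample), "tweets need text mining")
```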
Percent of all tweets in Twitter 1% stream that were retweets Kalev Leetaru The steady rise in retweets could come from either a small number of users sending a large number of retweets while others send fewer tweets, or all users transitioning from original content to retweets. The timeline below shows the percentage of unique tweeting users in the 1% stream that sent at least one retweet each day. The graph’s near identical trajectory to that of overall retweeting suggests this is a Twitter-wide behavioral change. Percent of all unique users in Twitter 1% stream each day that sent at least one retweet Kalev Leetaru This rapid transition from original commentary towards simple retweeting has profound implications for how we think about Twitter as a data source. Rather than as a source of first person commentary on the events of the moment, Twitter is increasingly becoming a place where users share the commentary of others. Discussions on Twitter can also take the form of replies in which one user explicitly responds directly to the tweet of another user. Replies have also trended downwards in the 1% stream, from around 26% in January 2012 to a low of 14% in May 2017, but have trended slowly back up in the years since, to around 18% by October 2018 (using the reply information contained in the tweet JSON record). Percent of all tweets in Twitter 1% stream that were replies Kalev Leetaru As a percent of distinct daily tweeting users in the 1% stream, the trend for replying users largely mirrors that of total reply volume, once again suggesting this is a broad behavioral shift on the platform. Percent of all unique users in Twitter 1% stream that sent at least one reply Kalev Leetaru In addition to retweets and replies, tweets can also simply mention another user. In all, just over 263 million distinct users were mentioned by name in the 1% stream over the seven years. The top 15 users mentioned most often are seen below, with most being celebrities.
YouTube 29762560
BTS_twt 20393700
justinbieber 10771692
NiallOfficial 7371952
realDonaldTrump 6707246
Harry_Styles 6378236
Real_Liam_Payne 4836841
weareoneEXO 4133300
Louis_Tomlinson 4013812
zaynmalik 3435909
ArianaGrande 3278748
Luke5SOS 3046707
camerondallas 2839582
onedirection 2839545
Gurmeetramrahim 2787755
Tweets that mention other users have been steadily increasing over the years in the 1% stream, from around 57% of tweets seven years ago to around 72% today. Percent of tweets and distinct tweeting users in the Twitter 1% stream that mentioned another user Kalev Leetaru One notable feature of the graph above is that the density of user-mentioning tweets remains fairly steady in the first quarter of the timeline during Twitter’s growth period. As Twitter’s tweet volume stops growing and levels off, user mentions increase. As Twitter’s daily tweet volume begins to decrease, the percent of tweets mentioning other users begins to increase rapidly and steadily. This suggests that, like with retweets, Twitter is becoming a place to talk to and about others, rather than to offer one’s own reflections about the world. What about the connection between Twitter and the outside world? A core finding of Twitter’s evolution has been the changing number of hyperlinks shared across the service. From 2009 through late 2011 link sharing decreased rapidly Twitter-wide, but increased beginning in 2012 and leveled off in mid-2016, before decreasing again beginning in late 2017.
The 2012-2018 portion of this evolution can be seen in the graph below, showing the percent of all tweets in the 1% stream that contained a URL. Percent of tweets in the Twitter 1% stream that contained a URL Kalev Leetaru The rapid transition of Twitter links from HTTP to HTTPS can be seen in the graph below, which shows the percentage of URL-containing tweets in the 1% stream that contained at least one HTTP URL. Initially the transition to HTTPS was gradual, picking up speed in early 2013. In April 2015 the site began rapidly progressing away from HTTP. Prior to October 19, 2015 the site was averaging around 84% of URL-containing tweets having an HTTP link. On October 19 that percent dropped to 69%, on October 20th it dropped to 12% and by the 21st it had dropped to just 8%. Percent of URL-containing tweets in the Twitter 1% stream that contained at least one HTTP link Kalev Leetaru Over the course of just 48 hours Twitter transitioned its entire platform to HTTPS links in tweets, showing the power of centralized platforms to shift web-scale practices. Twitter’s transition follows Google’s concerted push of the at-large web towards HTTPS through its Chrome browser warnings. URLs in tweets can come from two primary sources: embedded images and links to external content. Twitter actually distinguishes between these two cases in the underlying tweet JSON record, classifying each URL as a media object or link. The timeline below shows the percent of all tweets in the 1% stream that fell into these two categories, using the metadata Twitter includes with each tweet record. It is important to remember that Twitter only classifies tweet-embedded images as media objects, meaning that a URL link to an image on an external photo sharing site will be counted as a link, rather than an image, in the graph below. Percent of tweets in Twitter 1% stream that contained either an embedded URL-based media object or a link Kalev Leetaru Looking only at external links in the 1% stream, just under 1.7 billion links were shared, of which 981 million were unique (though many were shortened even in their “expanded” form in the JSON record, making it impossible to know how many true unique external URLs were pointed to). The expanded URLs listed in the Twitter JSON record hailed from 8.7 million distinct domains, reflecting the incredible variety of the web captured in Twitter’s 1% stream outlinks. The extremely high density of links shared on Twitter each day suggests it could be a powerful resource for web archives to monitor. At the same time, the enormous variety of domains being shared suggests many of these links may be to other social media platforms, product pages and other content that may be of less relevance to resource-constrained archives. The top 30 domains of the expanded URLs in the 1% stream are seen below. Many expanded URLs merely point to other shortening services, but overall there are numerous social media platforms represented.
twitter.com 277512131
bit.ly 149263669
fb.me 87000043
goo.gl 58421122
youtu.be 53516567
instagram.com 53033977
du3a.org 41671416
dlvr.it 30291869
youtube.com 27615134
ift.tt 26443383
vine.co 24864516
ow.ly 21376084
ask.fm 18990059
tmblr.co 16746731
tinyurl.com 13704254
instagr.am 13344307
gigam.es 11811292
facebook.com 10558347
d3waapp.org 9783098
twitpic.com 9771329
amzn.to 8954377
4sq.com 6373457
google.com 6182142
qurani.tv 5483571
tumblr.com 5349766
wp.me 5148355
buff.ly 5142870
fllwrs.com 4919621
twcm.me 4791687
path.com 4657660
Many websites offer their own bespoke URL shorteners and compiling an exhaustive list of all shorteners offered worldwide is beyond the scope of this analysis. However, manual review suggests that domains of 7 or fewer characters are most often shortening services. In the 1% stream such domains account for 6% of all domains shared in Twitter links and 47% of all links. Remember that this is after Twitter’s own URL expansion, meaning users are often sharing already-shortened links, rather than asking Twitter to shorten the link directly. Of the remaining 1% stream links to domains longer than 7 characters, 4.8% of domains and 69.8% of links are to domains in the Alexa Top 1 Million list circa May 2018. Around 1.7% of domains and 66.8% of links were to domains in the top 250,000 most popular sites listed in the Alexa Top 1 Million list. News media, on the other hand, does not appear to constitute a majority of links shared on Twitter, accounting for just 0.18% of domains and 5.4% of all links. What is the combined impact of all of this on the total volume of textual content flowing through Twitter’s firehose each day to data miners and social analytics companies? Removing hyperlinks (since shortened URLs aren’t useful for text mining), user mentions (since those are treated as connections rather than mineable text) and retweets (since they are the equivalent of a “like” and useful as a behavioral signal rather than novel content to be mined), the timeline below shows the average characters per tweet and average bytes per tweet in the 1% stream over time. Twitter uses UTF-8 encoding, meaning that for languages outside the ASCII character set, a single letter can require multiple bytes. Average characters and bytes per tweet in the Twitter 1% stream after hyperlinks, user mentions and retweets are removed Kalev Leetaru Interestingly, the average tweet length globally in the 1% stream has remained unchanged, at around 45 characters per tweet, while the number of bytes per tweet has trended steadily upwards. This suggests Twitter has become more multilingual over time, spreading to languages that require multiple bytes per character, or has adopted elements like emojis that require additional storage. What does this mean for the total volume of novel textual content posted to Twitter each day? Multiplying the total bytes per day of text in the 1% stream (after removal of links, user mentions and retweets) by one hundred, the graph below shows the estimated total volume of novel textual content flowing across Twitter’s firehose each day. Estimated total bytes per day of text Twitter-wide after hyperlinks, user mentions and retweets are removed, projected from Twitter’s 1% stream Kalev Leetaru From an estimated 11GB of text per day in January 2012 to a peak of around 20.5GB of text per day in July 2013 down to around 10.5GB per day in October 2018, the estimated total volume of daily text posted to Twitter is quite small.
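The character-versus-byte distinction is easy to see in a few lines of Python: after the same link and mention stripping, ASCII text costs one byte per character under UTF-8 while many other scripts cost two to four. The cleaning regexes and the sample strings below are illustrative assumptions, not the actual processing pipeline.

```python
import re

LINK = re.compile(r"https?://\S+")
MENTION = re.compile(r"@\w+")

def mineable_text(tweet):
    """Drop hyperlinks and @-mentions, keeping only the text worth mining
    (retweets would be excluded entirely before this step)."""
    return " ".join(MENTION.sub("", LINK.sub("", tweet)).split())

samples = [
    "Checking out the new report https://example.com/x @someaccount",  # ASCII text
    "気候変動に関する新しい報告書が出ました",                            # Japanese: 3 bytes per character in UTF-8
    "Отличный день для науки о данных",                                 # Russian Cyrillic: ~2 bytes per letter
]

for s in samples:
    text = mineable_text(s)
    chars, size = len(text), len(text.encode("utf-8"))
    print(f"{chars:>3} chars, {size:>3} bytes -> {size/chars:.1f} bytes/char")
```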
In total, an estimated 33TB of text was posted to Twitter as a whole over the sample period, after links, user mentions and retweets are removed. How does the total volume of tweet text compare to worldwide online news output as monitored by the open data GDELT Project? Over the period November 2014 to October 2018 GDELT monitored just over 2.8TB of text while the Twitter firehose contained an estimated 16.5TB of text after links, user mentions and retweets are removed. That would mean that the estimated totality of all mineable Twitter text was just 5.8 times larger than news content. That ratio would be even lower if hashtags were excluded. So why do we talk about social media as “big data” if it is not actually that much larger than traditional digital archives like news? The reason is that when we talk about small-message social media platforms like Twitter as “big data” we typically fixate on their total number of posts. Twitter’s trillion-tweet archive certainly sounds at first glance like an enormous archive of content. However, by their very design, small message platforms consist of large volumes of very short posts, meaning the actual total volume of text contained in those trillion tweets is quite small. Given that many text mining tools operate exclusively in English, how much of Twitter is in English? While Google’s CLD2 language detection library is not well-tuned for the brief snippets of text that define Twitter, its open source availability and high processing speed mean it has a history of being used as a rapid screener for tweets even if its error rate is higher than purpose-built tools. To determine the language of each tweet in the 1% stream, hyperlinks and user mentions were removed and the remaining text was sent through CLD2 with its extended language set enabled. The pound symbol was removed, but actual hashtag tags were left in place to ensure the maximal amount of text for each tweet, even though hashtags may skew language classification since non-English tweets may use English hashtags and vice-versa. The resulting timeline below suggests English language tweets do not constitute a majority of tweets and that they have been falling steadily as a percentage of all tweets, at least in the 1% stream. However, again, these results must be taken with a grain of salt given CLD2’s imperfect alignment with the small text snippets of Twitter. Percent of tweets in the Twitter 1% stream that CLD2 classified as being in English Kalev Leetaru Hashtags are used as a linguistic bridge on Twitter, bringing together all of the tweets about a particular topic or event using a common metadata tag, despite differences in their language or word choices. In all, just over 106 million distinct hashtags appeared in Twitter’s 1% stream during the analysis period (using the Twitter-extracted hashtag list in the entities field of the JSON record). The top 10 hashtags appear to be largely technology and entertainment related. iHeartAwards 12400969 gameinsight 11550723 MTVHottest 10301625 BestFanArmy 9038510 RT 7733509 androidgames 6537899 android 6090706 izmirescort 5326195 MTVStars 5209870 BTS 4844403 Intriguingly, hashtag use appears to have leveled off and is actually decreasing on the platform. The timeline below shows the percent of all 1% tweets that contained at least one hashtag per their JSON record, showing a rapid increase from late 2012 through mid-2016, then a sharp drop and a steady decline through October 2018. 
Percent of tweets in Twitter 1% stream that contained a hashtag as recorded in the entities field of the tweet’s JSON record Kalev Leetaru It is unclear what might be driving this reduction in hashtag use, but their historical popularity as an organizing and search tool raises questions about whether some new form of organizing behavior is emerging on Twitter to replace hashtags or if users are simply finding ways to express themselves without the summarization of hashtags. Given that Twitter usage is slowing, both in terms of total tweets and unique tweeting users, and that retweets are steadily increasing, especially of verified users, this raises the question of whether Twitter is aging. By taking each tweet and computing the number of elapsed days between that tweet being sent and the date the user account sending the tweet was created and averaging that delta across all tweets sent each day, we can compute the average “age” of Twitter’s tweeting user accounts over time. If Twitter were steadily adding new users as it lost users, but merely added slightly fewer users each day than it lost (for example adding 90 new users for each 100 users it lost), the service would experience a decline in overall tweets, but would be kept “young” by a constant influx of new users constantly modernizing it. In such a case, one would expect the average “age” of user accounts to remain relatively constant as older accounts are steadily replaced with newer accounts over time. Alternatively, what if Twitter largely consisted of a core of existing users that was steadily aging, with an ever-slowing rate of user change? Like an aging social club without new members, the platform would continue to be widely used, but the slowing growth in new users would mean the site would become increasingly insular. In such a case the average age of user accounts would be expected to increase linearly over time as more and more of the service’s tweets are written by an older and older established community of user accounts. The timeline below shows the final result, computing the average “age” in days of all user accounts sending tweets in the 1% stream each day. Since averages can be skewed by outliers, the timeline shows both the mean and median ages. Average "age" in days of tweeting user accounts in Twitter's 1% stream Kalev Leetaru The timeline’s steady march upwards reflects a platform with an established user base that is aging over time without a sufficient influx of new users to offset it. This can be seen in the timeline below, which plots the median creation year of all accounts tweeting in the 1% stream each day. In keeping with the timeline above, this graph shows a gradual steady aging of the platform. Median creation year of accounts tweeting by day in Twitter’s 1% stream Kalev Leetaru Putting all these timelines together we see a social media platform that is evolving from a place for ordinary citizens to express themselves into a place to forward and share the commentary of others, especially older and elite accounts. From an algorithmic standpoint, Twitter is transitioning from a content dataset filled with text to be mined using natural language processing algorithms into a behavioral dataset containing attentional information in the form of shares. Behavioral data can offer an extremely powerful lens onto global attention but requires very different methodological and algorithmic approaches from the content-based ones we have been using to date. What about the geography of Twitter?
Each of the graphs above chart Twitter’s progression through time, but for a platform that is supposed to let us hear from the world, it is space that is even more important to understand. Geography on Twitter comes in two forms: the space from which a tweet is sent and the space about which a tweet focuses. These two geographies are distinct. A tweet might be sent by someone standing in Central Park in New York City, discussing their views on a sporting event taking place in London. Someone else actually sitting in the stadium in London watching the match in person might also tweet their thoughts. While both tweets refer to the same event, knowing that one was sent by someone witnessing it firsthand, while the other represents someone somewhere in the world commenting from afar, might lend more credence to the details of the former. Mapping the locations tweets are about is quite powerful but tells us nothing about whether the users talking about those places are actually physically there or merely speaking about them from afar. Any large corpus of text can be mined for mentions of place. Instead, it is the geography of creation that formed the greatest promise of Twitter in its early years. Twitter embodied the idea that suddenly ordinary citizens anywhere on earth could use their smartphones to live report what they are experiencing and feeling as they go about their lives. In turn, breaking news could be experienced from afar through the eyes of those witnessing and participating in the events. Powering this vision was the idea that all across the world there are ordinary people equipped with GPS-enabled smartphones and the Twitter app. By recording those GPS coordinates with each tweet, it becomes possible to see precisely where a user is when they send a tweet, allowing us to map the global discourse in a way never possible before with any previous dataset at this resolution. To put it another way, every piece of information is created in a specific geographic location. The mobile-centric nature of Twitter meant this geography of creation could be preserved and was envisioned as a primary index over Twitter, powered by GPS-equipped smartphones and users willing to share their locations. So how precisely does one place tweets on a map? There are three primary sources of user geography on Twitter. The most accurate are GPS-tagged tweets in which a user permits their smartphone to attach their precise device-provided GPS coordinates to their tweet as they send it. This permits that tweet to be mapped down to the side of a specific street intersection they were standing on when they sent it. Less accurate are “Place” geotags in which the phone’s GPS coordinates can be used to provide the user a predefined list of nearby locations to select from. Instead of broadcasting their GPS coordinates, the tweet might offer only that it was sent from “London” or “The United Kingdom” or at best a particular business address. Finally, users can also specify their location in free text in either their User Location or User Description fields. These are freeform text, however, meaning users can type anything they want, including fantasy locations like “The Moon” or “The Shire.” Freeform textual locations in the Location and Description fields are not verified sources of information. Users can type any location they want and activism campaigns may even encourage users to change their location to a given city or country to draw attention to events there. 
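In the classic v1.1-style tweet JSON (the field layout assumed below for illustration), those three sources appear as a machine-readable coordinates field, a place object and the free-text user location, and a simple triage function can report which, if any, is present:

```python
def geography_source(tweet):
    """Classify the best available location signal in a (v1.1-style) tweet record:
    exact GPS coordinates, a coarser Twitter Place, unverified free-text profile
    location, or nothing at all. Field names follow the classic JSON layout."""
    if tweet.get("coordinates"):                     # GeoJSON point: [longitude, latitude]
        return "gps", tuple(tweet["coordinates"]["coordinates"])
    if tweet.get("place"):                           # coarse Place chosen by the user
        return "place", tweet["place"].get("full_name")
    loc = (tweet.get("user") or {}).get("location") or ""
    if loc.strip():                                  # free text; may be "The Moon"
        return "profile_text", loc.strip()
    return "none", None

examples = [
    {"coordinates": {"type": "Point", "coordinates": [-0.1276, 51.5072]}},
    {"place": {"full_name": "London, England", "place_type": "city"}},
    {"user": {"location": "The Shire"}},
    {"user": {"location": ""}},
]
for tw in examples:
    print(geography_source(tw))
```

Only the first of these is verified, device-derived geography; the last is simply whatever the user chose to type.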
The timeline below shows the total percent of tweets and distinct tweeting users that had a non-empty Location field over time in the 1% stream. From a high of 70% in January 2012 the percentage of accounts providing a Location value dropped to around 47% before recovering to around 62% as of October 2018. In total around 51% of distinct users populated the Location field over the seven years, representing around 58% of tweets in the 1% stream. Percent of tweets and distinct tweeting users in Twitter's 1% stream that had a non-empty Location... [+] field Kalev Leetaru Though the Location field can be readily geocoded, its unverified nature and imprecise resolution means it is little more useful than other forms of textual geography. Indeed, Twitter itself cautioned in 2009 that “since anything can be written in this field, it’s interesting but not very dependable.” This is the reason that Twitter officially introduced geotagged tweets in 2009 in which tweets could be directly assigned the GPS coordinates where they were sent. Mindful of the privacy concerns in publishing precise GPS coordinates, Twitter Places followed shortly thereafter, allowing users to share a coarser geographic indicator such as a city or specific business. Rather than offer a precise GPS coordinate, Twitter Places offer a bounding box defining the total area of interest, from which an approximate centroid can be computed for mapping. With the debut of Places, users could choose to share no location information at all with their tweets (the default), share their precise GPS coordinates (the most sensitive) or meet halfway and share only the country, city, neighborhood or business they sent the tweet from. For most mapping uses, it is enough to know that a tweet about a football game came from someone physically sitting in that stadium watching that game in person rather than someone watching it on television on the other side of the world. Knowing the specific seat they were sitting in, courtesy of their GPS coordinates, would be overkill for most purposes. Similarly, a consumer product brand just needs to know that someone in the Wall Street area of Manhattan enjoys their product, they don’t need to know which street corner the person was standing on when they commented about it. At the same time, that sports stadium might find it useful to aggregate fan feedback by stadium section, to understand what fans in each part of the stadium talked the most about. Similarly, a brand might wish to know which of their physical stores a customer was in when they commented about it being run down or having a negative interaction with staff. Thus, there is a natural tension between Twitter’s core design of placing tweets on a map, for which city-level data is likely perfectly fine in most cases and the precision needs of commercial brands wishing to harness tweet geography for enhanced understanding of their customers. So how common are geotagged tweets? The timeline below shows the percent of all tweets in the 1% stream that were geotagged, including both GPS-tagged tweets and Place-tagged tweets. Percent of tweets in the Twitter 1% stream that were geotagged (GPS + Place) Kalev Leetaru Immediately clear is the steady decrease in geotagged tweets over the past seven years, from a high of just over 3.5% of tweets to around 1.5% of tweets since early 2017. It appears that over the last seven years Twitter’s user base has been less and less interested in sharing their location in any form. 
This has fascinating implications for how we think about locative privacy in the digital era. Given the choice of broadcasting their precise GPS coordinates, sharing only their city or country or sharing no verified location information at all, it appears that Twitter users have decided more and more not to share any form of verified locative information, even coarse city-level information. As companies increasingly look to collect, monetize and even outright sell our realtime locations collected from our phones, it is noteworthy that at least Twitter’s user base has made it clear that they view their verified location as something increasingly too sensitive to share. This unwillingness to share even coarse verified locations has particular ramifications for proposals to combat “deep fakes” by requiring imagery and video to be digitally signed with the user’s GPS coordinates to prove that their smartphone camera was actually at that precise geographic place when it recorded an image. If users aren’t willing to allow their tweets to reveal even their GPS-verified city, why would they permit GPS-based signing of the content they generate for other purposes? Where does Twitter’s geotagged geography come from? The timeline below shows the three primary sources of geotagged tweets in the 1% stream. Percent of tweets in the Twitter 1% stream that were geotagged by source Kalev Leetaru Before Twitter formally supported GPS-tagged tweets, some applications would record the GPS coordinates in the user’s textual Location field. In January 2012 this actually constituted the majority of all geotagged tweets in the 1% stream. Use of the Location field to record GPS coordinates decreased rapidly from the start of the analysis period through early 2015 and as of October 2018 accounted for around 0.08% of all tweets in the 1% stream. This field was largely missed by most early studies of Twitter’s geography, leading to a significant underrepresentation of geotagged tweets. Rather than record GPS coordinates in the textual Location field, Twitter clients are supposed to record them in a dedicated machine-readable geographic coordinates field in the tweet’s JSON record. Tweets containing this field steadily increased through late 2014, then decreased sharply, before falling vertically on April 28, 2015. The overnight decrease of GPS-tagged tweets from around 1.5% of all tweets in the 1% stream, to 0.5% of all tweets on April 28, 2015, to around 0.27% by October 2018 and falling coincides with a policy shift Twitter made regarding Place-geotagged tweets. A Twitter spokesperson confirmed that prior to April 28, 2015 when a user elected to share only a coarse Twitter Place as their location instead of their precise GPS coordinate, Twitter would silently record their precise GPS coordinates in the tweet’s JSON record but would only display the Place in its public interface. Thus, a user that chose to report their tweet as being sent from “San Francisco” would see only “San Francisco” reported on Twitter’s website and apps, but in reality, their precise GPS coordinate had been quietly recorded into the JSON record of their tweet and distributed via all of Twitter’s commercial and free data streams, meaning data miners, social analytics companies and others could precisely pinpoint them. On April 28, 2015 Twitter switched its data streams so that they no longer included a user’s GPS coordinates unless the user explicitly elected to share their exact GPS-tagged location. 
The impact of this transition can be seen in the timeline below, which shows the percent of all geotagged tweets in the 1% stream that included precise GPS coordinates. Almost overnight geotagged tweets drop from around 80-90% having GPS coordinates to just 20% through October 2018. Percent of geotagged tweets in the Twitter 1% stream that included precise GPS coordinates rather... [+] than centroid-only coordinates Kalev Leetaru To illustrate the impact of this change on the geographic resolution of Twitter, the timeline below shows the total number of distinct coordinates (rounded to two decimals) conveyed by Place-geotagged tweets versus all geotagged tweets in the 1% stream. Geotagged tweets tagged with Place locations have an extremely limited set of locations, averaging only around 10,000 distinct locations on earth each day in the 1% stream. As GPS tagged tweets have continued to decline, the total number of distinct places on earth represented by Twitter’s geotagged tweets continues to decline. Number of distinct coordinates (rounded to 2 decimals) conveyed by Twitter Places versus all... [+] geotagged tweets in the Twitter 1% stream Kalev Leetaru In the place of precise GPS coordinates, we are now largely relegated to Twitter Places, which define bounding boxes around the given geography of interest. This transition is significant both from the standpoint of data analysis and what it tells us about how people view their own location privacy. Precision GPS-tagged posts were one of the selling points of working with Twitter data for researchers interested in mapping societal events and narratives in realtime. Reducing the resolution of tweets from the precision of GPS coordinates down to city-level centroids is devastating to data miners yet is a huge boon for the privacy of Twitter’s users. To a typical Twitter user, the company’s privacy stance of reducing the resolution of geotagged tweets to city- or business-level bounding boxes affords them a level of locative privacy not available through many other popular social networks. At the same time, data miners have historically repurposed these geographic coordinates for all sorts of purposes which were rendered obsolete in the space of 24 hours when Twitter made its switch. This reminds us how much of the “big data” world is based on repurposing data for applications it was never intended for and how easily that data can be changed with a flip of the switch in a way that breaks all of those use cases. Our “data repurposing” economy is far more fragile than we often realize. Thus, since April 2015 the overwhelming majority of available Twitter geography comes in the form of Twitter Places. How different is the geographic picture we receive from Places? Over the last seven years the Twitter 1% stream has recorded a total of 734,049 unique Place locations and a total of 3,360,101 unique location/language pairings (a given location might have multiple languages of tweets tagged as it). Since Place locations are typically mapped by converting their bounding boxes to point-mappable centroids, the result is 688,796 distinct centroids. The top 20 locations are seen in the table below. 
Place Name: Number of Tweets
Türkiye: 1,747,726
Rio de Janeiro, Brasil: 1,674,599
São Paulo, São Paulo: 1,095,383
Los Angeles, CA: 954,601
Rio de Janeiro, Rio de Janeiro: 902,016
Turkey: 867,954
São Paulo, Brasil: 778,227
İstanbul, Türkiye: 765,599
Chile: 662,416
Houston, TX: 659,268
Chicago, IL: 569,738
Manhattan, NY: 566,372
Porto Alegre, Rio Grande do Sul: 560,538
Brasil: 553,391
Ciudad Autónoma de Buenos Aires, Argentina: 480,294
Curitiba, Paraná: 418,697
Buenos Aires, Argentina: 416,561
Texas, USA: 416,550
Florida, USA: 409,976
Georgia, USA: 409,309
Twitter classifies all Place locations into one of five types, ranging from recording just the country the user was in when they sent a tweet down to a specific point of interest like a museum, restaurant, airport or business office. As the table below makes clear, the vast majority of Place-geotagged tweets in the 1% stream record the person's location only down to city-level resolution, followed by administrative divisions (such as a US state) and countries. In all, 98.7% of all Place-geotagged tweets in the 1% stream were city-level or worse. Just 0.43% of Place-geotagged tweets record the actual business address where a user was.
Place Type: Number of Tweets (% of Place Tweets)
City: 123,489,522 (81.53%)
Admin: 16,381,181 (10.82%)
Country: 9,658,987 (6.38%)
Neighborhood: 1,284,527 (0.85%)
Points of Interest: 650,379 (0.43%)
Knowing that someone is somewhere in New York City is useful, but when trying to triage a flurry of tweets about a major building fire that has just broken out in order to alert nearby citizens, city-level resolution is far less helpful. From precision GPS coordinates accurate to which side of a street a person is standing on, to just 0.43% of Place-geotagged tweets having even building-level resolution, Twitter has enforced an immense loss of geographic precision, severely limiting its application to topics like realtime disaster response, understanding the street-level flows of urban life and so on. Given that, combined, neighborhood and points of interest represent just 1.3% of all geotagged tweets in the 1% stream, the verified geography of Twitter is for all intents and purposes now limited to studying cities at the level of the city itself, rather than at any finer resolution. This is particularly significant in that, from the standpoint of verified geographic precision, Twitter is now effectively identical to news media, limited to city-level resolution. In fact, news media may actually offer higher resolution for many events, since articles often include street addresses, business names and other sub-city locative identifiers. What does the centroid-level Place geography of Twitter look like? The map below shows the combined geography of the centroids of every Place found in the Twitter 1% stream 2012-2018, colored by the primary language of tweets from that location, as recorded by the CLD2 algorithm. The color palette was generated using R’s palette generator for the top 100 most common languages and randomized to ensure the best possible visual separation between nearby languages. The image below is also available in full resolution 4K, 16K and 32K versions. Geotagged Place tweets in Twitter's 1% stream colored by language 2012-2018 Kalev Leetaru Compare the image above with that of worldwide news media 2015-2018 in the 65 languages monitored by the GDELT Project (this map uses a different color palette due to its different set of monitored languages). Higher resolution versions of the map are available, including a 16K resolution image.
While the image below reflects the textual geography of news mentions rather than the GPS-derived creation geography of the one above, it offers a useful reference of the places where humans reside and care about (and thus places where we would want geotagged tweets from). The stark difference between the two maps reminds us how much of the human world is absent from Twitter’s geotagged tweets. Geography of global online news mentions of place in the 65 languages monitored by the GDELT Project Kalev Leetaru How much did the 80% reduction in GPS-tagged tweets affect the geography of Twitter? The two maps below compare the Twitter 1% stream in the year before and the year after the elimination of most GPS-tagged tweets. While the two images are largely alike, the second is far sparser, reflecting the planned geography of cities rather than the unexpected geography of daily human life. Geotagged tweets in Twitter's 1% stream colored by language April 27, 2014 to April 27, 2015 Kalev Leetaru Geotagged tweets in Twitter's 1% stream colored by language April 28, 2015 to April 28, 2016 Kalev Leetaru The “before” map is available in 4K, 16K and 32K versions for download and the “after” map is also available for download in 4K, 16K and 32K. Unfortunately, the geography of Twitter will increasingly trend towards the “after” map, becoming more and more sparse as the rich detail of the GPS era gives way to constrained city centroids of the Twitter Place era. When asked how Twitter saw this resolution reduction affecting its use cases and whether it was working on ways of encouraging more users to share at least coarse geographic indicators, the company declined to comment. Setting aside what Twitter is becoming, what did Twitter look like at its geographic prime? If all Place-geotagged tweets are combined with the 80.2 million tweets with GPS coordinates (which together record 69 million distinct locations rounded to two decimals) in the 1% stream from 2012-2018, what would the final map look like? The image below presents the sum total of every geotagged tweet in the Twitter 1% stream from January 2012 through October 2018, colored by the most common language of tweets sent from that location per CLD2. For unknown reasons, the coordinates field of geotagged Finnish language tweets from 2015-2018 report a randomized square bounding box over the country, so only Finnish language tweets from January 2012 to December 2014 are included. A similar artifact can be seen at the centroid of Turkey. A Twitter spokesperson clarified that Twitter itself does not perform any jittering or other obfuscation of GPS coordinates, recording them as-is from the device, so it is unclear what might be causing these issues. All geotagged tweets in Twitter's 1% stream January 2012 to October 2018 colored by the most common... [+] language of tweets sent from that location Kalev Leetaru The level of detail captured in this map is exquisite, recording not just the cities and transportation corridors of our shared planet, but the most minute detail about how we go about our lives throughout the world. National borders are clearly visible across the world, as is the monolingual nature of most countries. Zoom into most cities around the world, however, and you’ll notice a vibrant multilingual core representing the diversity of both residents and tourists in the planet’s urban cores. Full resolution versions of the map are available for download in 4K, 16K and 32K resolutions. 
The 32K resolution image is also available as an online version that can be interactively zoomed/panned. Since the vibrant palette of colors in the map above can make it hard to get a full sense of Twitter's reach over the last seven years, the image below reproduces the map but colors all points black for greater contrast. All geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Full resolution versions of the map are available for download in 4K, 16K and 32K resolutions. There are also full resolution versions of the map with a black background and white point layer in 4K, 16K and 32K resolutions. The 32K resolution images are also available as a white background zoomable version and a black background zoomable version. Immediately noticeable in the by-language map above is how languages tend to have one or more primary locations where they dominate, but also seem to cluster in specific secondary areas elsewhere in the world. This verified linguistic geography raises the question of what it might look like to map just a single language at a time, to show the verified areas where a language has been tweeted most frequently over the last seven years. The map below captures all geotagged Arabic language tweets in the 1% stream January 2012 to October 2018, reflecting both its dominance in the Middle East and its prevalence in central Europe (see also 4K and 16K versions of the map). All Arabic language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Despite Twitter being officially banned in China, there are still quite a few geotagged Chinese language tweets in the 1% stream. Outside China, the language is largely found regionally, in central Europe and along the US coasts (see also 4K and 16K versions of the map). All Chinese language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Estonian language tweets in the 1% stream span the globe, reflecting that, for its small population, Estonia has a global presence (see also 4K and 16K versions of the map). All Estonian language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Japanese tweets in the 1% stream paint yet another very distinct geographic profile. Notably, Japanese tweets in the US are found most heavily on the West Coast and in the Northeast, clustered in major US cities (see also 4K and 16K versions of the map). All Japanese language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Portuguese tweets in the 1% stream unsurprisingly cluster primarily in the western half of Europe and in Brazil, but also have high representation across the US and Latin America, as well as Japan and Indonesia (see also 4K and 16K versions of the map). All Portuguese language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Finally, Russian tweets in the 1% stream extend seamlessly from Russia through the former Soviet republics, with another large cluster in central Europe, but have little representation in Latin America. Like Japanese, Russian is largely limited to major cities in the US (see also 4K and 16K versions of the map). All Russian language geotagged tweets in Twitter's 1% stream January 2012 to October 2018 Kalev Leetaru Each of these maps reflects a static snapshot of seven years of Twitter compressed into a single image and thus cannot communicate the way in which that geography has changed over time.
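Each of these single-language maps boils down to the same filter-and-plot operation. The sketch below is purely illustrative (the originals were rendered with a different pipeline) and assumes tweets have already been reduced to (latitude, longitude, CLD2 language code) tuples.

```python
import matplotlib.pyplot as plt

# Illustrative sketch of how a single-language dot map can be produced.
# `tweets` is assumed to be an iterable of (lat, lon, lang) tuples, with
# `lang` the CLD2 language code of the tweet text.
def plot_language(tweets, lang_code, out_path):
    pts = [(lon, lat) for lat, lon, lang in tweets if lang == lang_code]
    if not pts:
        return
    xs, ys = zip(*pts)
    plt.figure(figsize=(12, 6))
    plt.scatter(xs, ys, s=0.2, c="black", linewidths=0)  # one dot per geotagged tweet
    plt.xlim(-180, 180)
    plt.ylim(-90, 90)
    plt.axis("off")
    plt.savefig(out_path, dpi=300, bbox_inches="tight")

# e.g. plot_language(stream_tweets, "ar", "arabic_map.png")
```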
How has Twitter changed linguistically and geographically over these seven years? The animation below shows all geotagged tweets in the 1% stream by day from January 2012 to October 2018, colored by the primary language of tweets from that location on that day per CLD2. In the space of just 4 minutes and 20 seconds, you are seeing seven years of Twitter’s linguistic and geographic evolution go by. Look closely at the map and you will see that, at least through the eyes of its geotagged 1% stream, Twitter does not expand much geographically beyond the places it was used in January 2012. Through April 2015 it largely entrenches itself in the places where it already existed, spreading across Saudi Arabia, for example. The impact of the April 2015 loss of GPS coordinates can be seen clearly in the animation as the map suddenly becomes far sparser and afterwards exhibits a gradual reduction in overall geotagged geographic information, becoming less and less detailed through October 2018. Putting this all together, what can we learn from all of these images? First off, it is important to remember that all of the statistics here have come from the Twitter 1% stream. This stream has been found to be highly correlated with Twitter as a whole, but it is possible that there are unknown limitations to that reflection. Twitter’s refusal to provide official statistics, however, means that it represents one of the only external sources we have for understanding the platform as a whole. Moreover, the popularity of the 1% stream with researchers means that regardless of how well it reflects Twitter itself, it defines the view of Twitter that powers a lot of data analysis. Most obviously, we see a platform that is constantly evolving and changing. Over the last seven years Twitter has undergone enormous change that has fundamentally altered the nature of the signal it provides us about the world. The problem is that Twitter’s own users and the world’s social media analysts who use the platform to draw insights about the world have no visibility into these changes and so are unable to adapt their usage and algorithms. For its part, Twitter no longer provides the kind of detailed statistics that would help its users and the data community understand how its platform is changing and adjust their own usage to match. Indeed, the company declined to comment when asked about even the most basic statistics, like how many tweets had been sent or how many tweets were retweets. It is inconceivable that one of the world’s most influential social networks operates entirely as a black box into which we have effectively no visibility. For a service that is used by heads of state themselves to connect with their citizenry and which moves markets, understanding the kinds of trends captured in the images above is absolutely critical. From a data science standpoint, it is worth noting that we tout our datasets as reflecting an ever-changing society even while the methods we use to analyze them assume fixed and unchanging characteristics. The images here suggest we need more than ever to take the time to step back and understand the data we use each day, rather than blindly rushing forward with the unnormalized insights they give us. In the end, perhaps the most important takeaway is just how much the black boxes that fuel the “big data” revolution are changing right before our eyes and how fundamentally those changes are altering the nature of the signals those datasets provide us.
It is time we acknowledged this rapid rate of change and adapted our methods to match.
3e28f26226e621bcaccf38135acbaf54
https://www.forbes.com/sites/kalevleetaru/2019/03/07/how-data-scientists-turned-against-statistics/
How Data Scientists Turned Against Statistics
How Data Scientists Turned Against Statistics One of the most remarkable stories of the rise of “big data” is the way in which it has coincided with the decline of the denominator and our shift towards using algorithms and workflows into which we have no visibility. Our great leap into the world of data has come with a giant leap of faith that the core tenets of statistics no longer apply when one works with sufficiently large datasets. As Twitter demonstrates, this assumption could not be further from the truth. In the era before “big data” became a household name, the small sizes of the datasets most researchers worked with necessitated great care in their analysis and made it possible to manually verify the results received. As datasets became ever larger and the underlying algorithms and workflows vastly more complex, data scientists became more and more reliant on the automated nature of their tools. In much the same way that a car driver today knows nothing about how their vehicle actually works under the hood, data scientists have become similarly detached from the tools and data that underlie their work. More and more of the world of data analysis is based on proprietary commercial algorithms and toolkits into which analysts have no visibility. From sentiment mining and network construction to demographic estimation and geographic imputation, many fields of “big data” like social media analysis are almost entirely based on collections of opaque black boxes. Even the sampling algorithms that underlie these algorithms are increasingly opaque. A media analyst a decade ago would likely have used research-grade information platforms that returned precise results with guaranteed correctness. Today that same analyst will likely turn to a web search engine or social media reporting tool that returns results as coarse estimations. Some tools even report different results each time a query is submitted due to their distributed indexes and how many index servers returned within the allotted time. Others incorporate random seeds into their estimations. None of this is visible to the analysts using these platforms. There is no “methodology appendix” attached to a keyword search in most commercial platforms that specifies precisely how much data was searched, whether and what kind of sampling was used or how much missing data there is in its index. Sentiment analyses don’t provide the code and models used to generate each score and only a handful of tools provide histograms showing which words and constructs had the most influence on their scores. Enrichments like demographic and geographic estimates often cite the enrichment provider but provide no other insight into how those estimates were computed. How is it that data science as a field has become OK with the idea of suspending its disbelief and just trusting the results of the myriad algorithms, toolkits and workflows that modern large analysis entails? How did we lose the “trust but verify” mentality of past decades in which an analyst would rigorously test, perform bakeoffs and even reverse engineer algorithms before ever even considering using them for production analyses? Partially this reflects the influx of non-traditional disciplines into the data sciences. Those without programming backgrounds aren’t as familiar with how much influence implementation details can have on the results of an algorithm. 
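One concrete illustration (with hypothetical numbers) of how much an implementation detail can matter: naively multiplying many small probabilities underflows to zero in double precision, while the mathematically equivalent sum of logarithms does not. A toolkit that silently does the former will return scores of exactly 0.0 for perfectly valid inputs.

```python
import math

# Hypothetical example: 100 independent events, each with probability 1e-5.
probs = [1e-5] * 100

naive = 1.0
for p in probs:
    naive *= p          # underflows: the true value 1e-500 is far below ~1e-308

log_score = sum(math.log(p) for p in probs)   # stays representable in log space

print(naive)       # 0.0
print(log_score)   # about -1151.3
```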
Even those with programming backgrounds rarely have the kind of extensive training in numerical methods and algorithmic implementation required to fully assess a particular toolkit’s implementation of a given algorithm. Indeed, more and more "big data" toolkits are built with little attention to the most rudimentary numerical issues, like floating point resolution and the impact of multiplying large numbers of very small numbers together. Even those with deep programming experience often lack the statistics background to fully comprehend that common intuition does not always equate to mathematical correctness. As data analytics is increasingly accessed through turnkey workflows that require neither programming nor statistical understanding to use, a growing wave of data scientists hails from disciplinary fields in which they understand the questions they wish to ask of data but lack the skillsets to understand when the answers they receive are misleading. In short, as “big data analysis” becomes a point and click affair, all of the complexity and nuance underlying its findings disappears in the simplicity and beauty of the resulting visualizations. This reflects that as data science is becoming increasingly commercial, it is simultaneously becoming increasingly streamlined and turnkey. Analytic pipelines that once connected open source implementations of published algorithms are increasingly turning to closed proprietary instantiations of unknown algorithms that lack even the most basic of performance and reliability statistics. Eager to project a proprietary edge, companies wrap known algorithms in unknown preprocessing steps to obfuscate their use, but in doing so introduce unknown accuracy implications. With a shift from open source to commercial software, we are losing our visibility into how our analysis works. Rather than refuse to report the results of black box algorithms, data scientists have leapt onboard, oblivious to or uncaring of the myriad methodological concerns such opaque analytic processes pose. Coinciding with this shift is the loss of the denominator and the trend away from normalization in data analysis. The size of today’s datasets means that data scientists increasingly work with only small slices of very large datasets without ever having any insight into what the parent dataset actually looks like. Social media analytics offers a particularly egregious example of this trend. Nearly the entire global output of social media analysis over the past decade and a half has involved reporting raw counts, rather than normalizing those results by the total output of the social platform being analyzed. The result is that even statistically sound methodologies are led astray by their inability to separate meaningful trends in a sample from the background trends of the larger dataset from which that sample came. For a field populated by statisticians, it is extraordinary that somehow we have accepted the idea of analyzing data we have no understanding of. It is dumbfounding that at some point we normalized the idea of reporting raw trends, like an increasing volume of retweets for a given keyword search, without being able to ask whether that finding was something specific to our search or merely an overall trend of Twitter itself. The datasets underlying the “big data” revolution are changing existentially in realtime, yet the workflows and methodologies we use to analyze them proceed as if they are static.
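In practice, the missing normalization step is nothing more than dividing each day's keyword match count by that day's total tweet volume, so that platform-wide growth or decline is factored out of the trend. A minimal sketch with made-up numbers:

```python
# Minimal sketch of the normalization step argued for above: express a keyword's
# daily match count as a fraction of that day's total tweet volume, so a "spike"
# in raw matches can be separated from growth or decline of the platform itself.
# Both inputs are assumed to be dicts keyed by date.
def normalize_counts(keyword_counts, total_counts):
    return {day: keyword_counts[day] / total_counts[day]
            for day in keyword_counts if total_counts.get(day)}

raw = {"2014-01-01": 1200, "2018-01-01": 1500}               # keyword matches (made up)
totals = {"2014-01-01": 4_000_000, "2018-01-01": 6_000_000}  # all tweets that day (made up)
print(normalize_counts(raw, totals))
# Raw counts rose 25%, but the normalized rate actually fell: ~0.0003 -> ~0.00025
```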
Even when confronted with the degree to which their datasets are changing and the impact of those changes on the findings they publish, many data scientists surprisingly push back on the need for normalization or an increased understanding of the denominator of their data. The lack of a solid statistical foundation means many data scientists don’t understand why reporting raw counts from a rapidly changing dataset can lead to incorrect findings. Putting this all together, how is it that in a field that is supposedly built upon statistics and has so many members who hail from statistical backgrounds, we have reached a point where we have seemingly thrown away the most basic tenets of statistics like understanding the algorithms we use and the denominators of the data we work with? How is it that we’ve reached a point where we no longer seem to even care about the most fundamental basics of the data we’re analyzing? Most tellingly, many of the responses I received to my 2015 Twitter analysis were not researchers commenting on how they would be adjusting their analytic workflows to accommodate Twitter’s massive changes. Instead, they were data scientists working at prominent companies and government agencies and even leading academics arguing that social media platforms were so influential that it no longer mattered whether our results were actually “right,” what mattered was merely that an analysis had the word “Twitter” or "Facebook" somewhere in the title. The response thus far to this week’s study suggests little has changed. In the end, it seems we no longer actually care what our data says or whether our results are actually right.
1211a70af5107826b6907f85e587e207
https://www.forbes.com/sites/kalevleetaru/2019/03/08/was-the-era-of-social-media-big-data-based-on-false-hype/
Was The Era Of 'Big Data' Social Media Based on False Hype?
Was The Era Of 'Big Data' Social Media Based on False Hype? One of the most surprising findings when looking back at Twitter’s evolution from 2012 to 2018 is just how small it turns out social media really is. For years the public narrative around social platforms has been that they were the flag bearers of the “big data” revolution, holding some of the largest datasets in the world that could offer unprecedented views into the heart of human society. In reality, it seems social media is far smaller and more limited than we ever realized. Using Twitter’s 2012-2015 trajectory, the service was estimated to be not much larger than the online news media it was supposed to replace. Armed with Twitter’s actual size over the last seven years, it turns out those original estimates were far too generous. Twitter’s slow decline and rising retweet rate mean the total amount of unique content per day in Twitter’s firehose is becoming smaller and smaller. Since Twitter's founding 13 years ago, it appears there have been only around 1.1 to 1.2 trillion tweets and their small size means the actual total size in bytes of all the unique textual content that has ever flowed across Twitter’s servers is extraordinarily small. Over the last seven years, covering Twitter’s peak growth period, there has been less than 33TB of mineable original text in all. On a typical day at the start of 2012 there was around 11GB of novel text posted to Twitter, rising to around 20.5GB of text a day by July 2013. Yet that number has steadily shrunk, reaching just 10.5GB a day by last October and is on a downward trajectory. Newly emerging statistics from Facebook's research dataset collaboration suggest it too is vastly smaller than believed. The company's vaunted hyperlink dataset containing “almost all public URLs Facebook users globally have clicked on, when, and by what types of people” is almost three times smaller than a similar news-based link dataset, despite being collected over twice the timespan. For all these years we have stood outside the walled gardens of the major social platforms, projecting onto them our own dreams and imaginations of what their datasets must look like. Instead, as we are getting our first glimpses inside their immense data estates, we find this promise was all hype. In reality these vast social media archives aren't much larger than the traditional data streams that came before. Twitter’s total textual output over the last seven years is just a few tens of terabytes, while Facebook’s master link dataset is just a fraction of a single news linking dataset. Why then do we think of Twitter and Facebook as being so massive? The answer is that Silicon Valley has mastered the art of the reality distortion field, encouraging us to project our own dreams upon them while avoiding anything that might bring us back to reality. In the case of Twitter and Facebook, both companies released copious statistics in their early high-growth periods, chronicling their every milestone. As growth slowed, they pulled back on these statistics and eventually both companies largely halted regular releases of detailed growth statistics. Even when pressed, the companies steadfastly decline to release any kind of statistics, from volume counts to the false positive rates of their algorithms. In the absence of official statistics, users have been free to imagine the companies holding immeasurably large archives of human behavior. The implications for this false narrative we have created around social media are profound. 
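For a sense of how size estimates like these are assembled, the arithmetic is simple: original (non-retweet) tweets per day multiplied by average tweet length in bytes. The inputs in the sketch below are placeholders chosen for illustration, not the article's actual figures.

```python
# Back-of-envelope arithmetic with placeholder (hypothetical) inputs.
tweets_per_day = 500_000_000   # hypothetical total tweets per day
original_share = 0.55          # hypothetical share that is not a retweet
avg_tweet_bytes = 80           # hypothetical average tweet text length in bytes

gb_per_day = tweets_per_day * original_share * avg_tweet_bytes / 1e9
tb_per_7_years = gb_per_day * 365 * 7 / 1000
print(f"{gb_per_day:.1f} GB/day, ~{tb_per_7_years:.0f} TB over seven years")
# -> 22.0 GB/day, ~56 TB over seven years
```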
First and foremost, the incredibly small actual size of these social platforms reminds us that the insights we receive from social media are not nearly as representative as we’ve been led to believe. In the case of Twitter, the total number of tweets is shrinking, tweets are increasingly merely retweets and the accounts posting all of that content are older and older. Geography is becoming less and less available and less and less precise. The few statistics available for Facebook, from the size of its link dataset to the size of the training data it uses for counter-terrorism, suggest that it too is vastly smaller than we have believed. Putting this all together, our “big data” signal turns out to have been quite a small data signal and is getting smaller, less precise and less representative by the day. In the end, rather than continuing to put social media up on a pedestal of our imagination, perhaps it is finally time for society to accept reality and look for new, more representative and more privacy-protecting ways of understanding ourselves before the public begins to lose faith in the “big data” world.
e3bbc6893d1b8d4ddd314ab8a66b2bf5
https://www.forbes.com/sites/kalevleetaru/2019/03/11/global-mass-surveillance-and-how-facebooks-private-army-is-militarizing-our-data/
Global Mass Surveillance And How Facebook's Private Army Is Militarizing Our Data
Global Mass Surveillance And How Facebook's Private Army Is Militarizing Our Data Facebook logo. (Jaap Arriens/NurPhoto via Getty Images) Getty Last month, Business Insider published an extraordinarily detailed look into the private army protecting Facebook and its founder. While much of the expose revolved around the company’s traditional security measures like armed guards and executive protection details, what stood out was the company’s close relationship with law enforcement and its use of traditional government surveillance technologies from secret cellphone tracking to license plate scanners to the proposed deployment of facial recognition to its global surveillance camera network. What can we learn from the way in which Facebook is in many ways militarizing its two billion users’ data? One of the most notable aspects of last year’s non-stop deluge of privacy and security-related Facebook news is that despite security breach after security breach in which its two billion users’ data was stolen, harvested or otherwise misappropriated by third parties, the company’s own data has remained largely untouched. Despite seemingly unable to secure the most intimate and private information of its users in any way, when it comes to the company’s data, it has a formidable security apparatus that ensures few leaks. In other words, when it comes to the safety and security of its users, the company has found itself again and again unable to protect them from harm. When it comes to itself, the company has invested heavily in ensuring it does not suffer similar breaches. It seems this placement of its own safety over that of its users extends to the physical world as well. Business Insider’s documentary on Facebook’s physical security practices notes that the company employs, among other technologies, license plate scanners as part of its surveillance camera network and has considered applying facial recognition to its cameras to instantly recognize those entering or moving about its buildings and grounds. Asked where the company obtains the license plate information of users it wishes to track, a company spokesperson declined to comment. Asked whether Facebook receives license plate information through its relationship with law enforcement and whether it maintains its own databases of license plate information obtained through any means, the company again declined to comment. The ability of social media companies to stockpile databases of license plate information to use in their own operations is of great concern and raises the question of what else Facebook could do with all of that information. Recall that Facebook has previously proposed selling facial recognition capabilities to retail stores that would tie the people walking around their storefronts back to their Facebook profiles without their consent. One could easily imagine Facebook constructing a vast database of license plates that would add a list of all vehicles owned by each of its two billion users. This could in turn be sold as a service to governments or private companies to turn private internet-connected camera networks into realtime surveillance platforms. Coupled with Facebook’s vast facial recognition database, the company could offer a realtime parking lot inventory to a store, along with the identity of each driver and passenger in all of those vehicles and their realtime location within the store, along with all of their interests and an estimate of whether they are a potential shoplifter or a valuable potential customer. 
It could then turn around and sell a feed to law enforcement and government security services that provides them the realtime location of every citizen and visitor within their borders as they go about their daily lives. Even those going to great lengths to conceal their location from security services by not carrying any electronic devices would likely eventually be spotted by one of these Facebook-connected cameras. Again, the company declined to comment about its license plate scanning technology. Tellingly, when asked whether the company had since deployed facial recognition to any of its security cameras, the company declined to comment. For a company that typically pushes back forcefully on privacy-related rumors, the fact that Facebook did not deny having now deployed facial recognition to its camera networks is a critical reminder of the rights we grant it to our private data. Given that Facebook already transforms our private photographs into monetizable facial recognition models that it uses to identify images we appear in, it doesn’t take much imagination to foresee the company simply taking those existing models and connecting them to its global network of surveillance cameras. The company’s generous exceptions to its privacy and safety settings allow it to ignore most user preferences when it comes to the safety and security of the platform, opening the possibility that it could build facial recognition models even for those who have explicitly demanded the company not do so, potentially placing the company in conflict with GDPR for its EU users. Once again, the company declined to comment on where its facial recognition imagery comes from or whether it would commit to not mass harvesting its users' private imagery to build its global surveillance network. The company’s cozy relationship with law enforcement also raises serious questions about its commitment to privacy. The kind of mass facial recognition and tracking platforms the company has built for itself are tailor-made for the needs of repressive governments all across the world, suggesting that much like its advertising selectors, its physical surveillance platform will soon be co-opted by governments to outsource their intelligence and repression needs, if it hasn't already. Putting this all together, in the end, we are once again reminded that in our rush to livestream our lives and pour our most intimate moments into Facebook’s walled garden, we are signing away the rights to our privacy, our thoughts and even our likenesses to a private company that has the legal right to do whatever it pleases with our information and is under no obligation of any kind to tell us what it does with and to our very lives. Orwell would be proud.
c07edaa863bf9a26aa98245f4beabd01
https://www.forbes.com/sites/kalevleetaru/2019/03/14/are-our-intelligence-agencies-getting-the-wrong-advice/
Are Our Intelligence Agencies Getting The Wrong Advice?
Are Our Intelligence Agencies Getting The Wrong Advice? CIA Headquarters. (Brooks Kraft LLC/Corbis via Getty Images) Getty Perhaps the greatest question as we look back on the state of data analytics today is how we got here. How did the early mathematical-driven promise of “big data” and “social media analytics” devolve into a marketing-led hype-laden world of hyperbole in which we no longer seem to actually care what our data says or how it is changing out from underneath us? Part of the answer may lie in the fact that even our nation’s most prestigious scientific advisors have succumbed to the siren song of mining “must have” datasets without fully understanding what they look like inside and whether the insights they yield reflect reality. The National Academies recently completed a report for the US intelligence community exploring the use of emerging technology, including social media, for US intelligence needs, outlining a 10-year research agenda. The Academies’ response to questions about the social media components of that report remind us that even our most respected and austere scholarly institutions are not immune to the hype around “big data,” with dangerous implications to the future of our national security. The National Academies of Sciences, Engineering and Medicine was commissioned by the US Office of the Director of National Intelligence (ODNI) to “explore opportunities for research from the [Social and Behavioral Sciences] disciplines to support the work of intelligence analysts and enhance national security … [and assist] in developing a 10-year agenda for SBS research with applications to intelligence analysis.” The final report, titled “A Decadal Survey of the Social and Behavioral Sciences: A Research Agenda for Advancing Intelligence Analysis,” offers that “the primary function of the intelligence analyst is to make sense of information about the world, but the way analysts do that work will look profoundly different a decade from now. Technological changes will bring both new advances in conducting analysis and new risks related to technologically based activities and communications around the world. Because these changes are virtually inevitable, the Intelligence Community will need to make sustained collaboration with researchers in the social and behavioral sciences (SBS) a key priority if it is to adapt to these changes in the most productive ways.” While the report’s overarching focus is on the evolving landscape of intelligence analysis in a world upended by technological change, there is an outsized focus on social media, especially Twitter. In fact, Twitter alone is mentioned 64 times and forms many of the report’s case studies and examples. Yet, the report largely fails to acknowledge or consider the immense change Twitter has undergone as it has evolved over the past 13 years and the enormous implications its transition from a content to an attentional source has on the kinds of intelligence questions most commonly posed to it. When asked for comment, the National Academies had one of the report’s committee members provide responses. 
With respect to why the report did not address Twitter’s existential change, the committee member offered only that “discussion on how Twitter has changed, or indeed any social media's changes, is beyond the scope of the report.” The problem with this response is that it reflects the enormous gap between the accuracy needs of academia, where there often is no “right” answer and errors have little consequence and those of the intelligence community in which an error can cost human lives and even lead to armed conflict. While to academia it may simply be “beyond the scope” of its work to discuss how social platforms are changing, to the intelligence community the report is intended for, those changes are existential to their work. Understanding the nature of an intelligence signal and how it is changing over time is the most basic and central step of an intelligence analysis and one that has received far too little attention within the community with respect to emerging social datasets. Indeed, most governmental social analyses still report absolute volume counts and raw trends without any understanding that their policy findings merely reflect Twitter’s changing baseline rather than anything to do with the policy question they are analyzing. Asked specifically about the issue of normalization, the committee member again responded that “we simply did not have time or space to discuss all relevant issues.” The issue of normalization is not some obscure arcane peripheral topic with no relevance to social analytics. In contrast, it sits at the very core of the entire analytics pipeline, determining whether the results we receive have any meaning at all to the questions we pose. Dismissing normalization as something easily discarded out of space concerns reflects a critical failure to understand just how existentially social media is changing and how directly those changes impact the kinds of questions asked of such data by the intelligence community. Across the intelligence community there is far too little understanding of the existential change social platforms are undergoing and just how much of an impact those changes are having on their analyses. Even as they recognize the changing nature of other signals, analysts still largely treat social media platforms as static and unchanging. Far from being “beyond the scope,” a discussion of just how much social platforms are changing and the direct impact of those changes on the analytic workflows of the intelligence community is effectively mandatory with respect to the committee's charter of outlining the future of analysis. Given the report’s focus on charting the evolving landscape of intelligence analysis in a world upended by technological change, it would seem that absolutely central to the Academies’ report would be the theme of how these new insights compare with the ways in which intelligence collection and analysis has been conducted in the past, especially the ways in which the new insights gained differ from those before, the new ways in which those insights can be manipulated and impacted and both the strengths and dangers of these new datasets. Yet, here again the response was that such a conversation was negated by time and space constraints. This raises the question of why encourage analysts to study new forms of media when the signals they receive are no better than their current datasets and are in fact far more vulnerable to manipulation? 
Perhaps most telling were the committee member’s responses as to why Twitter was so prominently featured in the report. The member offered that Twitter was an outsized focus of the report because “it is the most available platform for scientists to use” and “has been the most studied.” This is precisely the response I’ve heard again and again over the past week and a half: that it no longer matters whether the results we receive from Twitter are correct or complete; we have no choice but to study it since it is the shiny object that lavishes grants and publications upon its examiners, is what everyone else is using and is easy to download. Once again, this reflects the difference between academia and the intelligence community. Academia is prone to chasing short term fads in search of funding and publications. It searches for the easiest data it can find and work with, regardless of accuracy, and typically exhibits flocking behavior in which grant and publication trends skew towards whatever dataset or question is currently in vogue, rather than setting out to answer society’s most pressing questions using the best data available. In contrast, for the intelligence community, accessibility of a dataset is important, but the accuracy and depth of the insights offered by that data are far more important. Sometimes the two may be closely aligned, but in cases where they are not, accuracy and reach are more important than free and famous. The committee member also offered that “Twitter is a major environment in the chain by which information gets to people” and thus is critical to study regardless of how existentially the platform is changing. She went further to offer that “most people, including the IC, get their news now through on-line sources including social media.” Yet, if the focus was on the most critical social media links in the informational ecosystem of what informs a public, then it would seem the report should have focused most heavily on Facebook, which accounts for almost four times as much news consumption as Twitter. Moreover, it appears news consumption through social platforms is slowing. The focus on Twitter also fails to acknowledge Twitter’s very Western-centric prominence in the informational ecosystem. It is extraordinary that the National Academies report makes not even a single mention of WhatsApp, given its enormous influence in spreading misinformation in many parts of the world, such as India. In fact, the Academies’ report focuses almost the entirety of its attention on the platforms most prominent in the West, reflecting its committee membership, rather than looking at the global perspective of information production and consumption, across both traditional social media and other forms, that is of the greatest importance to the intelligence community. More to the point, within the intelligence community, Twitter in particular is still presented as one of the central tenets of “social radar,” in which social platforms are used to observe ordinary citizens all across the world, cataloging the events and narratives they experience and believe in realtime. As Twitter increasingly becomes an echo chamber of elite retweeting and less and less a place for ordinary people to share their own perspectives, this primary intelligence use case is being decimated, yet the report spends little time on the ways in which social platforms are evolving to negate the core demands of intelligence analysis and how the intelligence community might adapt.
Asked about the impact of social platforms’ increasing efforts to curtail surveillance and governmental use of their data and the ability of the platforms and adversaries working with the platforms to deliberately influence intelligence findings due to their centralized natures, again the response was that “we were constrained by space.” Ironically, despite offering that “our charter was to look to the future about what needed to be considered,” much of the Academies’ report was centered on the platform specifics of the present, especially Twitter, rather than the broader platform-independent issues that will define the future of data-driven intelligence using open sources. Partially these issues reflect the ODNI’s desire to look beyond the traditional community most familiar with its unique analytic needs (though some committee members have been funded by the defense community), but in doing so it risks being led astray by the growing focus on marketing over mathematics. Putting this all together, in looking back over the reactions of the past week and a half to Twitter’s evolutionary arc over the past seven years, one could possibly dismiss them as the misinformed musings of the academic, commercial, governmental and NGO communities. Yet, as this hyperbole seeps into even the nation’s most prestigious scientific organizations in the advice they provide to the intelligence community in guiding our national security, it reminds us just how thoroughly the reality distortion field of Silicon Valley has upended our once-sacrosanct fixation on data and statistics. In the end, as even our most austere institutions buy into the hype and hyperbole of the “big data” era, it may simply be time to accept the end of statistics and join our colleagues by closing our eyes and leaping into our hyperbolic future in which “data” is merely a marketing term.
fe0b6cea9f4891363269e7b31b831a8d
https://www.forbes.com/sites/kalevleetaru/2019/04/18/why-do-we-believe-what-we-read-on-the-internet/
Why Do We Believe What We Read On The Internet?
Why Do We Believe What We Read On The Internet? In the early days of the modern web there was a running joke about not believing everything one read on the Internet. It seems somewhere along the way our distrust of unknown information gave way to blind trust in anything and everything we see online. Just how did we get to the point where we believe what we see on the web without verification, and how has this helped accelerate the modern deluge of misinformation? The January 2017 rise of the “rogue” Twitter accounts claiming to be disaffected government employees “resisting” a new administration they did not agree with marked a watershed moment in just how far society’s information literacy had fallen. Suddenly the nation’s scholarly and scientific elite suspended all disbelief and simply blindly accepted that a set of anonymous Twitter accounts were who they claimed to be. Despite refusing to produce even the slightest evidence supporting their claims to be run by US Government officials, the accounts managed to convince a wide swath of the nation’s most educated and evidence-based elite, including many in the journalism community more accustomed to debunking unsupported myths than embracing them with open arms. Even when the anonymous accounts began fundraising, there was nary a concern raised among their supporters that some degree of identity confirmation was needed. In a strange twist of irony, the very communities that had long lampooned and lambasted the public’s information illiteracy and failure to carefully research claims they found on the internet suddenly found themselves happily embracing a myth they had not a bit of evidence to support, just because they read it on the internet. How did we get to this point? Much of the modern digital misinformation landscape owes its rise to the collapse of the traditional gatekeeper model that has historically governed the informational landscape of societies. Citizens are taught from an early age to accept information provided by elites, ranging from the government to the mainstream news media to academia, at face value without question. This model historically worked to some degree because governments had the force of law and physical coercion to enforce their version of “truth” regardless of whether it reflected reality. Accepting the government’s version of events, at least publicly, kept one safe from immediate physical harm. The news media in the post-World War II era reorganized to provide a form of mutually-reinforced objective recounting of events, while academia relied on peer review and editorial processes to weed out questionable theories and experiments. While imperfect, these systems of checks and balances at least helped moderate the flow of false information. In Western societies, these gatekeepers carefully controlled the flow of information, dictating the events and narratives that shaped the national conversation. The modern confluence of a breakdown in trust in our institutions, coupled with the collapse of the gatekeeper model, has meant that society has suddenly been thrust into an informational void without the proper training in how to rigorously evaluate the information in front of it. Instead of browsing a small, carefully curated set of high quality informational streams, our online citizenry are thrown into an ocean of almost limitless low-quality information, forced to expend considerable effort to forage for the rare bit of accurate insight.
Rather than professional journalists carefully researching stories and filtering out rumors and falsehoods, these dedicated gatekeepers have increasingly been bypassed by modern digital platforms that allow the public to access information directly, without these protective filters. In essence, rather than turning to the professional research librarian at the local library that is deeply familiar with reference materials, we turn to a web search and click on the first link, regardless of how questionable the site. Unfortunately, our education systems have not adapted for this new digital age. In a world in which everyday citizens must fend for themselves in the digital free-for-all, students must be taught how to conduct what amounts to the kind of investigative research historians specialize in. Instead of promoting speed over accuracy, schools should teach students to step back and prioritize getting stories right rather than being first. Even within the community of professional online researchers, there is often a lack of familiarity with basic historical research principles like the difference between primary and secondary sources. Historical research has much in common with the skillsets required of the digital age. What if we required every high school graduate to learn the basics of historical research, from understanding sourcing, to resolving conflicting information, to looking across vast reams of information to determining an ultimate conclusion and their confidence in that outcome? The verification and evidentiary mindset of historical research are particularly relevant to the online sphere. Much as historians must piece together conclusions by evaluating a conflicting and chaotic information environment that is filled with gaps, untrustworthy and biased sources, false information and competing perspectives for which there may be no single “correct” answer, so too does discerning “fact” from “fiction” on the web require a similarly skeptical approach to information. Could our history and information science departments play a larger role in helping to prepare society for the information literacy requirements demanded of our new digital world? Unfortunately, today’s academic institutions increasingly see misinformation as merely a buzzword to sprinkle liberally on grant applications and in paper abstracts like fairy dust. Unlike their WWII-era colleagues who rotated through government and battle tested their theories in the real-life world of combat propaganda in defense of their very homelands, today’s misinformation scholars rarely have even the slightest practical experience in the topics they claim to be experts over. Sadly, this has led to a world in which so much of the digital misinformation scholarship emerging from academia lacks any grounding in reality. It is equally important that we not conflate technical literacy with information literacy. Policymakers promoting programming and data science courses in schools often equate such efforts as steps towards combatting misinformation but knowing how to code has nothing to do with knowing how to think critically about the information in front of you. In fact, as Silicon Valley has reminded us so many times, blind faith in algorithms may actually make the spread of misinformation worse. 
Putting this all together, in the end, to truly create an informed society resilient to misinformation, we must look beyond the quick fixes of simplistic rankings and naive algorithms, to teach our citizens to be information literate consumers of the world around them.
01f21cda173f11ba934e34212a2a57e8
https://www.forbes.com/sites/kalevleetaru/2019/04/23/social-media-has-taught-us-to-talk-rather-than-listen/
Social Media Has Taught Us To Talk Rather Than Listen
Social Media Has Taught Us To Talk Rather Than Listen Social media’s great promise was that it would give everyone in the world a voice and bring us together in enlightened conversation about our shared future. The first half of that promise has come true for a quarter of the earth’s population, though most of the world remains absent from the digital revolution. It is the failure of the second half of that promise, to create shared conversation, that has so bedeviled the online world and led to the toxicity and hate, falsehoods and ignorance that threaten to drown out the kinds of enlightenment and community building that can only come from dialog rather than the monologues that rule supreme on social media today. In short, social media has taught us to talk rather than listen. One of the most consequential aspects of Twitter’s evolution over the past seven years is the way in which it has devolved from a place for ordinary people to share their thoughts and experiences into a place the public comes to retweet celebrities. Twitter’s skyrocketing retweet rate and collapsing reply rate remind us that more and more, we talk but we don’t listen. We no longer come to Twitter to gain insight from others and engage in thoughtful and knowledgeable dialog about mutual interests. We line up for a moment of Twitter’s megaphone to broadcast our own thoughts to the world but have little interest in thoughtfully considering the reaction from others. Those who agree with us we considerately retweet or thank, but those who disagree with us we either ignore or silence with vitriol. On social media the one who screams loudest and most forcefully is the one who ultimately defines reality. Indeed, as the Librarian of Congress warned half a century ago, “it is the very simple technique of repeating and repeating and repeating falsehoods, with the idea that by constant repetition and reiteration, with no contradiction, the misstatements will finally come to be believed.” It seems some things never change. In many ways social media, especially Twitter, have merely brought the longstanding practices of academia to the mainstream. In the academic world, each new paper must demonstrate its worthiness of publication by drawing a distinction between itself and the literature that has gone before. This involves what amounts to critiquing past works and calling attention to what the author believes are their limitations. Few scholars actually contact the authors of the papers they cite in their background sections, meaning their critiques of those works may not actually be correct and reviewers aren’t always in the best position to adjudicate such critiques. Scholars rarely respond when asked to correct errors in their citations, even factual errors that undermine the entire outcome of their paper, while journals are similarly rarely eager to retract or force revisions. Some journals go so far as to threaten legal action against authors who request that citations to their works be corrected, due to the embarrassment of a journal having to acknowledge a revision or retraction. The end result is that the core of the academic enterprise involves criticizing others from afar without granting them the opportunity to respond or refute those criticisms. Sound familiar? Social media has brought this model to the general public, building platforms that encourage speaking without listening. 
Imagine if Twitter required users to carefully read at least fifty tweets for each tweet they were permitted to post (checking to make sure the user actually paused the appropriate amount of time to adequately read each rather than merely fast scrolling through them). This would encourage users to spend more time listening than posting. Alternatively, what if Twitter required users to reply to at least ten tweets for each original tweet they themselves posted? (With appropriate content analysis to ensure the replies were relevant to the tweets they were in response to.) This would force users to engage in mutual dialog rather than merely talking past one another. Instead, our social platforms are built to encourage precisely the opposite behavior: singular contribution without the requirement of listening or engaging with others. Users can post almost limitless content without ever consuming or engaging with the posts of others. These superposters become the draw for lurkers who merely consume without posting themselves, but who are still monetizable. Users come to social platforms to consume content and thus social media companies have built their interfaces to incentivize contribution in the most frictionless way possible. An unfortunate consequence of this design is that users are encouraged to offer their perspectives on anything and everything, regardless of whether they have the slightest background knowledge or experience to actually understand what it is they are attempting to comment on. Putting this all together, social media has created a world in which we as a society have been taught to speak rather than listen, with incentive structures designed to prioritize screaming monologues over thoughtful dialog, with the loudest one winning. In the end, social media has failed to live up to the most important part of its promise: bringing us together. Instead of creating a place where we can all come together and engage in conversation in the global town square, we’ve ended up with a great gladiator match of megaphones in which the loudest and most toxic one prevails. Perhaps someday we’ll finally learn to listen.
0cf4aaca5eb7d3b176d9a4acd0bcc813
https://www.forbes.com/sites/kalevleetaru/2019/04/25/will-we-really-need-humans-to-fix-the-robots/
Will We Really Need Humans To Fix The Robots?
Will We Really Need Humans To Fix The Robots? In every conversation about the coming apocalypse of AI and robotics taking human jobs, there is a discussion of how the rise of intelligent machines will create a vast class of new and better human jobs taking care of those robots. This optimistic line of reasoning suggests that as AI and robotics displace lower-paying manual and rote labor, they will create an entirely new class of jobs for those who design, install and maintain these complex systems. Yet, what happens if these new generations of machines are able to design, install and repair themselves without human intervention? Would there be no future for human employment? Central to our automation-driven future is the idea that for each set of rote human jobs taken over by a robot, an alternative human job will be created that involves greater creativity or thought and thus a higher salary. An assembly line full of human welders replaced by robotic welders will require one or more humans to design those robots, install and customize them and eventually maintain and repair them. Yet, what happens as deep learning becomes increasingly good at the very kinds of design, customization, implementation and maintenance tasks that we are counting on humans to take over? Historically a human was required to build a robust deep learning algorithm. Yet, as automated algorithmic construction systems increasingly make it possible for algorithms to help construct new algorithms, we are approaching a point where machines are gradually able to help customize and improve themselves within the confines of narrow application domains. As these systems advance, it is likely that humans will be needed less and less to design and customize AI tools for each individual application. Building a state-of-the-art image recognition network historically required a team of specialized machine learning experts, a massive training dataset and immense hardware resources. Today anyone can leverage new self-constructing AI systems to build a bleeding-edge image recognizer with no machine learning experience or even computing experience at all and a relatively small collection of training examples. Transfer learning, automated algorithmic construction and tuning and the cloud handle the rest. Domain adaptation similarly once required vast human teams of specialists but can increasingly be performed with point-and-click ease using the same process. Thus, a company wishing to launch a new AI system can often build it in plug-and-play fashion using cloud tools that help construct themselves. In fact, today’s more advanced AI construction systems require no human in the loop at all. The user simply uploads labeled training data, clicks the “create model” button and that’s it – a few hours to days later the user has a state-of-the-art algorithm ready to use. New advances are even making it possible for AI systems to monitor their own error rates over time, increasingly automating the process of correcting for input drift. Robotics remains one of the few areas where humans are still currently required due to the need to operate in the fluid and complex physical world. Yet, as deep learning-powered robots continue to improve in dexterity, navigation and problem-solving skills and reasoning capabilities, we will eventually begin to see robots that can repair other robots. 
In fact, robot-repairing robots actually make a lot more sense than using humans, since robots can be built with specialized appendages designed for the specialty mechanical devices they must interact with and repair, such as combined gripper/welder units or snakelike scopes that can reach deep within a confined robotic body without disassembling it. After all, even human surgery increasingly relies on robotics and machine assistance. The structured nature of robotic bodies and their well-defined electronic innards makes testing and repairing them far easier for machines than for humans. Putting this all together, the accepted wisdom that the coming automation revolution will require an army of new human jobs to design, install and maintain those new AI and robotic systems is not necessarily true, and as our systems improve they may not require humans at all.
ed4f436a0c4de7e5c1042de05ad8f977
https://www.forbes.com/sites/kalevleetaru/2019/05/05/facebooks-edge-ai-content-scanning-brings-nsa-style-surveillance-and-censorship-to-the-planet/?fbclid=IwAR3a9xAiQOTXcR1fLPx9aZGRpqbTkKKclAQuSpOJQr3FqI5dgyV17X_eDcY
Facebook's Edge AI Content Scanning Brings NSA-Style Surveillance And Censorship To The Planet
Facebook's Edge AI Content Scanning Brings NSA-Style Surveillance And Censorship To The Planet Perhaps the most Orwellian of all of Facebook’s presentations at F8 last week was a little-noticed presentation about the company’s huge investments towards performing content moderation directly on users’ phones, allowing Facebook to scan even encrypted person-to-person WhatsApp messages for content Facebook or repressive governments dislike. Today Facebook scans posts after they are uploaded to its central servers, but as the company moves towards a decentralized person-to-person encrypted communications world, it is aggressively eyeing moving content filtering directly to users’ phones. Coupled with previous statements from the company that it considers a user’s private non-uploaded camera, microphone and photo gallery to be fair game for Facebook to search without notification or permission, Facebook’s shift to the edge sounds so Orwellian it makes the NSA spying efforts disclosed by Edward Snowden sound like child’s play. Are we really rushing towards a world in which Facebook-produced AI algorithms will be running directly on our phones, monitoring every second of our lives, eliminating encryption, preventing us from seeing or saying anything the company doesn’t like and even sending alerts when we utter an unauthorized phrase? Is this truly to be our future? At first glance, the F8 talk “Applying AI to Keep the Platform Safe” seems like a typical technical deep learning presentation. A series of Facebook engineers take the stage to talk about the complexities of running large advanced AI models on mobile hardware, from model compression and quantization to power consumption to minimizing content hashing signature databases to all of the usual topics found in the kind of engineering talks found at any AI conference. Look a bit closer at the examples the company offers and the ramifications of the vision it presents and a whole new Orwellian world emerges. As Facebook has publicly pivoted towards a new “privacy-first” vision of its platform, transitioning from public web host to private encrypted communications provider, there has been considerable discussion regarding how the company will be able to enforce its content moderation rules and create the behavioral and interest models it needs to sell ads, when everything that reaches its servers is encrypted. I have long suggested that Facebook would inevitably turn to the edge to circumvent encryption, running its moderation and advertising models directly on users’ devices. It turns out that is exactly what they intend to do. Terrorist organizations have been rapid adopters of off-the-shelf encrypted messaging platforms, leveraging the protections of free military-grade encryption to shield their recruitment and operational planning from authorities. This has presented a unique conundrum to social media companies whose products have been repurposed to support terrorism: how can they filter for terrorist content when everything that reaches their servers is encrypted? The answer, as I’ve long noted, is to perform that filtering on users’ devices themselves. The original unencrypted message can be scanned on the sender’s phone before encrypting it for transmission and the decrypted message can be rescanned on the recipient’s device after it has been decrypted for display. In its presentation, Facebook discusses its advances in performing content signature scanning directly on users’ phones. 
A database of signatures of known disallowed content is uploaded to the user’s phone and every message they attempt to send is scanned against this database directly on the user’s phone before the message is accepted for delivery to Facebook. Even encrypted messages can be scanned in this way prior to encryption, eliminating the ability of bad actors to share known illicit content through encrypted channels. While Facebook uses the example of child exploitation content in its presentation, the same process could easily be used to combat the sharing of known terrorism content. Yet, the company goes far beyond simple known content blocking, towards describing a future where its entire AI-powered content moderation infrastructure lives on users’ phones instead of Facebook’s servers. The company would regularly push out updates to a gallery of AI algorithms and signature databases that would reside on users’ phones and scan all of their content before it is posted or sent as a message. A private text-only WhatsApp message between friends that violates Facebook’s content rules would be deleted on the sender’s device before it can ever be sent. In short, by moving content moderation to the edge, Facebook will no longer be deleting bad content hours, days or even weeks after it has gone viral and spread to the far corners of the web – it will delete posts before they can ever be sent in the first place. While few would shed tears about a terrorist no longer able to share recruitment propaganda or a hate group inciting violence against a minority group using WhatsApp, the fact that Facebook’s moderation algorithms are being built in the dark means we have no visibility into just what will constitute prohibited speech in the eyes of Facebook’s increasingly AI-driven future. Will any discussion of government regulation of Facebook or new data privacy laws be banned as “unacceptable speech” and every encrypted WhatsApp message mentioning something negative about Facebook deleted before it can be sent? It is unfortunately not a far stretch to see Facebook take its “logo use rules” that were at the root of its deletion of Sen. Elizabeth Warren’s ads and encode those into an algorithm that deletes any private encrypted message that mentions the company or references its logo in a context that has not been preapproved by the company’s PR office. What happens as governments themselves awaken to the idea of preemptively stopping all private communications they dislike? It would take but a simple court order for a country to forcibly compel Facebook to add an additional set of content filters for its citizens to ban them from sending or receiving messages disliked by that government. A repressive regime could ban all conversation about democracy or rights and all criticism of the government. A country where being LGBT carries the death sentence could ban all mentions of LGBT culture. A government working with Facebook to ban all terrorist content within its borders could easily utilize its same national security laws to force the company to ban all pro-democracy messaging it views as a threat to its existence. Perhaps most troubling of all, however, is a question raised by one of Facebook’s engineers in the video. She notes that when content moderation is performed directly on users’ devices without any data being transferred to Facebook, the company has no way of knowing when violations occur or what the violating content was. 
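To make the mechanics concrete, here is a purely illustrative sketch of the scan-before-send signature check described above. The exact-hash scheme, the database contents and the function names are all assumptions for illustration; real deployments reportedly rely on perceptual hashes (in the style of PhotoDNA) for images and video rather than exact digests, and nothing here reflects Facebook's actual implementation.

    import hashlib

    # Hypothetical signature database pushed to the device by the platform.
    BANNED_SIGNATURES = {
        "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
    }

    def signature(payload: bytes) -> str:
        return hashlib.sha256(payload).hexdigest()

    def send_message(plaintext: bytes, encrypt, transmit) -> bool:
        # The scan happens before encryption, so filtering still works on E2E channels.
        if signature(plaintext) in BANNED_SIGNATURES:
            return False  # blocked on-device; nothing is ever transmitted
        transmit(encrypt(plaintext))
        return True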
If a user attempts to send a rules-violating post via encrypted WhatsApp message, the on-phone AI content moderation algorithm would flag the post and prevent it from being sent. However, Facebook itself would not have a record that the particular user attempted to post a piece of violating content and would not have a copy of that piece of content to help fine-tune its algorithms over time. The engineer raises the question of how Facebook might receive alerts of attempts to post banned messages and how it could receive a copy of that content but moves on without answering that question. This raises the troubling specter that Facebook’s on-phone moderation algorithms might ultimately be designed to send an alert back to the company every time they block a piece of content, along with a copy of the offending message. Making matters worse, earlier this year the company clarified that it views any user who installs its app on their phone as granting the company the legal right not only to track their realtime location through their phone’s GPS, but more disturbingly, to access their camera, microphone and non-uploaded photos residing on the phone for any purpose. The terms governing the company’s access to our phones are extraordinarily broad, with the company specifically calling attention to the line in its Data Policy stating that it has the right to access “information you allow us to receive through device settings you turn on, such as access to your GPS location, camera or photos” and use that information “to verify accounts and activity, combat harmful conduct, detect and prevent spam and other bad experiences, maintain the integrity of our Products, and promote safety and security on and off of Facebook Products … [including] investigate suspicious activity or violations of our terms or policies, or to detect when someone needs help.” Facebook admitted earlier this year to secretly tracking the realtime locations of users it deems a threat to the company, without their knowledge or permission. It is not hard to imagine the company going a step further and turning its two billion users’ devices into a global surveillance network beyond the wildest dreams of the world’s most repressive governments. Edge AI is the key to that Orwellian vision. Today if Facebook wanted to scan all of the photos on a user’s phone to see if they’ve ever taken photos of Facebook properties or posed with a weapon or if they mention Facebook in their voice phone calls, Facebook would have to upload all of that data back to its servers, which would likely violate any number of wiretapping laws, not to mention saturate the user’s mobile data quota. Instead, once Facebook’s on-phone content scanning algorithms are sufficiently robust, it no longer has to ship anything off of the device. Image recognition algorithms can scour the user’s private photo gallery and monitor every photo they take, including the ones they never share with anyone. Voice recognition algorithms can monitor their phone calls and flag every mention of Facebook and the context it is mentioned in. The microphone could even be left permanently on to scan the surrounding background environment 24/7, creating a globally distributed network of billions of always-on microphones transcribing global private conversations. In many ways, Facebook’s shift towards a “privacy first” encrypted future might better be described as “moving Facebook’s surveillance to the edge.” The company did not respond to several emails requesting comment. 
Putting this all together, this is not some imaginary science fiction dystopia of the faraway future. It is the very real Orwellian world Facebook is bringing to us today. The underlying technologies are all here and as the company’s F8 presentation vividly illustrates, it is investing heavily and making rapid strides towards this future. In the end, perhaps the transition towards a “privacy first” future was actually Newspeak for our first step towards 1984.
b63af359141e10d6995ffe3b140a5394
https://www.forbes.com/sites/kalevleetaru/2019/06/07/the-eus-right-to-be-forgotten-shows-once-again-how-little-the-eu-understands-about-the-web/
The EU's 'Right To Be Forgotten' Shows Once Again How Little The EU Understands About The Web
The EU's 'Right To Be Forgotten' Shows Once Again How Little The EU Understands About The Web As governments around the world seek greater influence over the Web, the European Union has emerged as a model of legislative intervention, with measures ranging from GDPR to the Right to be Forgotten to new efforts to allow EU lawmakers to censor international criticism of themselves. GDPR has backfired spectacularly, stripping away the EU’s previous privacy protections and largely exempting the most dangerous and privacy-invading activities it was touted to address. Yet it is the EU’s efforts to project its censorship powers globally that present the greatest risk to the future of the Web and demonstrate just how little the EU actually understands about how the internet works. The EU’s efforts to legislate its way to the role of global internet censor have been in the news again this week with an advocate general opinion from the EU Court of Justice arguing that the EU should have the right to censor content globally. In reaching for the ability to control what citizens of other countries say outside EU borders, the EU risks setting an extraordinarily dangerous precedent that would permit the world’s repressive regimes to control global speech. This week’s opinion is not the EU’s first foray into attempting to exercise the powers of global censor. Perhaps most famously, the EU has increasingly attempted to expand the impact of its so-called “Right to be Forgotten” protections to include the right to force search engines to remove content globally rather than only within the EU. If the EU were able to force search engines in the United States to remove all content from their US search indexes that EU citizens wished removed, the EU would then have the ability to control what American citizens were able to see. The problem with this power is that in our globalized world, the censorship rights that one country obtains are the censorship rights that all countries obtain. If the EU were granted the legal right to forcibly remove Americans’ search engine access to content its citizenry disliked, China would also gain the legal right to forcibly remove search access by Americans to anything it disliked, including coverage of Tiananmen Square. While the Right to be Forgotten applies narrowly to private citizens being permitted to remove selected content regarding themselves, its power comes from its enforcement by the governments of the EU. If a private citizen demands under the Right to be Forgotten that a search engine block access to certain content and the search engine refuses, the citizen can turn to the government to legally compel that request. In short, the Right to be Forgotten is exercised by citizens but is enforced through powers held by the government to compel search engines to comply with those requests. Put another way, censorship power flows from government to the citizenry but ultimately rests with and is enforced by the government. While those powers are today narrowly exercised on behalf of citizens, they are ultimately held by the government, which could over time seek to exercise them more broadly, especially turning to the courts for expansions relating to urgent national security matters. Most importantly, once a government establishes international legal precedent, all governments worldwide have a basis upon which to require similar concessions. 
The EU government may seek new international censorship powers to exercise on behalf of its own citizenry, but if it is successful, other governments that acquire such powers may exercise them instead on behalf of the state. If the EU gains international censorship powers to use on behalf of its citizens, China, Russia, North Korea, Iran and other states would also gain international censorship powers to use on behalf of their own governments. This would mean that China could force search engines to remove all references to Tiananmen Square globally, while Russia could demand that all content critical of its government be stripped from access within the EU. Yet the EU’s focus on citizen requests establishes the precedent that the state’s interests need not be represented in a censorship request. In fact, a citizen’s removal request may have nothing to do with their government at all. Why does this matter? It matters because in the hands of a repressive state it would afford that state the right to censor conversation in other nations as a way to conduct information warfare and foreign influence. Russia, for example, could utilize such powers to force search engines to remove any content critical of Brexit from within the EU, including that published by EU governments. Similarly, Russia could force search engines to remove within the EU any content that presents a positive image of NATO. Once governments have the right to dictate what other countries are permitted to see and say, it is almost a given that some of those governments will turn to those powers to censor speech not only critical of themselves but to intervene in the domestic affairs of allies and adversaries alike. If the EU can force search engines, social media companies and other Web companies in other countries to censor speech they disagree with, so too can China, Russia, North Korea and Iran censor speech in the EU that they disagree with. Today search engines and social platforms are able to resist such demands by pointing out that no nation has such powers. If the EU succeeds in its quest, however, it will be only a matter of time before Web companies are forced to offer those same powers to other nations. This raises the question of why the EU believes it should pursue these powers even with the knowledge of how they will ultimately empower repressive governments to silence and suppress speech within the EU. The EU must surely recognize that if it acquires the power to overrule the informational sovereignty of other nations it will similarly be ceding its own sovereignty and placing its own citizens at risk. To gain greater insight into how the EU has weighed this tradeoff, the EU Commission was asked whether it would support China’s right to force search engines to remove all content related to Tiananmen Square from access within the EU, as well as Russia’s right to censor all content within the EU critical of its government. The Commission was also asked whether it would support Russia’s right to force search engines to remove all anti-Brexit content from access within the EU in order to skew public opinion right before a major vote or negotiation. Finally, if the EU did not support these rights, the Commission was asked why it felt only the EU should have the right to remove content globally, while other nations should not be granted that right and why it believed this was a tractable request in a globalized world. 
In case the Commission’s response might be that the Right to be Forgotten applied in the EU only to citizen requests, not state requests, it was noted that governments like China and Russia would be unlikely to limit their censorship powers exclusively to citizen requests and would be far more likely to wield those powers for state needs. Unsurprisingly, despite this clarification, the EU Commission still responded by emphasizing that in its current form the Right to be Forgotten can only be exercised by individuals. When it was again reminded that other governments might not limit their global censorship powers in this way, the Commission declined to comment further. Asked again whether irrespective of the EU’s definition of the Right to be Forgotten, if it would support these other states gaining such censorship powers, the Commission again declined to comment. Put another way, the EU Commission sees its efforts to gain global censorship powers narrowly through its own needs without understanding that in a globalized world, the powers of one government over the Web become the powers of all governments over the Web. The EU is not alone in its failure to understand how the modern Web functions, but the danger is that because of its misunderstanding of how the Web works and the technological and legal frameworks that make it operate, the EU is aggressively working towards granting the world’s repressive regimes the right to control the information landscape across the world. Putting this all together, the EU’s failure to understand how the Web works and the unintended consequences of its actions reminds us about the grave dangers when non-technical politicians seek to regulate complex technologies in a globalized world. In the end, the EU shows us once again how little it understands about the modern Web. Most importantly, it shows us that the end of free speech online will come not from the world’s dictatorships, but from the efforts of the EU.
be7c9866f988df964f8bca044e8ef6a0
https://www.forbes.com/sites/kalevleetaru/2019/06/09/comparing-googles-ai-speech-recognition-to-human-captioning-for-television-news/
Comparing Google's AI Speech Recognition To Human Captioning For Television News
Comparing Google's AI Speech Recognition To Human Captioning For Television News Most television stations still rely on human transcription to generate the closed captioning for their live broadcasts. Yet even with the benefit of human fluency, this captioning can vary wildly in quality, even within the same broadcast, from a nearly flawless rendition to near-gibberish. Even the best human captioning often skips over words during fast speech or misspells complex or lengthy names. At the same time, automatic speech recognition has historically struggled to achieve sufficient accuracy to entirely replace human transcription. Using a week of television news from the Internet Archive’s Television News Archive, how does the station-provided, primarily human-created closed captioning compare with machine transcripts generated by Google’s Cloud Speech-to-Text API? Automated high-quality captioning of live video represents one of the holy grails of machine speech recognition. While machine captioning systems have improved dramatically over the years, there has still been a substantial gap holding them back from fully matching human accuracy. This raises the question of whether the latest generation of video-optimized speech recognition models can finally achieve near-human fluency. Google’s Cloud Speech-to-Text API offers several different recognition models, including one specifically tuned for video transcription. That in turn raises the question of how well this API might perform on the chaotic rapid-fire environment of television news that can switch from studio news reading to on-scene reporting to large panels of experts talking over one another to fast-talking advertisements. Using the station-provided captioning as a baseline, how different is the machine transcript? To explore what this might look like, CNN, MSNBC and Fox News and the morning and evening broadcasts of San Francisco affiliates KGO (ABC), KPIX (CBS), KNTV (NBC) and KQED (PBS) from April 15 to April 22, 2019, totaling 812 hours of television news, were analyzed using Google’s Cloud Speech-to-Text transcription API with all of its features enabled. To test how human captioning versus automated transcription might affect machine analysis of the resulting text, both were processed through Google’s Natural Language API for entity extraction. Google’s Natural Language API identified an entity every 6.97 seconds in the automated transcripts, but only one entity every 11.62 seconds in the station-provided captioning. The graph below compares the average number of seconds per entity across the seven stations between the automated transcripts and the station-provided captioning. The average seconds per entity by station between Speech-to-Text and closed captioning. Kalev Leetaru Immediately noticeable is that the automated transcripts consistently produce a greater density of recognized entities compared with the station-provided captioning. This ranges from 1.4 times more for Fox News to 2.2 times more for PBS. The primary reason for this appears to be that the station-provided captioning is entirely uppercase, while the machine transcripts are correctly capitalized, using the linguistic capitalization model built into Google’s Speech-to-Text API. Google’s Natural Language API relies on proper capitalization to correctly identify entities and their textual boundaries and to distinguish proper nouns from ordinary words. 
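As a rough illustration of how such a seconds-per-entity figure can be derived, the sketch below runs a transcript and its corresponding captioning through Google's Cloud Natural Language Python client and divides the captioned airtime by the number of entities returned. The file names and the airtime value are hypothetical placeholders, and exact client-library signatures may differ slightly by version; this is a minimal sketch rather than the pipeline used for the analysis above.

    # Minimal sketch: entity density (seconds per entity) for two text files.
    from google.cloud import language_v1

    def seconds_per_entity(path, captioned_seconds):
        text = open(path, encoding="utf-8").read()
        client = language_v1.LanguageServiceClient()
        document = language_v1.Document(
            content=text, type_=language_v1.Document.Type.PLAIN_TEXT, language="en")
        entities = client.analyze_entities(document=document).entities
        return captioned_seconds / len(entities) if entities else float("inf")

    # Hypothetical one-hour broadcast: machine transcript vs. station captioning.
    print("transcript :", seconds_per_entity("broadcast_transcript.txt", 3600))
    print("captioning :", seconds_per_entity("broadcast_captioning.txt", 3600))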
The significant difference between the transcript and captioning entities for PBS appears to be due to a particularly high density of typographical errors in the station-provided captioning, which both affected entity mentions themselves and sufficiently interrupted the grammatical flow of the text such that it impacted the API’s ability to identify entity boundaries. In fact, examining the station-provided captioning word-by-word for each of the stations, the graph above reflects to some degree the level of error in the captioning, with the closer it matches the machine transcript, the higher its fidelity and the lower its error rate. An even greater driving factor is that captioning typically does not include advertisements, while the machine transcript includes all spoken words. This means that stations devoting a greater proportion of their airtime to ads will show a greater difference. One limitation of this graph is that it shows only the density of entity mentions, not how well they match up between the captioning and transcript. They could have a similar number of entities, but due to human or machine error the extracted entities could be completely different from one another. To test this, a master histogram of all extracted entities was compiled and the Pearson correlation computed for each station between its captioning entities and transcript entities, seen in the graph below. Only entities that did not include a number and appeared at least five times across the combined airtime of the seven stations were considered. The Pearson correlation between captioning entities and transcript entities. Kalev Leetaru Across all seven stations the total correlation was r=0.95, ranging from 0.96 for CNN and MSNBC and 0.95 for Fox News down to 0.75 for CBS. Interestingly, the three national stations have the highest correlations and the four network stations the lowest. One possible explanation is that since the network stations included only the morning and evening broadcasts, the advertising airtime for these stations constituted a larger portion of the total monitored volume. Comparing the captioning and transcripts through their API-extracted entities offers a glimpse at how their differences can affect downstream machine understanding algorithms. At the same time, capitalization and typographical errors can have a profound effect on today’s textual deep learning systems, as seen in the results above. What might the same comparisons look like when applied to the text itself? The chart below shows the total number of unique words by station, illustrating that for most stations there is a similar vocabulary between the machine transcript and primarily human-derived closed captioning. The only outlier is PBS, whose captioning has 1.6 times more unique words than the machine transcript. A closer inspection reveals nearly all of these to be typographical errors, again reflecting the higher error rate of its original captioning. The number of distinct words per station between captioning and transcript. Kalev Leetaru Looking at the total number of uttered words, the graph below shows that for all stations there were more total words recorded in the transcript than in the closed captioning, primarily reflecting the uncaptioned advertising airtime. This is one of the reasons that PBS has nearly equal spoken word counts. 
The much larger number of words on CNN, MSNBC and Fox News reflects that their entire airtime for the week was examined here, while the four network stations only included their morning and evening broadcasts. The total number of words per station between captioning and transcript. Kalev Leetaru The graph below shows the Pearson correlation of the captioning and transcript vocabularies. Only words that did not include a number and appeared at least five times across the combined airtime of the seven stations were considered, leaving a total of 27,876 distinct words. All seven stations had correlations higher than 0.989, indicating that despite their differences, the total vocabulary use of the captioning and transcripts was extremely similar. The Pearson correlation between captioning vocabulary and transcript vocabulary. Kalev Leetaru Despite their similar vocabularies, the real test of how similar the station-provided captioning and the machine-generated transcripts are is to perform a “diff” between the two. For each broadcast both the captioning and machine transcript were converted to uppercase and all characters other than ASCII letters were converted to spaces. The resulting text was split into words on space boundaries and the two files run through the standard Linux diff utility. The total number of words flagged as having changed was divided by the total number of compared words, yielding a change density. In total, the captioning and transcripts matched for around 63% of the total words, with the stations falling in a fairly narrow band from 55% similar (CBS) to 68% similar (PBS and CNN). Total similarity between captioning and transcripts as a percentage of words, as computed by the Linux “diff” utility. Kalev Leetaru These percentages seem unexpectedly low given the quality of modern speech recognition. A closer inspection of the differences explains why: the machine transcript typically offers a more faithful and accurate rendition of what was actually said than the station-provided captioning, which is typically transcribed by a human. For example, the station-provided captioning for this CNN broadcast introduces a set of panelists as “Dana Bash, Crime and Justice Reporter Shimon Prokupecz and Evan Perez.” In contrast, the machine-generated transcript has the actual wording as spoken on the air: “CNN’s Chief Political Correspondent Dana Bash, CNN Crime and Justice Reporter Shimon Prokupecz and CNN Senior Justice Correspondent Evan Perez” which includes their full titles. Similarly, the very next minute of that same broadcast includes several differences, including the captioning’s “guide post” versus the machine’s correct transcription of the plural “guideposts.” Likewise, while the captioning includes the phrase “that he told me” the machine transcript correctly records that the panelist actually repeated herself, stating “that he that he told me.” Neither captioning nor transcripts typically record speech disfluencies, with Google’s API designed to ignore fillers like “um” and “er.” This suggests that a major driving force behind this low agreement between human and mechanized transcription may be the much higher fidelity of the machine in recording what was actually said word-for-word. An even greater influence is the fact that the machine transcripts include advertisements, while the captioning does not. This suggests that a better comparison would be to exclude any differences involving added text found only in the machine transcript. 
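As a minimal sketch of the change-density calculation just described (and of the adjusted comparison suggested above that ignores words appearing only in the machine transcript), the snippet below uses Python's difflib in place of the command-line diff utility; the file names are hypothetical and the exact normalization and counting choices are assumptions rather than the article's precise implementation.

    import difflib
    import re

    def change_density(caption_path, transcript_path, exclude_additions=False):
        def words(path):
            # Uppercase and replace everything that is not an ASCII letter with a space.
            text = open(path, encoding="utf-8").read().upper()
            return re.sub(r"[^A-Z]+", " ", text).split()

        cap, tra = words(caption_path), words(transcript_path)
        changed = 0
        for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, cap, tra).get_opcodes():
            if op == "equal":
                continue
            if op == "insert" and exclude_additions:
                continue  # words found only in the machine transcript (e.g. ads)
            changed += max(i2 - i1, j2 - j1)
        total = max(len(cap), len(tra))
        return changed / total if total else 0.0

    # Hypothetical files for one broadcast; the second call mirrors the adjusted comparison.
    print(change_density("broadcast_captioning.txt", "broadcast_transcript.txt"))
    print(change_density("broadcast_captioning.txt", "broadcast_transcript.txt",
                         exclude_additions=True))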
This still counts words from the captioning that are missing in the transcript and words that are present in both but spelled differently. This results in the graph below, showing an average agreement of 92%, ranging from 87% for PBS to 93% for CNN and MSNBC. Total similarity between captioning and transcripts as a percentage of words, as computed by the Linux “diff” utility, excluding words that appeared only in the transcript and not in the captioning. Kalev Leetaru This makes it clear that the majority of differences between the two are the addition of advertising narration in the machine transcript and the higher fidelity of the machine in capturing details such as repetition and full spoken titles. Looking more closely at the remaining differences, many are actually typographical errors in the human-produced captioning. Some remaining differences revolve around certain newsreader and panelist names that the machine attempted to spell phonetically and panelist mispronunciation of names like Mueller as “mother.” Thus, the actual alignment between human and machine is much greater than 92%. Most importantly, the high degree of error in the human-generated captioning means it is not technically a gold standard. Thus, the 8% disagreement rate between the human and machine does not mean the machine has an 8% error rate. A considerable portion of that error actually resides in the human captioning, rather than the machine transcript. Google’s Speech-to-Text API actually supports the use of external domain adaptation dictionaries that can provide correct spellings of specific terminology or proper names. In the future, the full list of each station's newsreaders and anchors, as well as the names of major figures currently in the news, could all be added to these dictionaries to ensure their names are correctly recognized and spelled by the API. Putting this all together, automated speech recognition has improved dramatically over the last few years. Comparing the largely human-generated closed captioning of a week of television news against Google’s completely automated transcripts generated by its off-the-shelf Cloud Speech-to-Text API, the two are more than 92% similar after accounting for the inclusion of advertising and the higher fidelity of the machine transcript. In fact, the machine actually beats the human-produced captioning along almost every dimension, from its higher fidelity to what was actually said to its lower error rate, lack of typographical mistakes, proper capitalization and overall higher quality. While the tests here utilized Google’s API without any customization, the creation of a simplistic dictionary of common names appearing on each station and major names in the news at the moment would fix many of the remaining errors. The machine transcripts still contain errors, but we are now at a point where fully automated transcripts can rival the accuracy of real-time human transcription for television news content. As these models continue to improve, it will only be a matter of time before machine transcription becomes more accurate and robust than human keying. In the end, these graphs show us just how far the AI revolution has come. I’d like to thank the Internet Archive and its Television News Archive, especially its Director Roger Macdonald. I’d like to thank Google for the use of its cloud, including its Video AI, Vision AI, Speech-to-Text and Natural Language APIs and their associated teams for their guidance.
895412d359f00b3fd0a611ce1a505eb2
https://www.forbes.com/sites/kalevleetaru/2019/06/23/automated-speech-recognition-yields-more-consistent-results-than-human-captioning/
Automated Speech Recognition Yields More Consistent Results Than Human Captioning
Automated Speech Recognition Yields More Consistent Results Than Human Captioning When discussing the rise of deep learning, the accuracy of automated approaches is typically compared to the gold standard of flawless human output. In reality, real-world human performance is actually quite poor at the kinds of tasks typically being considered for AI automation. Cataloging imagery, reviewing videos and transcribing are all tasks where humans have the potential for very high accuracy but the reality of their long repetitive mind-numbing hours sitting in front of the screen means human accuracy fades rapidly and can vary dramatically from day to day and even hour to hour. For all their accuracy issues, automated systems promise far more consistent results. Speech recognition is an area where humans at their best still typically outperform machines. In real-life real-time transcription tasks like generating closed captioning for television news, however, it turns out that commercially available systems like Google’s Speech-to-Text API are actually almost as accurate as their human counterparts and are far more faithful in their renditions of what was said. Look more closely at the captioning of some stations and an interesting pattern emerges: the quality of the human captioning can vary from day to day and even over the course of a single day. Real-time transcription is typically outsourced to third party companies who employ contractors to type up what they hear. Quality can vary dramatically between contractors and even the same individual might perform better in the morning when they are more rested or just have a bad day. Different transcriptionists can exhibit different kinds of errors, meaning the same word can be spelled correctly for part of the day and exhibit far more typographical errors during the rest of the day. Some stations tape their morning shows and rebroadcast them as-is directly from tape in the afternoon, but may choose to retranscribe them in the afternoon on the off chance that breaking news forces the station to interrupt the taped show. This means that the exact same show may have different typographical errors in the afternoon than it did in the morning. In short, humans are imperfect and the variety of individuals involved in creating the transcripts for real-time television can lead to a highly variable transcription error rate. In contrast, automated speech to text systems have perfect consistency. The same tool run on the same video again and again will result in the same output each time. That output may have errors in it, but those errors will be the same each time it is run. This error consistency means that many misspellings can be fixed simply by adding the correct spelling to the tool’s custom dictionary. In other cases, more complex classes of errors can be fixed over time with algorithmic updates. Human performance can be improved over time with training, feedback and experience, but each individual has a maximum accuracy they typically perform at and that accuracy can vary from day to day and hour to hour. Scaling large projects across teams of humans will result in highly uneven accuracy. In contrast, an unlimited number of machines can be launched to process a flood of incoming content, with every instance performing precisely the same as its neighbors and yielding the exact same result on the same video no matter how many times it sees it. 
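As an illustration of the kind of custom-dictionary fix described above, the sketch below passes a list of proper names as phrase hints to Google's Cloud Speech-to-Text API so that a consistently misrecognized spelling is corrected identically on every run. The audio location and the list of names are hypothetical, and exact client-library details may vary by version; this is a sketch of the general approach rather than any production configuration.

    from google.cloud import speech

    client = speech.SpeechClient()

    # Hypothetical names the recognizer has consistently misspelled in the past.
    hints = speech.SpeechContext(phrases=["Shimon Prokupecz", "Mueller", "KQED"])

    config = speech.RecognitionConfig(
        language_code="en-US",
        model="video",                      # model tuned for broadcast-style audio
        enable_automatic_punctuation=True,
        speech_contexts=[hints],            # the custom dictionary of correct spellings
    )
    audio = speech.RecognitionAudio(uri="gs://example-bucket/broadcast.flac")

    # Long-running recognition, since broadcasts run well past the one-minute limit.
    operation = client.long_running_recognize(config=config, audio=audio)
    for result in operation.result(timeout=3600).results:
        print(result.alternatives[0].transcript)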
Putting this all together, the accuracy of deep learning systems is often compared to a theoretical utopian world of perfect humans performing perfectly over an indefinite period of time. The reality is that imperfect humans yield error-prone results whose accuracy can vary wildly over the course of even a single day. Machines at their best may not yet achieve the accuracy of humans at their best, but the reality is that machines at their ordinary can today typically outperform humans at their ordinary and with the added benefit of perfect consistency. In the end, instead of comparing machines against an idealized benchmark that simply doesn’t exist, we should recognize that machines are increasingly reaching the point where they can match or even exceed the very imperfect human results we rely upon today.
3004f53b399a22a8414865ae0588a408
https://www.forbes.com/sites/kalevleetaru/2019/07/09/how-were-social-media-platforms-so-unprepared-for-fake-news-and-foreign-influence/
How Were Social Media Platforms So Unprepared For 'Fake News' And Foreign Influence?
How Were Social Media Platforms So Unprepared For 'Fake News' And Foreign Influence? Perhaps the most existential question of the digital age is just how social media platforms were caught so unprepared for the rise of “fake news,” misinformation, disinformation, digital falsehoods and foreign influence. How could it be that the companies innovating our future could fail so miserably to foresee all of the ways it could go wrong? Why was Silicon Valley so fixated on building tools to let the world speak and so blind to all of the ways in which those tools would unleash the world’s hate and toxicity to overpower all else? How could it be that the companies that helped build the modern Web could entirely miss foreign adversaries repurposing their inventions into the ultimate tools of democratic destruction? The answer is that despite wielding almost unprecedented power over our modern world, Silicon Valley operates largely in a vacuum, blinded by its hubris and fixated on building without ever stopping to ask how those tools might go wrong. Is there any hope for the future? It was just over half a decade ago that I sat in a room with senior policy representatives of several of the large social media companies as they were warned by organization after organization about the myriad ways in which repressive governments around the world were co-opting their tools to silence internal dissent and to influence debate in other countries, such as repurposing the DMCA to remove foreign criticism. Warning after warning involved the way in which adversarial governments were increasingly utilizing social media to interfere in the domestic affairs and debates of other countries. I myself warned that it was only a matter of time before a foreign adversary like Russia or China would adapt their foreign influence playbook to conduct a massive-scale influence campaign that could undermine or even impact the US election. The response from the companies in the room? Complete and utter dismissal. The companies saw the threats they were hearing primarily through the lens of cyberattacks, which they felt more than prepared to defend against. Yet despite having spent hours listening to all of the ways foreign governments were already repurposing their platforms without utilizing any form of cyber activity, simply by harnessing armies of human operators and bots to spread falsehoods and influence overseas debate, the companies simply dismissed out of hand the idea that a foreign government could ever use their platforms to influence the domestic debate or elections in the United States. As one of the companies argued, it employed the greatest minds in the world and had the greatest alerting systems in the world and so had no concerns whatsoever that it could ever be utilized for foreign influence campaigns. Yet even the warnings of governments observing these very attacks as they began fell on deaf ears. In 2015, in meetings with representatives of several European nations on the growing exploitation of social media by adversarial governments, a common refrain was Silicon Valley’s refusal to listen to the myriad warnings from Europe about Russia’s misuse of their platforms and the kinds of influence operations Europe was already deep in the midst of combating. 
When I spoke again in December 2016 in Europe at a gathering of the continent’s military and intelligence officials on the topic of information warfare and foreign influence in the digital era, it was remarkable how many governments noted that they had repeatedly warned the social media companies with a stream of alerts and real-time observations of the influence operations and evolving tactics they were seeing, but again received little interest from a Silicon Valley whose hubris would not permit it to accept that its tools might be misused for harm. Of course, it was just the month prior that Mark Zuckerberg had famously proclaimed that “Personally I think the idea that fake news on Facebook, of which it’s a very small amount of the content, influenced the election in any way is a pretty crazy idea” and in doing so popularized the term “fake news.” Silicon Valley has always been extraordinarily insular, defined by its pursuit not of big ideas to solve the world’s problems, but of profit-minded pursuits to ease the lives of the well-to-do. The Valley isn’t focused on solving world hunger or eliminating poverty, it is focused on getting your fast food to you faster, producing internet-connected organic juice machines and ensuring a steady supply of funny cat videos. Most importantly, the Valley is populated largely by technologists who have little understanding of the world and how it functions beyond the computer code on their screens. While social media companies have made strides in expanding their ranks with policy experts that have more experience in governmental misuse of their platforms and are engaging slightly more cooperatively with governmental experts, there is still a fundamental disconnect between the vast expertise of the world’s democratic governments that are intimately aware of how social platforms are being misused for harm and the willingness of those platforms to listen and make the necessary changes to how their platforms function. After all, many of the necessary changes could negatively impact their economic models, given how much they profit monetarily from the spread of digital falsehoods. Putting this all together, social media platforms had every ability to foresee the rise of “fake news,” misinformation, disinformation, digital falsehoods and foreign influence. In fact, many of them were being deluged with warnings and tactical reports from governments on the front lines of combating those abuses. Instead, their hubris would not allow them to admit that their creations for good had been repurposed for evil or that they could have missed such highly visible misuse. Unfortunately, things have gotten little better. Beneath their new policies, war rooms and cyber and AI hires, the reality is that Silicon Valley is still at the end of the day ruled by technologists guided by technological determinism and who lack the adult supervision and diversity of viewpoints to understand the existential ways in which society shapes their platforms and harnesses them for bad as well as for good. In the end, until Silicon Valley grows up out of Neverland and finally starts listening, nothing will change.
a5039754093bc91158d98ea862a7b030
https://www.forbes.com/sites/kalevleetaru/2019/08/07/how-fast-do-people-speak-on-television-news/?sh=1e8ce76f47fd
How Fast Do People Speak On Television News?
How Fast Do People Speak On Television News? Television news is typically associated with staid newscasters reading from teleprompters at a slow, steady, measured pace in a quiet newsroom alternating with reporters live on the scene of breaking news, speaking more quickly and interspersed with fast-talking witnesses. What can a decade of closed captioning tell us about how fast people really speak when it comes to television news? Over the past decade, the Internet Archive’s Television News Archive has monitored more than 1.6 million broadcasts from 170 stations totaling 4.9 billion seconds of airtime, of which 3.9 billion seconds (79%) was captioned, containing a total of 9.2 billion words of captioning. Not all stations were monitored for the entire decade, with stations coming and going over time, but the end result is a truly unique look at television news in the United States. Such an extraordinarily rich archive covering such a long period of time offers unprecedented opportunities to explore macro-level patterns in how we understand the world around us. Using the inventory files for this massive archive, which record the list of every monitored show, its total airtime, captioned airtime and total number of captioned and unique captioned words, it is relatively straightforward to explore macro-level patterns in how those stations communicate the world to their viewers. The question of how fast television anchors speak is a critical one in an era where stations are increasingly exploring the use of automated captioning. The timeline below shows the average number of words spoken per second on CNN by day from July 2, 2009 to June 30, 2019, looking only at its captioned airtime. Over the past decade this rate has remained remarkably steady, decreasing ever so slightly through early 2015 and slowly edging back up ever since. Overall this shift amounted to a difference of just around five words per minute between its 2009 and 2019 averages and its 2014 average, illustrating how remarkably stable speaking rates are. In all, CNN’s average and median speaking rates over the past decade are both 2.57 words per second, working out to around 154 words per minute. Words spoken per second on CNN, 2009-2019, using data from the Internet Archive’s Television News Archive. Kalev Leetaru Looking across all 170 stations, there is considerable variation between them. WPSG (The CW Television Network) has the fastest average speaking rate of 3.09 words per second, while KXRM (Colorado Springs’ Fox affiliate) has the slowest at 1.28 words per second. Other than a small number of outliers, it can be seen that the vast majority of stations speak at around the same rate. The average across all 170 stations is 2.38 words per second, with a median of 2.43 words per second. Average speaking rate by station, using data from the Internet Archive’s Television News Archive. Kalev Leetaru The results above can be skewed in part by the fact that the Archive only monitored some of these stations for a brief period of time, that some stations have only selections of their programming monitored, by differences in the length of the average spoken word and by the fact that some stations may provide closed captioning for a greater percentage of their fast-paced, rapid-speaking advertisements than others. They can also be impacted by the quality of the underlying captioning. Thus the numbers above represent estimates based on their closed captioning as monitored by the Internet Archive. 
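To illustrate just how little code such a macro-level question requires, the sketch below computes a per-station average speaking rate from inventory records like those described above; the CSV file and its column names are hypothetical stand-ins for the Archive's actual inventory format.

    import csv
    from collections import defaultdict

    # Hypothetical inventory: one row per broadcast with captioned seconds and word count.
    seconds = defaultdict(float)
    words = defaultdict(float)

    with open("tv_inventory.csv", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            seconds[row["station"]] += float(row["captioned_seconds"])
            words[row["station"]] += float(row["captioned_words"])

    for station in sorted(seconds):
        rate = words[station] / seconds[station]
        print(f"{station}: {rate:.2f} words/sec ({rate * 60:.0f} words/min)")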
Simply by transforming a decade of audiovisual television news broadcasts into textual closed captioning, it becomes possible to answer such macro-level questions with just a few lines of code, offering a reminder of the power of modality transformation. In the end, this relatively simple analysis offers a brief glimpse into the power the analytics revolution offers for understanding the journalistic world in insightful new ways. I’d like to thank the Internet Archive and its Television News Archive, especially its Director Roger Macdonald, and Google for the use of its cloud resources.
f73705389f833889c5a85e4efda0d715
https://www.forbes.com/sites/kalevleetaru/2019/08/21/explainable-ai-could-help-us-audit-ai-startup-claims/
Explainable AI Could Help Us Audit AI Startup Claims
Explainable AI Could Help Us Audit AI Startup Claims Every day I receive a steady stream of pitches from companies and universities large and small touting their latest AI breakthrough. Yet once one looks past the marketing hype and hyperbole and actually tests the tool itself, the results are rarely as glowing as one might hope. In fact, for those on the front lines of applying AI to complex real-world problems, today’s AI solutions are akin to asking toddlers to operate a spacecraft. When putting a new AI tool through its paces, one of the most common outcomes is that the algorithm has latched onto an extraordinarily fragile and inaccurate representation of its training data. As “explainable AI” approaches become steadily more robust, what if companies were asked to subject their AI creations to algorithmic audits and report the results? To the press and public, today’s AI solutions are nothing short of magic. They are living silicon intelligences that can absorb the world around them, learn its patterns and wield them with superhuman precision and accuracy. To those who actually use them in complex real-world scenarios each day, they are brittle and temperamental toddlers who oscillate wildly from remarkable accuracy to gibberish meltdown without warning and whose mathematical veneer masks a chaotic mess of alchemy and manual interventions. In fact, even some major driverless cars still rely upon hand-coded rules for some of their most mission-critical tasks, reminding us that even the most vocal proponents of deep learning still acknowledge its grave limitations. In many domains, the accuracy of deep learning solutions still pales in comparison to traditional Naïve Bayes classifiers and hand-coded rulesets. In our rush to crown deep learning as the de facto technology of the modern era, we too often forget the cardinal rule of checking whether something simpler does the job just as well or even better. Few AI startups publicly compare their deep learning solutions to existing non-neural solutions. Even the academic literature typically compares new deep learning approaches against previous neural approaches rather than existing non-neural solutions. When benchmarks are provided, they typically focus on extreme edge cases that showcase the new technology at its best, rather than the actual real-world content that will constitute 99% of what the algorithm will be used on and upon which it may actually perform far worse than classical approaches. Yet even in cases where deep learning solutions perform markedly better than classical approaches, or in domains like image understanding where there are few non-neural solutions, the lack of visibility into what the algorithm has learned about its training data clouds any understanding of how robust it may be. Explainable AI, coupled with more stringent benchmarking, offers a solution to these challenges. Before investing in a new AI technology or purchasing a product for use, companies should focus more heavily on benchmarking it against classical approaches rather than only against its neural peers and emphasize the use of their own data rather than standard benchmarking datasets. Most importantly, companies should require AI developers to provide the results of standard explainable AI tests for their algorithms, documenting what variables they are most reliant upon and the stability and actual predictiveness of those variables.
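As one hedged illustration of what such an audit might look like, the sketch below benchmarks a small neural model against a classical Naïve Bayes baseline on a buyer’s own data, then uses permutation importance to report which variables the model relies on and how stable that reliance is across repeated shuffles; the file name and columns are placeholders, and this is a minimal sketch rather than any standardized test suite.

# Minimal sketch of the audit described above: compare a neural model to a classical
# baseline on your own data, then report which variables it leans on and how stable
# that reliance is. File and column names are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Assumes numeric feature columns plus a binary "label" column in the buyer's own data.
df = pd.read_csv("your_own_data.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = GaussianNB().fit(X_train, y_train)
neural = MLPClassifier(max_iter=1000, random_state=0).fit(X_train, y_train)

# The cardinal rule: does something simpler do the job just as well?
print("Naive Bayes accuracy:", baseline.score(X_test, y_test))
print("Neural net accuracy: ", neural.score(X_test, y_test))

# Which inputs does the neural model actually rely upon, and how stable is that
# reliance? A large std relative to the mean importance is a warning sign.
result = permutation_importance(neural, X_test, y_test, n_repeats=30, random_state=0)
report = pd.DataFrame({
    "feature": X.columns,
    "importance_mean": result.importances_mean,
    "importance_std": result.importances_std,
}).sort_values("importance_mean", ascending=False)
print(report)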
Companies should also provide small sample or artificial datasets that mimic the characteristics of their own data and request stability metrics for each test point, showing how much slight changes in the attributes of each input would cause it to yield different results, in order to understand how robust the algorithm is on their own data. Understanding what an algorithm sees is especially important in regulated industries where an algorithm that focuses on the wrong variable could have substantial legal and societal implications. An AI company building a mortgage or rental evaluation system might go to great lengths to ensure its algorithm does not have any inputs related to race. Despite its best efforts, the system may eventually learn to infer race based on an unexpected combination of inputs that were not previously known to encode race. In turn, a major apartment rental company that applies the algorithm and unknowingly systematically rejects selected applicants can face enormous legal liabilities, despite having conducted what it believed to be due diligence in certifying that race was not used as a factor in its evaluation process. Had the company subjected the algorithm to a series of explainable AI tests, it could have uncovered this bias from the beginning. In the end, explainable AI is beginning to shine a bit of light into the opaque, black-box workings of the deep learning revolution. Companies should take advantage of these new insights to more thoroughly evaluate the technologies they invest in and apply to their businesses.
25a9dfae2d3b1b15017bdb7a41f92cd8
https://www.forbes.com/sites/kalevleetaru/2019/08/25/so-much-of-our-lives-have-been-exposed-through-breaches-we-have-no-privacy-left/
So Much Of Our Lives Have Been Exposed Through Breaches We Have No Privacy Left
So Much Of Our Lives Have Been Exposed Through Breaches We Have No Privacy Left It is ironic that we still talk about the quaint notion of “privacy” in a digital world in which we barter away our privacy for the privilege of being able to waste our days watching an endless parade of funny cat videos. Yet while we focus on the privacy risk of social media sites and online behavioral and interest tracking, the sad reality is that our personal information is being hemorrhaged every day by data breaches of both online and offline companies and governments over which we have no control. So much personal data has been released by these companies that one must ask whether privacy even exists anymore. It is a sad commentary on cybersecurity that breaches have now become so common that they rarely even make the news anymore except when a new record is set in terms of data lost. We are so accepting of the inevitable loss of our personal data that we no longer even blink when yet another business sends us a letter notifying us that it accidentally handed over all of our personal data to yet another hacker. Breaches have become so common now that the media no longer even deems them worthy of attention despite ever-larger losses of personal information. The timeline below shows the percentage of combined airtime on CNN, MSNBC and Fox News from July 2009 to present that mentioned “cyberattack” or “hacked” or “data breach” using data from the Internet Archive's Television News Archive processed by the GDELT Project. Combined airtime on CNN, MSNBC and Fox News July 2009 to present mentioning “cyberattack” or “hacked” or “data breach.” Kalev Leetaru Interest increased steadily from 2013 to 2016, but has declined sharply year over year since then, showing that despite an almost daily drumbeat of major breaches, breaches don’t even warrant a mention on the news anymore. Similarly, the public isn’t even searching for breaches anymore. The timeline below shows US search interest in the word “hacked” (in blue) and “breach” (in orange) according to Google Trends from 2004 to present. Google Trends relative search interest for “hacked” and “breach.” Kalev Leetaru The word “hacked” really took off from November 2009 through May 2011, but has declined almost linearly since July 2012 other than a peak in September 2014 with the Sony hack. The word “breach” has slowly increased in search interest, but attracts only a small volume of background searches other than spikes around specific high-profile incidents. While these trends may partially reflect changing language used to describe data breaches, the two graphs show that both media coverage and public interest in breaches are steadily declining as the public simply becomes acclimatized to the idea of a privacy-less world. In many ways our relentless focus on social media privacy and behavioral and interest-based profiling is misplaced given that far more intimate data is released about us every day by the companies and government agencies we do business with or which acquire information about us without our knowledge. In the end, perhaps we should just accept that in a world filled with breaches, privacy is nothing more than a distant memory from a time long past.
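For readers who want to chart a similar trend themselves, here is a minimal sketch that plots a smoothed keyword-mention series; it assumes a CSV export with date and pct_airtime columns, which are illustrative names rather than the output format of any particular tool.

# Minimal sketch: plotting the share of airtime mentioning breach-related terms.
# Assumes a CSV export with "date" and "pct_airtime" columns (illustrative names).
import pandas as pd
import matplotlib.pyplot as plt

mentions = pd.read_csv("breach_mentions.csv", parse_dates=["date"])
mentions = mentions.set_index("date").sort_index()

# A 30-day rolling mean smooths the daily series enough to expose the longer trend
# described above: rising coverage into 2016, then a year-over-year decline.
mentions["rolling_pct"] = mentions["pct_airtime"].rolling("30D").mean()

mentions["rolling_pct"].plot(title="Share of airtime mentioning breach-related terms")
plt.ylabel("Percent of captioned airtime")
plt.tight_layout()
plt.show()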
905c19797cb641dbd05744a590e0708e
https://www.forbes.com/sites/kandywong/2014/07/02/nothing-less-than-democracy-a-voice-from-hong-kong/
Nothing Less Than Democracy: A Voice From Hong Kong
Nothing Less Than Democracy: A Voice From Hong Kong At 6 a.m. this morning, I finally returned home exhausted after attending what authorities here have labeled an "illegal" protest. I was there to act as a neutral observer of the protest march that had started some 13 hours earlier, when hundreds of thousands of people took to the streets to show their support for democracy. Organizers said that 510,000 people (versus 98,600 people counted by the police) took part in this year's march, which is an amazing number and a new record for public rallies like this one since 2003, when scenes of 500,000 people taking to the streets were splashed across all the major publications. Although the huge size of this year's crowd may have taken many by surprise, even more surprising was that over 500 of them were willing to stay behind and risk arrest by the police for staging an overnight sit-in. They were there as a rehearsal for future demonstrations if the government doesn't fulfill its pledge of democratic elections in 2017. Organizers of the sit-in said it would end at 8 a.m. The police actually began clearing the area at around 3 a.m., and the process was not completed until 8:30 a.m. The police said protesters may face charges at a later date. A scene from the July 1, 2014 march in Hong Kong. Credit: Fung Ka Keung, a participant in the march. Just days before the demonstration, more than 790,000 residents had participated in an "unofficial" referendum that was not organized by the Hong Kong or Chinese governments. They were voting on different proposals for choosing the city's next top leader in 2017. For a poll that doesn't promise a legally binding outcome, 790,000 participants is an awesome figure. The mouthpiece media of the Chinese government lambasted the referendum as "illegal." But with more than one-tenth of the city now expressing their views, it’s an expression of public opinion that shouldn't be ignored. Over a year ago, I had been struggling to decide whether I wanted to continue living in New York after completing my master's degree there or return to Hong Kong, my hometown. Mostly due to reasons related to my work, and with great reluctance, I came back. But as I look back now, I have no regrets whatsoever. I'm thankful for the opportunity to bear witness to the events unfolding here now. Hong Kong’s political environment has reached a boiling point as the 2017 election rapidly approaches. It was guaranteed in the Basic Law – a sort of mini-constitution for Hong Kong – that residents would have the right to pick their future leaders for the coming term under the “one country, two systems” principle set before the 1997 reunification. In the 17 years since the handover, many Hong Kongers have come to expect that democratic elections mean an open nomination of candidates that would provide them with options for casting their votes. But recently the Chinese government has been suggesting that only a limited number of candidates who had been prescreened by a nominating committee would be allowed onto the ballot. What’s more, the Chinese government recently issued a white paper to assert its total authority over Hong Kong. Recalling the days when I was in New York, the concept of democracy was well understood by everybody. There was no need to talk about what it actually was. However, such a simple idea suddenly becomes overly complicated here, in what was once a British colony. As the third generation of my family in Hong Kong, I was educated under the British system.
As a little girl whose dad supported our family with his work in the shipping industry, I was taught that Hong Kongers could be proud of the city's independent judiciary, free flow of information and clean government. These have been the cornerstones of the city's success in the past, and we recognize the need to preserve them for future generations. The organizers of Tuesday's protest march led the public with a chant that said: Hong Kongers have to save our home by ourselves. We have to choose our own government. That was a powerful message because many people now feel that their way of life is under threat from short-sighted political leaders willing to sacrifice their rights and freedoms for personal gain. Although democracy has not always been a high priority here in the past, now more than ever, people recognize the need to select leaders who will work to safeguard the institutions that led to the city's success. Like so many other people here, my grandparents came to this city as refugees to escape the civil war in China. Over the years, we developed a solid sense of belonging to Hong Kong. The struggle that's happening now is not just a matter of abstract political ideas; much more importantly, we’re seeking to defend an identity built across several generations. Follow me on Twitter @WongKandy and on Forbes.
493d0e56763530205d03883acb59ff58
https://www.forbes.com/sites/karagoldin/2017/02/01/hint-water-kara-goldin-morning-routine/
How Hint Water's CEO Spends Her First 45 Minutes Every Day
How Hint Water's CEO Spends Her First 45 Minutes Every Day For busy entrepreneurs, creating a morning (or evening) routine can be a great way to instill some consistency in your life, especially if you travel often or are juggling professional and personal commitments. It is rare that I have a full week at Hint, Inc.'s San Francisco headquarters, and some months I’m traveling about 50% of the time. So, for me, starting the day in a routine way is of utmost importance, and that routine always begins with exercise. I live in Marin County, California—a hiker’s paradise—and every morning before coming into our office, I take my dogs on a 45-minute walk or hike through the hills. Kicking off with exercise resets my day after a good night’s sleep, clears my head, and energizes me. Exercise is important to me, so it’s also great to get it out of the way first thing. Coming back from a morning walk with my dogs, I know that I can take on whatever the day brings me—and I don’t have to worry about trying to fit in a workout later on. Watch the video below to hear more about the importance of a morning routine. Do you have a morning routine? Share below. The Kara Network is a digital resource for advice and discussion for entrepreneurs, by entrepreneurs. Kara Goldin is the founder and CEO of San Francisco–based Hint, Inc., which produces the leading flavored water with no sweeteners and nothing artificial.
f9a6f4e2134b8e0c441b2dc487dbc39c
https://www.forbes.com/sites/karagoldin/2017/12/14/why-being-an-industry-outsider-can-be-an-advantage/?sh=4b83d16761de
Why Being An 'Industry Outsider' Can Be An Advantage
Why Being An 'Industry Outsider' Can Be An Advantage A unique perspective can often lead to big ideas in a stagnant industry Shutterstock When I founded hint, I was the definition of an “industry outsider.” Having spent my entire career working in tech or media, everything about the beverage industry was foreign to me — and at first, that felt like a massive disadvantage. But over time, I began to view being an outsider as less of a scarlet letter and more of a badge of honor. More importantly, I came to realize that it’s not a disadvantage at all, but rather an advantage. And a big one at that. Advantage #1: You’re more curious When you’re curious, you ask more questions; and when you ask more questions, good things happen. I’ll give you an example. In the very early days of my company, bottlers were basically telling us, “I’m sorry but it’s just not possible to produce this product without preservatives.” My natural response was to ask “Why?” over and over until I got a satisfactory answer, and as it turned out, nobody really had one. In the end, it was possible to produce our product without preservatives, and the only reason we discovered that was because I was a curious outsider in need of answers. Had I been working in the beverage industry my whole life, I probably would have taken the bottlers’ initial answer as gospel and never asked the all-important “Why?” Advantage #2: You think about things differently In order to be a successful entrepreneur, you must think of yourself as a disruptor. And if you’re going to disrupt an existing industry, it’s crucial that you bring a fresh perspective to the table and think about things differently. That’s why being an outsider is so advantageous. As a result of having worked in a different industry, you naturally have a unique approach to solving problems, building a product, hiring a team, etc. Plus, you have a wide array of past experiences to draw on that your competitors — particularly those who have been entrenched in the industry for decades — simply do not. That’s powerful. Advantage #3: Your past career left an imprint on you (even if you don’t realize it) When I worked in tech, not once did I walk into a room and hear people saying, “That’s not possible.” Instead, I’d hear things like, “Who do we need to get in the room to make this possible?” and “I wonder when someone will figure out how to do this.” In the beverage industry, the mood isn’t quite as hopeful. I can’t tell you how many times I’ve heard the phrase, “It can’t be done.” And look, oftentimes the “It can’t be done” people have logic on their side. Maybe it really is impossible. Maybe they’re right. But at the end of the day, I view my approach of questioning everything as a competitive advantage, particularly in an industry that generally lacks innovation. And I have my prior career in tech to thank for that. Way too many people view being an industry outsider as a hurdle to overcome, when in reality, it’s the most powerful weapon in their arsenal. As someone who has successfully transitioned careers, I encourage everyone I meet to keep an open mind when it comes to their career trajectory — and most importantly, to never say no to an opportunity just because they feel like they might be behind the eight-ball (because they’re not). The skills, experiences and fresh perspective you bring from a prior industry may be just what’s needed to shake things up in a new one.
ab7c0197d3dbaf1d8f20df72f01a1e4d
https://www.forbes.com/sites/karagoldin/2018/04/20/why-meetings-on-the-move-should-be-the-new-normal-and-how-to-ensure-theyre-productive/
Why Meetings On The Move Should Be The New Normal (And How To Ensure They're Productive)
Why Meetings On The Move Should Be The New Normal (And How To Ensure They're Productive) Shutterstock Whatever my day involves, I always try to incorporate plenty of exercise. Every morning I wake up early and take my dogs for a long walk. At the office I work at a standing desk. And when I take a phone call or have a one-on-one meeting with someone, I like to do it on my feet. Steve Jobs was a big fan of walking meetings, especially if it was the first time he was meeting someone. His biographer, Walter Isaacson, recalls Jobs insisting their first meeting about the book happen on foot. The writer soon realized that “taking a long walk was his preferred way to have a serious conversation.” Jobs’ attitude to too much sitting continues at Apple. Last year, the company radically redesigned its office chairs. Typically, office chairs are engineered to support hours of sitting at your desk. Apple’s new Pacific Chair is focused on blending in with the less formal interior design of the modern office. It acknowledges that laptops and mobile devices mean modern workers are no longer tied to their desk. Many businesses now offer break-out spaces and casual meeting rooms filled with bean bags and designer couches as a way of encouraging collaboration. But these alternatives still involve sitting down. Even the short, stand-up meetings used by agile teams are based on the idea that no seating encourages getting through the agenda quickly so everyone can sit down again. People are increasingly aware of the benefits of short breaks from our desks and we like taking the stairs so our wearable devices can count those extra steps. But despite evidence that walking increases creative output by 60 percent, the idea of actually working on your feet (or on the go) still hasn’t taken hold. I believe a lot of this resistance is due to our over-reliance on meeting accessories. We can’t imagine being productive without PowerPoint, Evernote, whiteboards and Post-its. My experience is that walking and talking with someone are often my most effective interactions. Here are five tips for having more productive meetings, while incorporating exercise into your working day. Stick to one-on-ones When you’re walking side-by-side, it’s harder to direct conversation and attention to multiple people. That’s why one-on-ones work better for walking meetings. I find that they’re great for catching up with someone from my team or discussing an issue with a partner or vendor. It’s also an interesting way to interview a potential new hire, especially if you want to see how they handle unexpected situations. Plan a route If your meeting is scheduled for 30 minutes, plan a walk that will take that long. Just remember to take into account the fitness of the other person. It’s ok to include a 10-minute rest stop on a park bench halfway round if needed. Make sure you know exactly where you’re going. You don’t want to be distracted by checking directions or getting lost. Review relevant documents in advance At my company, we keep our business processes as simple as our products. But if elaborate presentations and briefing materials are necessary, I like Amazon CEO Jeff Bezos’s idea that relevant documents should be shared and reviewed ahead of the meeting. This is crucial for walking meetings so be clear that you expect documents to be shared in advance and build time into your schedule for reviewing them. Embrace silences Meetings often get sidetracked because people end up talking just for the sake of it. 
When you’re gathered around a conference table, it feels wrong if nobody is speaking. But silence is often useful. It’s time to think, to consider what’s been said instead of just responding for the sake of it. Silences are more natural when you’re walking because you’re both moving. Embrace them as an opportunity for promoting thoughtfulness. Follow-up with notes immediately When you return from your walk, make notes before you get distracted by your inbox, phone and everything else going on in your day. You won’t have scribbled post-its or an Evernote to look back later so this is your only chance to capture any major takeaways. It’s helpful to call out these key points during the meeting. Simply saying “let’s make sure we remember that” is a great way of ensuring something sticks in your mind. It also encourages more active listening during the meeting. Walking meetings boost productivity and creative thinking. But even if you’re having one of those torturous, going-nowhere discussions, at least you’re not stuck in a conference room wondering whether you should just walk out. You’re out walking and that’s never a waste of time.
c561d8ece09d3f3ccc4a1f537a4f99eb
https://www.forbes.com/sites/karanmehandru/2019/10/22/slack-and-zoom-prove-the-future-of-work-is-agile/
Slack And Zoom Have Proven That The Future Of Work Is Agile
Slack And Zoom Have Proven That The Future Of Work Is Agile It’s unsurprising for Google, Facebook, and Uber to be household names, but when enterprise software makes it into the mainstream, you know that change is afoot. Indeed, go to any office and you’ll hear people talking about “Slacking” over information or “Zooming” into meetings. Similarly, just a few years ago it would have been hard to imagine an enterprise software company being well known enough to make a direct listing viable. Yet Slack direct listed onto the NYSE successfully, and other high-growth enterprise companies are considering following suit. A new breed of enterprise SaaS companies has achieved mainstream recognition thanks to rapid adoption. Zoom has 50,000 corporate customers, and Slack has 10 million daily active users. Microsoft responded to Slack’s growth with a competitor called Teams, which quickly became the fastest-growing application in Microsoft’s history. In only two years, it garnered 13 million daily active users and 19 million weekly active users. Notably, it’s not just startups, the typical early adopters for B2B technology, who are using these tools. As of their filings, 50 percent of the Fortune 500 were using Zoom, and 65 of the Fortune 100 were using Slack. Even more--91 of the Fortune 100--are using Teams. Slack, Zoom and now Microsoft’s customers recognize an increasingly clear business truth: The future of work requires agility--not just for startups but for every company. As Slack stated in its S-1, “In an increasingly dynamic world, the fundamental business advantage is organizational agility.” Business agility comes in many forms, but at their core, products like Slack, Zoom and Teams are enabling companies of all sizes to gain agility in three critical competitive arenas: collaboration and communications, talent acquisition and talent management. AGILE COLLABORATION AND COMMUNICATIONS POWER AN ALWAYS-ON BUT NOT ALWAYS IN-PERSON WORKFORCE Zoom and Slack both increase the speed at which teams collaborate, whether team members sit in the cubicle next door or in a country across an ocean. Leveraging the fast broadband connections and ubiquitous mobile access that are now easily accessible to remote workers, as well as collaboration tools like Zoom and Slack, professionals can work anywhere, anytime, adopting a flipped workplace model in which physical in-person meetings take place only rarely and as needed. In fact, by 2025, around 70 percent of the workforce will work remotely five or more days per month. As usage has spread, communication tools like Zoom and Slack have ignited an expectation of immediacy that changes not just how teams communicate in the narrow sense, but more broadly and fundamentally how they work. Digitally enabled instantaneous collaboration has gone so mainstream as to become an expectation in the modern workplace. In astonishingly little time, collaboration tools like Zoom, Slack and Teams have become integrated into the business workflow. It has become hard to imagine co-workers collaborating without them or similar tools. AGILE RECRUITING ENABLES COMPANIES TO HIRE THE BEST TALENT, NOT THE CLOSEST TALENT Today, top talent commands a premium. Even amidst concerns about the economy, employers still have far less leverage than they’ve had in decades. Employees have choices and can demand flexibility from their workplaces.
That’s of course the appeal of WeWork and other co-working spaces for the physical workplace, but whether it’s at home, a co-working space or a Starbucks, distributed work is only now becoming more common because powerful tools like Slack, Teams and Zoom make working remotely as effective as working in person. That’s why there’s been such a surge both in the number of people working remotely and in their incomes: Remote workers now make on average more than workers who commute. And it’s not just the actual work that can be remote: Even candidate interviews can be conducted remotely with Zoom. Ben Thompson from Stratechery famously compared WeWork’s potential to that of Amazon Web Services, but I think that analogy belongs to products like Slack, Zoom and Teams. In the same way that Amazon Web Services so dramatically reduced the cost of software development that it made software startups easier to launch than ever before, tools like Zoom and Slack are dramatically reducing the barriers to distributed working while ushering in new and exciting opportunities for startups to address net-new challenges and opportunities like culture building, coaching and development, and collaborative product design. In the old days (like a decade ago), startups would often launch with distributed teams in order to run lean. As they grew and took on venture funding, they were expected to settle in tech hubs. Those long-standing norms are being challenged because employees want to live where they want to live, not where their employers want them to live. In order to access top talent across the globe -- not just confined to tech hubs -- companies are increasingly staying distributed longer, even as they scale. Auth0*, valued at more than $1 billion, and the high-growth company Zapier have both managed to reach scale with completely distributed workforces. Companies that initially set out to become distributed due to costs or the search for talent have discovered what other non-natively distributed companies are beginning to learn: that employee distribution, when well-implemented, yields organizational agility. If you want to have the most productive, agile and efficient team, you’ll need multiple offices--or otherwise no office at all. The basic requirements for making distributed work viable include communications and collaboration tools like Slack, Teams and Zoom, as well as emerging collaboration tools that are building out additional layers of lean management and organizational agility. AGILE CORPORATE CULTURES ARE NOW TALENT MANAGEMENT BEST PRACTICES At the end of the day, we all long for emotional connections. That desire exists whether you work in an office or in your pajamas at home. That’s why it’s so powerful that Slack, Teams and Zoom facilitate the building of cohesive team cultures. Zoom rooms and Zoom meetings make it easy to run or join video meetings, which isn’t just great for customer conversations; it’s also great for team building. Virtual standups across offices are far more powerful through video conferencing than they are through voice alone. Some companies even facilitate buddy chats, essentially virtual coffee dates to help employees build camaraderie and alliances without ever having to meet in person. Through video conferencing, we can all feel connected even when we’re physically distributed. The more we become physically disconnected, the more we rely on technology to stay culturally connected.
LOOKING AHEAD As more future-of-work startups set their sights on the enterprise market, leaders will only push harder to bring agility and flexibility into the traditional workplace. It’ll extend beyond teams to entire organizations seeking lean management principles. While organizational agility was once an accidental byproduct, over time it’ll become the goal, not just the outcome. That’s because at its core, organizational effectiveness requires doing the same or even more with less--which is increasingly impossible to execute without agility. Whether the economy is strong or weak, businesses always want to reduce costs while boosting productivity. These emerging products are a symbol of the future of work to come--or, quite simply, the future. Maybe the workplace of the future won’t be fully distributed for a while. Maybe more small and midsize employers will have a few offices instead of just one or two, but over time, much of the future of work will be largely distributed. It’s a secular trend that isn’t going away. Business leaders who recognize the future of work by integrating collaboration and communications tools into their business workflows; by leveraging these tools to recruit the best talent, regardless of where they live; and by integrating an agile corporate culture into their talent management practices will not only enjoy successful IPOs (or direct listings!) if they haven’t already; more importantly, they’ll comprise the S&P 500 of the future. Thank you to my colleague Allison Baum at Trinity for her research assistance with this piece.
b36f731299d3665f74d45756aaa0f92d
https://www.forbes.com/sites/karastiles/2017/11/01/heres-how-the-gender-gap-applies-to-retirement/
Here's How The Gender Gap Applies To Retirement
Here's How The Gender Gap Applies To Retirement Women are more likely than men to struggle financially in retirement. Getty Your golden years might be a little less golden if you’re a woman. In its 2016 study “Shortchanged in Retirement,” The National Institute on Retirement Security explored financial hardships facing employed women, women approaching retirement and retired women. Co-authored by Manager of Research Jennifer Brown, the study identified that women are much more likely to face poverty in retirement than their male counterparts. The analysis attributes the gender disparity to what it calls the dysfunctional “three-legged stool” of middle class retirement: social security, a pension and personal retirement savings. “After decades of restructuring in retirement benefits and stagnant household incomes, this three-legged stool is broken, especially for women,” report the authors. “Women are 80 percent more likely than men to be impoverished at age 65 and older.” Relying on those three traditional sources of retirement income still leaves many women struggling. Women’s higher likelihood of part-time employment, higher rates of caregiving, longer lives and the wage gap—women earn around 80% of what men earn, according to 2016 census data—are all cited as culprits in the financial plight of retiring women. Here are three key findings from the study: More women are working as they approach retirement. The percentage of women age 55-64 in the workforce increased from 53% in 2000 to 59% in 2015, hitting a high of 61% in 2010. There’s a glaring gender gap when it comes to retirement security. In 2013, women were 80 percent more likely than men to face poverty in retirement. Women’s varying backgrounds can impact financial stability in retirement. Factors like age, marital status and race can all impact the financial circumstances of retired women. For example, the gap widens as retirees age: women age 75 to 79 are three times more likely to be impoverished than men. The NIRS, a nonprofit, nonpartisan organization, advocates for public policies that strengthen financial security for retired Americans—especially those most susceptible to poverty—like enhancing social security for women and improving state-funded savings programs. To see retirement tips from Brown, who co-authored this study, explore Retirement Checkup, a feature that helps you pinpoint your retirement readiness and offers expert insight on how to improve your savings. Sources: The National Institute on Retirement Security, Bureau of Labor Statistics
0964daae35f16ce13c3d93c1eae6e1ea
https://www.forbes.com/sites/kareanderson/2012/09/19/become-an-opportunity-maker-with-others/
Become an Opportunity Maker With Others
Become an Opportunity Maker With Others Years ago, a board member brought me into a corporation to lead a team in creating two products that he felt would boost the stock price. Here's how it happened. In my vigorous interview of him for The Wall Street Journal, he described how the firm could fall behind without them, and I became fascinated by their capacity to scale. He read my article. Then, much to my surprise and his, as he later told me, as he is a very deliberate thinker, he called and offered me the job of leading the new product research and design team. Ah, what an unexpected and serendipitous opportunity for me to see business from the other side, I thought, so I rashly agreed. 1. When a Random Event Sucks You Into an Opportunity… “Success is random so court serendipity” ~ Frans Johansson There were only a couple of problems with my coming into the company as an outsider. As a journalist, I’d never led a team, did not have the relevant technical experience and was ten years or so younger than the mostly ex-military and highly technical folks I was to lead. Oh, and my new boss had expected that he’d be leading the team, bringing in more resources to do so. That may be why, in his welcoming email, he directed me to the wrong office and didn’t inform my direct reports that I was arriving that day. One upside of being a business reporter is that it’s actually an accelerated learning experience. Yet it’s one thing to “consult” with a firm, or report on it for a news story, and quite another to actually be in the trenches, day to day, attempting to accomplish something, especially when others are motivated to make you fail. Cobbling together what I learned by interviewing and observing business leaders, here is the approach I boldly, well blindly, took and what I learned during that sometimes wrenching yet ultimately satisfying project team experience: 2. Upfront, be Upfront “You don’t have to be loud to lead” ~ Erika Andersen When I finally found my office and introduced myself, I asked for an all-hands meeting. I’d brought along a longtime friend and unflappable graphic facilitator. Instead of the traditional introductions I just asked them to tell me their names, going around in a circle. Then I took an approach I learned from Richard Branson. I said that I had three goals for the meeting, one for “us” and one for each of them, and that Tom, our graphic facilitator, would draw our unfolding conversation pictorially on the white wall so we could literally focus, not on me or one of them, but on our conversation as this crucial first experience together. 3. Craft a Clear Top Goal to Create the Collective Context for Making Better Choices “Talent wins games, but teamwork and intelligence win championships” ~ Michael Jordan Like reverse engineering, starting by clarifying the end goal helps us stay on track as we move towards it. So we began by discussing the main benefits that the two products should offer and the markets they could serve, then prioritized both. The more specific we were in that conversation, the more quickly and easily we could communicate and agree on changes as we learned more in later stages. 4. How Can We Each Use Our Best Talents on Our Strongest Interests? "Look for the best in people to build a fantastic team" ~ Richard Branson Next, in light of our shared picture of the most important attributes we wanted in our two products, each person was asked to specifically describe the parts of the project where they most wanted to take the lead and why.
In this step you learn a lot about your colleagues, from how much they understand themselves, how willing they are to be upfront about what they really want, and how articulately they can express themselves. • To identify your strengths consider reading Now, Discover Your Strengths by Marcus Buckingham. To better understand your temperament, read Mindset by Carol Dweck and Learned Optimism by Marty Seligman. Gradually, as we moved around the circle, it also became clear where we had overlapping talents and gaps. With that knowledge on the table, I asked for people to voluntarily negotiate who would take what lead when there were overlapping interests or talents. We then discussed what talents were missing and their recommendations of who, in the company, would be the best fit to recruit and how. • To learn more about productive collaboration read Collaboration by Morten Hansen. 5. Agree on Rules of Engagement to Reduce Ambiguity and Hasten Trust “Teams should be able to act with the same unity of purpose and focus as a well-motivated individual.” ~ Bill Gates Counter-intuitively, rules, when jointly agreed upon, give a group more freedom. People are more likely to trust each other and get in sync faster when they have a shared view of acceptable behavior. This proves true whether they think they know each other well, or have never before worked together. Yet these benefits of better performance often don’t happen when those who must follow the rules are not allowed to participate in adjusting them, as some grieving family members believe happened with the Rules of Engagement that SEAL Team members must follow. Among the rules to consider: • What technology will be used to collaborate so that everyone is seeing the same information, discussions and progress • What purposes call for in-person meetings, and how will they be conducted? • Exactly how do we collectively agree on changes? One of my favorite rules of engagement is to be what Erika Andersen dubs a “fair witness”, objectively reporting what happened, what didn’t work, what we then did differently and what’s happening now. • Learn how to reinforce your rules of engagement and productively communicate by reading Talk, Inc. by Boris Groysberg and Michael Slind; Well Said! by Darlene Price; and Crucial Conversations by Kerry Patterson, Joseph Grenny, Ron McMillan and Al Switzler. 6. Take a Lean/Loose Approach to Productivity and Camaraderie “The pool of shared meaning is the birthplace of synergy” ~ Kerry Patterson, Crucial Conversations There are many payoffs for getting specific sooner. Collectively discussing these three topics upfront jump-started our understanding of each other and set a direct and collaborative tone for the culture we jointly created – and it would not have worked nearly as well if I’d not said, upfront, that after we had this very focused, lean meeting in the morning (and it did take up the full morning) we would have a “loose” time over a luscious lunch I’d asked my new secretary to have delivered at noon. I’m betting that Eric Ries would agree that a “lean” approach is helpful for any kind of organizational innovation. I’d cribbed that lean/loose lesson from an SVP at Siemens whom I’d interviewed in Berlin. • Learn more about the power of simplifying processes in The Laws of Subtraction by Matthew May. 7. Stick to Our Sweet Spot of Shared Interest “We need to play each other's instruments.”
~ Steven Johnson That expressly identified shared sweet spot can be your group glue, holding your team together through rancorous conversations or otherwise tough moments. With the steps we took that morning, we set the stage to be more frank and open with each other sooner, especially about where we disagreed, and when we needed help or had failed. We were more likely to connect rather than choke under pressure. I am not saying it was always easy after that first day, yet the hard times were resolved more quickly and cleanly, and we were able to become close-knit around our strong sweet spot of shared interest and commitment. One sign that we, as a team, were in sync with each other was that we recognized, at almost the same time, when someone wasn’t using her best talents or adhering to our Rules of Engagement, and we were in unanimous agreement to ask her to leave. • Learn more in Change-Friendly Leadership by Rodger Dean Duncan, Being Wrong by Kathryn Schulz and Leading So People Will Follow by Erika Andersen. 8. Set the Stage for You and Your Company to Succeed, Going Social “Many ideas grow better when transplanted into another mind than in the one where they sprung up.”
~ Oliver Wendell Holmes, Sr. Those “Us” developing steps lay the groundwork, even today, as companies recognize that, to survive, they must become more social enterprises. Yet “social” only scales when both our behaviors and our technology reinforce collective action. I write this as Salesforce is hosting its huge Dreamforce conference, double the size of last year’s, across the Golden Gate Bridge from me in S.F. As Teresa Amabile and Steven Kramer suggest in The Progress Principle, we all yearn for more meaningful work where we get to use our best talents together, aptly supported by social enterprise software that leverages our capacity to accomplish greater things together. Then we are more likely to learn and innovate faster, by making what Peter Sims calls Little Bets. In so doing, we become more resilient together, and better able to stay flexible and to recognize and seize the random events that Frans Johansson describes in The Click Moment that lead to breakthroughs. • To learn more about supporting your organization in going social, read Socialized by Mark Fidelman; Social Business by Design by Dion Hinchcliffe and Peter Kim; The Pursuit of Social Business Excellence by Vala Afshar and Brad Martin; and Smart Business, Social Business by Michael Brito. Now, are you ready to turn the page to the adventure story you are truly meant to live as a sought-after Opportunity Maker with and for others?
81b4684640cde9e49603169dc1f0071a
https://www.forbes.com/sites/kareanderson/2012/09/30/how-to-succeed-since-success-is-random-and-savor-life-with-others/
How to Succeed, Since Success Is Random
How to Succeed, Since Success Is Random Little did Abigail Washburn know that her life would forever change after she went to a party where she heard a record of the traditional folk and bluegrass singer and flatpicking guitar player Doc Watson singing Shady Grove. She had what Frans Johansson dubs a click moment. She’d already become proficient in Chinese because she planned to study law in China, with the goal of improving U.S.-Chinese relations. A Life That Turned on a Dime, As Yours Can Too After hearing Watson, she decided she wanted to sing American folk songs and play the banjo, so she headed to Appalachia. Several lessons later, she wound up at a Kentucky Bluegrass Music Festival, where she was sitting in a hallway one night, when strangers asked her to jam with them. A record executive heard her and put her under contract, so she went to Nashville where she wrote songs and began singing at concerts. Later this curly-haired American did take her banjo to China to sing their songs and hers in Chinese, spurring audiences to sing along. Eight years and hundreds of concerts later, Washburn shared her saga at TED where she recalled singing in a relocation camp in an earthquake disaster zone. Afterwards a young girl came up to ask her, “Big sister Wong, can I tell you about the song my mother sang to me before the earth swallowed her up?” I was moved to joy and tears, hearing Washburn sing with her banjo, and tell her story of a life that turned on a dime. Several times. Looking back, haven’t there been unexpected times in your life when you met someone or heard an idea that positively changed the course of your life? Pivot to a More Profitable Possibility When you are open to serendipitous experiences you are more likely to have click moments, enabling you to pivot more than once into new, more successful directions, as Mandy Williams discovered after spending “thousands of hours” crafting a business plan for her entertainment production company. Her first pivot came when her sister’s husband lost his job, she told Marcia Layton Turner for an Entrepreneur magazine article on the value of “Short but Sweet” business plans. After collaborating with her sister on how to get through the stress and money issues her sister faced, they had a Click Moment. They realized that others, in this wobbly economy, needed similar financial planning skills. Again Williams tackled the task of creating a business plan, this time for a life-skills TV show, jump-starting the business with a book they launched at a Neiman Marcus event where an audience member asked, “Why aren’t the lessons in your book being taught in the schools?” Click. They are now successfully selling a program for teaching personal financial literacy in schools. Courting Serendipity Invites More Productive Adventures Accepting that serendipitous events play a greater role in your life than those you have planned can make life feel more like an adventure, rather than a frightening series of situations against which we must protect ourselves. As well, others may be having similar click moments at the same time, an effect called Multiple Discovery. That happened with the invention of can openers, keyboards and rollaboards, notes Matt Ridley, citing What Technology Wants author Kevin Kelly. Recognizing that collective click moments can happen can make us feel more interconnected. If Frans Johansson, author of the new book The Click Moment, heard these true stories, he’d probably nod in quick understanding. Why?
Because, for example, the music moved Washburn to recognize, in several apparently random incidents, a more fruitful and satisfying way to use her talents. She recognized serendipitous moments where invention and reinvention can happen, and we can too. In this increasingly complex yet connected world, let’s see how to succeed with them. How Success Actually Happens We can be misled by some common beliefs about how success actually happens. Among them is the notion that companies and individuals must stick to a plan, rather than iteratively experiment, making what Peter Sims calls little bets, which Johansson builds upon by recommending purposeful bets, ones that cause you to take action sooner rather than later and then learn from what happens. As Silicon Valley venture capitalist Randy Haykin told Johansson, “The most important goal of a business plan is to show that a team is moving in some coordinated fashion toward a goal. The plan itself will be outdated within the month.” Two often intertwined instincts are our deep desire to understand why unexpected things happen, and to want control over the events that affect us or our business. This leads to another mistaken notion, especially in business. We are eager to believe that the success of a company or individual is determined by what they are doing right now, what Phil Rosenzweig dubbed The Halo Effect. For example, as Johansson describes in his book, “When Cisco is doing great, we can find an explanation for it. When it does poorly, we can find an explanation for that too… We would hate it if someone told us Cisco is doing well but we are not sure why…” Yet, citing Duncan Watts’s findings in Everything is Obvious Once You Know the Answer, Johansson points out that we are prone to accepting post-hoc rationalizations and to believing that our success is tied to our talent. Sure, as Malcolm Gladwell believes, some endeavors require unrelenting practice, perhaps 10,000 hours to master. They include basketball, chess, tennis and violin playing, Johansson found, yet a rapidly increasing number of lucky breaks spring from our capacity to recognize and seize unexpected opportunities. Talent is overrated, as Geoff Colvin points out. “Our power of predicting success is essentially zero” Instead, Johansson writes, “We can now begin to see the contours of a real paradox. Study after study indicates that our power of predicting success is essentially zero.” In my interview with Johansson he mused, “It is fascinating how we are so willing to accept randomness in falling in love, the unexpected way it happens, yet we resist believing that unexpected factors affect much of the rest of our life. Instead we should welcome and understand how to benefit from the serendipitous moments that can spur innovation, and more.” Three More Reasons to Focus on Finding Click Moments “The faster the world changes, the faster other people or organizations can catch up with you. The speed of discovering and sharing new business practices, marketing campaigns, products, or services has reached a fever pitch.” “The interconnected universe we are building across cultures, industries, and other barriers makes for a hyperadaptive environment, one in which a logical approach to strategy will fare worse and worse when others can easily copy and adopt successful practices, quickly diminishing their advantage.” “But this interconnectedness also increases the frequency of serendipitous encounters and unexpected insight and enables far greater rates of innovation.”
Stepping off your familiar path increases your chances of having click moments with disparate people with whom you share a strong sweet spot of shared interest. Reinforcing that notion, Future Perfect author Steven Johnson cites research suggesting that “diversity trumps ability”: in other words, a large, diverse group of non-experts often outperforms a small group of experts. Like Johansson, Johnson takes an optimistic view of our increasingly connected world. Johnson’s complementary prediction is that some of the most positive changes that will unfold in our fast-changing world won’t happen because of traditional capitalism or government initiatives but rather because of progressive peer networks, where shared-interest groups innovate faster. Two of my favorite examples of positive, proliferating innovation and camaraderie that can be experienced within the peer communities of shared interest are Quantified Self and Shareable. Unexpected, Click-Based Success Stories Click moments are key to many unexpected success stories, Johansson discovered. Here are two of my favorite click moments he cites in his book, which I paraphrase here to summarize: • Ben Silbermann and Evan Sharp met for a beer and discovered they both loved collecting. Click. Talking further they thought, “Why not develop a way to digitally display your collections?” That interest grew into a shared passion to create a service that looked obvious -- after it was created. That passion fueled their little bets so that, despite a lack of programming expertise, they managed to pull in others to create one of the web’s fastest-growing social networks, Pinterest. • Stephenie Meyer had no writing experience and knew nothing about vampires, yet waking up from a vivid dream about them, she felt passionately driven to share the story she dreamed. Serendipitous events skyrocketed her to fame in our connected world. As Johansson notes, her Twilight books have broken J.K. Rowling’s record of Harry Potter book sales and have all been made into movies “that have broken box-office records around the world,” according to Johansson. See Serendipity as a Way to Stay Relevant Here is another reason to adopt the click moment approach to your work and life. Meghan M. Biro, in her Forbes column, advocates reverse mentoring, a method that I believe spurs serendipitous discovery of unexpected shared sweet spots of mutual interest as well as social learning. Biro cites my former colleague at the Center for the Edge, John Hagel. “Formal schooling and degrees give workers about five years’ worth of useable skills” according to Hagel and others at Harvard Business Review. Consequently, staying open to serendipitous introductions increases the chances you’ll meet the right people and learn the apt information, as banjo-toting Washburn did, to stay relevant and, better yet, keep opening adventuresome chapters to the life you are truly meant to live with others. Recognize the Kinds of Randomness That Attract Opportunity Johansson cites three types: The "specific instant when an unexpected connection or event changes the trajectory of success." Become comfortable with having several failures if you are making several small bets that iteratively teach you more about the most successful approach.
That’s using “randomness to your statistical advantage.” Accept that “success must be a result of dozens, even thousands, of possible forces that change with every action and interaction.” Rely on your flexible, observant and wide-ranging interconnectivity with others rather than on a detailed plan of action. What Makes Click Moments Different From Other Ways of Finding Connections and Ideas? Recognize click moments in three ways, according to Johansson: “They tend to occur when two separate concepts, ideas or people meet.” “They are impossible to predict as to when, how or where they will happen.” You may recognize them because they often evoke emotional responses “such as happiness, awe or excitement.” The Hook on Which to Hang Serendipitous Collaboration Johansson advocates creating a “large hook” -- some visible service, product, event or other attractive force onto which others can latch so you can attract relevant people to your project. As humans, we are wired to be literal and tactile, so actually seeing something that you have created can spark others’ desire to improve, add to, or otherwise alter it – or suggest a completely new and better direction, as Mandy Williams and many others have discovered, perhaps including you. Be the Glue that Bonds Us Together to Leverage Our Greater Success If you found these ideas helpful for accomplishing greater things with others that you can’t do on your own, you may also be interested in Help the Helper by Kevin Pritchard and John Eliot. They describe proven yet unconventional practices from their work with sports teams for “building a culture of extreme teamwork.” Pritchard is the general manager for the Indiana Pacers and a former general manager for the Portland Trail Blazers, and Eliot is a consultant to professional athletes and coaches. While I have coached five pro athletes on becoming quotable, I am not an avid sports fan, yet I found the methodology in this book, tied to apt and moving true stories, to be inspiring, not just for the success their approach has generated but also for the positive character qualities inherent in it, including altruism, trust and courage. One of My Dreams I’d love to be on a conference stage, facilitating a lively conversation among Johansson, Johnson, Pritchard and Eliot about how to lead a meaningful life of camaraderie and accomplishment in this connected world.
a2249477cc57b3b5aca34107074780e4
https://www.forbes.com/sites/kareanderson/2013/07/21/what-to-do-before-happiness-happens-in-your-life/
What To Do Before Happiness Happens In Your Life?
What To Do Before Happiness Happens In Your Life?

The second part of this story is the real clincher for considering that happiness can be a choice. An elementary school teacher in rural Arkansas made a bracelet with a charm for each student in her class so she could continually remind herself of how much she cared about each one, and of her passion for teaching them. Here’s the rest of the story. She wakes up each morning with the painful fatigue that most face when they have the chronic, erratic and incurable disease multiple sclerosis. Many would give up and quit working, yet some don’t, as Shawn Achor shows in his new book Before Happiness.

Why do different people in the same situation find a way to feel happy and thrive while others get depressed, give up or worse? The answer is the cornerstone concept of Achor’s book. He travelled to fifty-one countries, speaking and conducting experiments involving people as diverse as Tanzanian kids living in extreme poverty and UK bankers who didn’t get year-end bonuses. Achor worked with organizations as diverse as The National Multiple Sclerosis Society, Zappos’ Downtown Project, Freddie Mac during the mortgage crisis and the online learning group CorpU. The secret, according to Achor, is that people in the same situation “were literally living in different realities.”

If you want to change your life, you first have to change your reality.

Here are Achor’s five steps towards experiencing more moments of happiness, which he feels are also keys to feeling more engaged, motivated and alive. I’ve taken the brash liberty to slightly alter them in ways that most help me, as you might want to do too, thus emulating his underlying “manage your reality” theme:

1. Choose the most valuable reality: See multiple ones in each situation and choose the one that is most likely to lead to positive growth.

2. Map your meaning markers: These are the specific things to identify to chart the best route to accomplishing your goal(s) towards that most valuable reality.

3. Find the X-spot: Use success accelerants to propel you more quickly toward your goal(s).

4. Cancel the noise: Boost your capacity to hear the helpful signals that reinforce your chosen reality and may attract the opportunities and resources that support it, and dim the noisy signals that can distract you from that reality.

5. Create positive inception: Amplify the effects of your positive mindset by contagiously spreading your positive reality to others.

Here are some happiness-boosting points from the book and related ones from elsewhere:

A. Pessimistic people “literally see a narrower range of opportunities and possibilities,” as Positivity author Barbara Fredrickson discovered. Fredrickson recommends a 3:1 ratio of positive to negative experiences for healthy living. Methinks I will strive for a higher ratio, more akin to John Gottman's "magic 5:1" for a healthy marriage. When feeling negative, people are blind to many options and go into fight-or-flight mode. To counter that downward spiral of perceptions and behavior, Achor suggests you map out your options as soon as possible, focusing first on drawing the possible paths for a successful outcome, because what you first map becomes most vivid in your mind. Then, look for “escape routes” to avoid worst possible outcomes.

B. To buttress that habit, know that pessimistic people tend to see events that feel negative as being permanent, pervasive and personal. 
In Learned Optimism, Marty Seligman offers concrete ways to dilute or even eliminate those happiness-sapping responses.

C. Just as football running backs run faster the closer they get to scoring, we can create a surge of energy and motivation once we reach the stage at which we deeply believe we can actually achieve our goal. “The X-spot is the exact moment,” Achor says, when “your brain realizes that attaining your goal is not only possible but probable, and it releases a potent stream of chemicals that help speed you up.” Notice, writes Achor, that “you work more diligently and efficiently when the completion of a big project is in sight?” Hint: find a way to make the end goal seem imminent so you benefit from the extra spurt of energy to make it come true.

D. You can also spur sales using this effect, by making a goal appear closer. Clark Hull dubbed this the “goal gradient theory.” The effect was demonstrated in an experiment in which a coffee shop gave customers a stamp card that rewarded them with a free coffee once the card had ten stamps for cups they bought. The closer customers got to being eligible for the free cup, the more frequently they came in for coffee. In a variation on that experiment, cited by Francesca Gino in her book Sidetracked, car wash customers who were given a loyalty card with the first two stamps already on it were more likely to visit frequently than those who got a stamp card without any stamps on it.

E. When you want to become more motivated about reaching a goal – or you want others to become more engaged in it – look backwards and notice and discuss how much you have already accomplished. University of Chicago professors Minjung Koo and Ayelet Fishbach call this nudge “escalation of commitment.” In behavioral economics this behavior is dubbed the “sunk cost” fallacy when we have made a bad decision yet are reluctant to change because we have sunk time, money, our reputation or other resources into the effort.

F. Because new or difficult tasks use up more of the main fuel in your brain (glucose) and thus weaken your willpower:

- Jump into your most important task before thinking much about it. You will be less likely to procrastinate or avoid working on it. That’s why morning runners sometimes have their clothes laid out by the bed the night before. Achor calls it the Twenty-Second Rule, “to craft a path of least resistance towards positive habits and away from negative ones.”

- Do your most important tasks toward your goal during your prime time of the day. For many of us, that’s in the morning.

- Give yourself more frequent rest breaks when doing things for the first time. That way you will be more motivated to persist towards your goal and more aware of, and thus happier about, the progress you make. Learn more about how in Roy Baumeister’s superb book, Willpower.

G. We are innately attracted to funny people -- perceiving them as “being smarter and more credible,” Achor discovered, and the reason may be that “to comprehend or create humor” we must be “able to see a version of reality that others might miss.” As you create your most productive reality you are able to experience positive emotions more often. Using Barbara Fredrickson’s language, you “broaden and build” the range of realities you see. Consequently, Achor found that you are more likely to see and speak to the humorous side of negative events, and attract smarter support sooner. 
I hope this quick sampling of insights and suggested actions from Achor’s idea-packed book nudges you to read more, on your path to greater meaning and happiness in life with others, and to share your favorite tips here.
959fae1a65dc22b4288a69d65051d07b
https://www.forbes.com/sites/kareanderson/2013/10/05/baby-bust-millennials-view-of-family-work-friendship-and-doing-well/
Baby Bust: Millennials' View Of Family, Work, Friendship And Doing Well
Baby Bust: Millennials' View Of Family, Work, Friendship And Doing Well

What most shocked me in the often-startling twenty-year study of Wharton graduates was that only half as many now plan to have children. Both women and men, in equal numbers, felt that way, yet their reasons are different, according to Baby Bust author Stewart D. Friedman. Whereas women at Wharton in 1992 felt “motherhood fulfilled their need to help others,” more millennial women now believe that they can serve the greater good by succeeding at work. On the other hand, for millennial men, “doing good” is increasingly connected to creating greater balance and harmony between work and family. They have become more egalitarian in relationships, including at work, and are less likely to think of themselves as the sole breadwinner, not surprisingly. Yet those of both sexes who want to be parents “don’t see a clear path toward it,” discovered Friedman.

What Are Some Of Their Top Worries?

They are more burdened by college debt, believe that work is more competitive today, and think they are less likely to attain their career goals than Gen Xers, so they are more focused on job security. Plus they recognize that they’ll have to work about 14 more hours per week than 20 years ago. Consequently they are more willing to accept what Friedman dubs “extreme jobs” and to job hop to get ahead. No wonder Dan Schawbel’s Promote Yourself is selling so well.

Following Their Passion Through Work Feels Farther Off

Somewhat poignantly, one woman who graduated in 2012 said, “Our career paths seem to be structured around opportunities that emphasize working in a rigorous environment to gain exposure to an industry and to develop a skill set that is transferrable, with the ultimate hope that we can one day leverage this experience to move into a job that is more specific to our actual interests and desired lifestyle.” Sounds like more top management, in the war for top talent, should read Bob Sutton and Huggy Rao’s 2014 book, Scaling Up Excellence, about how to support employees for a company to survive, and Steven Kramer and Teresa Amabile’s advice in The Progress Principle about cultivating happier, higher-performing employees by ensuring that they experience even small moments of meaningful, successful work more often.

Parity in Pay Has Caught Up, Upfront Anyway

At least for starting salaries, where women lagged behind by over $5,000 for the first year in their first post-college job in 1992, pay is now the same for women and men. Plus, according to Friedman, “women do not see themselves as being at a disadvantage."

Their Feelings About Love And Friendship Have Radically Changed

Unlike Gen X Wharton graduates, among whom men and women felt equally satisfied with their lives, millennial women are happier than their male counterparts about their “health, personal growth and friends.” Also, millennials don’t place as much value on long-term friendships and parenting as Gen Xers. Instead they rank friendship, second only to health, as the greatest determinant of a successful life.

Upside For Our Culture: More Life/Work Fit Options

Increasingly, millennial men and women see the need for those in dual-career relationships to jointly decide who should lean in, when and how. Like many others of us, they are skeptical about having it all. 
Yet, as Friedman concludes, “Millennial men are increasingly motivated to experiment with new models for how both partners can have more of what each wants in life.” Now that’s heartening news for us workers of all ages, and for all parts of our lives. How timely to discuss this during National Work and Family Month. No wonder this book garnered praise from The Athena Doctrine author, John Gerzema; Families and Work Institute’s Ellen Galinsky; New America Foundation’s Anne-Marie Slaughter; Sleeping With Your Smart Phone author Leslie A. Perlow; and Work and Family Researchers Network’s Ellen Ernst Kossek.

As Friedman once said, “Work/life is not only a social movement to benefit the next generation of children in our society, it’s a field with powerful ideas for cultural transformation that compels businesses to make more intelligent and humane use of people and technology.” For starters, towards greater work/life balance, Friedman recommends provision of world-class child care, family leave, more and longer school days “like other industrial and Western countries”, portable healthcare, lower college costs and student loan rates, a required year of public service for post-secondary school youth -- and many more suggestions in this must-read book for anyone who is interested in how we, as a nation, can make our lives and work more satisfying. “Large-scale change is grounded in small steps toward a big idea,” according to Friedman.
8f5351b33d5ed62a6d03a0bc8e9a99cf
https://www.forbes.com/sites/kareanderson/2015/05/11/cultivate-productive-enthusiasm-in-yourself-and-with-others/
Cultivate Productive Enthusiasm In Yourself And With Others
Cultivate Productive Enthusiasm In Yourself And With Others

“Enthusiasm is not the same as just being excited. One gets excited about going on a roller coaster. One becomes enthusiastic about creating and building a roller coaster,” suggests Bo Bennett, and I heartily agree. Getting enthusiastic is a little like learning to breathe. Nobody can tell you exactly how to do it, but without it you’re in big trouble. No one but you can discover the compelling purpose or exciting goal that ignites enthusiasm inside you.

Discover your strongest passion (and talent) by noticing the situations in which you are both:

Feeling really good about what you’re accomplishing

Attracting obvious appreciation and smart support

For specific ideas about exactly how to find the situations in which the role you play makes you feel enthusiastic and competent, read Find Your Strongest Life by Marcus Buckingham, which he intended for women yet works equally well for men, I have found.

"It is faith in something and enthusiasm for something that makes life worth living" --Oliver Wendell Holmes

1. Enthusiasm is born on the inside yet builds on the outside.

In the daily grind of life you can lose touch with what really matters. There are so many routine decisions to make, so many challenges to be met, and so many burdens to carry, that you may get dispirited and act out an unbecoming side of yourself. However, as you connect with the enthusiasm planted deep within you, you’ll feel it begin to grow and grow. Soon, you’ll be back on track. For example, just as smiling can improve your mood, some research shows that acting enthusiastic can make you feel more energized about a situation.

“Enthusiasm is the electricity of life” --Gordon Parks

Hint: It’s not the first mile of a long and arduous journey that gets to you — you’re excited about getting started. And it’s not the last mile — you’re thrilled about getting there. The miles that can drag you down are the long and tedious ones in the middle where you can’t see where you are coming from or where you are going.

“None are so old as those who have outlived enthusiasm” --Henry David Thoreau

2. Enthusiasm grows when you focus on opportunities and allies.

Several farmers in Pennsylvania were sitting in a café, complaining about the increasing cost of electricity and the unpleasant task of disposing of all the waste their cows generated. But the Waybright brothers and their brother-in-law, who ran Mason Dixon Farms, decided to quit complaining about all the manure the cows were generating, and to do some generating of their own — electricity. As you might guess, many of the other farmers initially laughed at the project and some called it “Waybright’s Folly” (and other even less flattering names). Yet the brothers were able to build a power generator that runs on methane gas produced from heated manure from their 2,000 cows. Generating much of their own power, they cut their annual electricity bill from $30,000 to $15,000. Some nearby farmers felt victimized by their problems and reached out to their Congressmen to complain about their miserable circumstances. But soon no one was laughing. The Waybright brothers were selling some of their excess power to their once-jeering neighbors. Farmers and agriculture ministers from around the world began to beat a path to Mason Dixon Farms. And their evolving, still-successful businesses and can-do spirit have been passed down through the family. 
Enthusiasm — with all the good things that go with it — comes when you turn your eyes from the problem or circumstance and focus on the solution and opportunity. Cash can buy, but it takes enthusiasm to sell – or otherwise sway or collaborate.

“Enthusiasm is the yeast that raises the dough” --Paul J. Meyer

3. Enthusiasm thrives around solution-oriented people.

Like smiling, enthusiasm is contagious. Unfortunately, negativism and pessimism are far more contagious. It is always easier to believe the worst than to work towards the best. It’s even worse when you’re tired, or have just suffered a severe setback. Seek out positive and competent individuals who also recognize their top talents and passions. Agree to give each other candid, concrete feedback – and a boost. Then enthusiasm is more likely to erupt, endure and be contagious.

4. Enthusiasm recharges itself on momentum.

Jerry Reed’s popular song of many years ago is apt: “When you’re hot, you’re hot!” William Shakespeare put similar sentiments into the mouth of Brutus in Julius Caesar: “There is a tide in the affairs of men, which, taken at the flood, leads on to fortune; omitted, all the voyage of their life is bound in shallows and in miseries.”

Hint: Celebrate your greatest victories by plunging into even greater challenges. Take full advantage of the momentum you gain with each hard-earned step.
b215274952f0d4a1ee71acedcce3dcba
https://www.forbes.com/sites/kareanderson/2015/09/01/share-in-ways-that-make-us-proud-to-participate/
Share In Ways That Make Us Proud To Participate
Share In Ways That Make Us Proud To Participate

Why, exactly, did the famous #ALSIceBucketChallenge raise over $100 million within a couple of months? And what spurred people to support the challenge by creating more than 17 million videos that were viewed more than 10 billion times on Facebook alone?

That contagious stunt, according to Shareology author Bryan Kramer, involved a four-step formula for “making something crowdworthy.” Inherent in his approach was creating a visual vignette in which others can quickly picture a role they would be proud to play. Former movie executive and Tell to Win author Peter Guber calls this crafting a purposeful narrative. Kramer dubs this “making something crowdworthy.” In keeping with the mutuality mindset of enabling others to feel proud to participate, Human to Human (H2H) advocate Kramer prefers the term “crowdworthy” rather than “going viral,” “because it puts the power into the hands of the people sharing it instead of celebrating the genius behind the person who created it.”

Here are Kramer’s four steps to scaling avid participation in “our” campaign:

Have a simple human concept

Create a structured plan

Invite people to the party

Apply the rules of improv

How The ALS Ice Bucket Challenge Followed These Steps

The ALS Ice Bucket concept was a startling, easy-to-replicate stunt – get videoed reacting to the chilling experience of someone dumping buckets of ice over your head. The plan was a call to action that evoked the scarcity nudge to respond quickly. The person featured in the video, who had ice dropped on their head, invited three people to emulate the ice bucket dump within twenty-four hours. Calling on three people, usually friends, to replicate the stunt visibly evoked the “social proof” nudge that made those people proud to participate. Plus, naming three people each time rapidly scaled participation, as a virtuous circle does. After all, who wants to feel left out? As in improv, participants could and did add both variations to their version of the stunt – and new charities that could benefit from their doing it. And, as Kramer noted, “Group challenges started emerging in addition to individuals: entire NFL teams were challenging other teams together!”

Perhaps I have primed the pump for you to read Kramer’s book, which offers over 300 actionable insights with relevant stories. Here are just two more tidbits from his responsive friends:

Facebook expert Mari Smith: "The three highest shareability factors for content are things that make people either laugh, cry or say ‘Ahh.’”

Twitter Power 3.0 author Joel Comm: “No matter where you are on social you have to find out where your engagement is. If you enjoy using it and your followers engage with you there, you should be there.”

Those who are generous, helpful givers, as Give and Take author Adam Grant and Bryan Kramer demonstrate, are most likely to have satisfying, accomplished lives with others. Kramer’s view on our near future: “Right now we’re seeing threads of this fabric being woven into the human economy, but we’re rapidly moving into a culture where sharing becomes whole cloth.”
6e79919c69a96fa5a3e0e20e4b59f634
https://www.forbes.com/sites/kareanderson/2015/10/28/how-employers-and-employees-collaborate-to-create-better-work/
How Employers And Employees Collaborate To Create Better Work
How Employers And Employees Collaborate To Create Better Work

Tours of Duty Spark Talent-Building and Engagement

Most any kind of organization can foster innovation, talent building and camaraderie by adopting the tour-of-duty approach used at LinkedIn and advocated by its founder Reid Hoffman, co-author of The Alliance. “Employees might embark on a rotational, transformational or foundational tour of duty,” notes The Optimistic Workplace author Shawn Murphy in describing LinkedIn’s culture. The three steps to such tours are:

Learning the basics about the company.

Participating in a transformational task, “such as starting a department.”

Being trained by diverse others in the firm to lead.

Specificity is Key: Self-Clarity to Mutual Understanding

“Employees must commit for the duration of each tour in a ‘mutually beneficial deal, with explicit terms, between independent players.’ Hoffman even suggests a term sheet explaining what the company expects and what it offers, whether an exchange of contacts or help finding a job elsewhere,” Bloomberg Business journalist Bryant Urstadt wrote in characterizing the specificity in this LinkedIn policy that boosts its popularity and power for all parties.

Facilitate the Networks That Keep Employees Engaged for Life

Any company can optimize the shared learning from tours of duty by having an aptly designed, enterprise social-enabled intranet, according to Enterprise Strategies founder and managing director Andy Jankowski: “The last thing you or your employees want after a successful tour is the loss of that shared experience. Enterprise social networks allow for key collaborations to happen digitally -- in a format that is stored, searchable, findable and reusable. These networks provide the needed glue and context for meaningful and efficient knowledge sharing and engagement. As well, these networks can extend beyond the company firewall, and thus maintain and grow networks of current and past employees who share a common experience and can often continue to contribute to the company.”

Greater Career Flexibility Can Help Companies and Employees

Relatedly, when a company policy supports employees and their bosses in collaborating on a zigzag career path (rather than a traditional corporate ladder) that serves both the career and lifestyle goals of the employee and the overall mission of the company, all parties win, suggest Deloitte game changer Cathy Benko and Molly Anderson in The Corporate Lattice. Like the tour-of-duty policy, it’s a mass career customization way to retain top talent too.

Mutual Mentoring Can Boost Self-Organizing and Innovation

An under-utilized, no-cost opportunity for cross-departmental learning and relationship building that can also spark serendipitous insights for innovation in a company is mutual mentoring. That can take several forms. For example, an aptly designed intranet could facilitate employees at all levels in finding colleagues with the exact expertise or experience from whom they could learn for an immediate, one-time or longer-term need or interest. Rather than just spurring reverse mentoring, where 20-something employees guide older workers in, say, digital technology, why not encourage mutual mentoring across your company? Also, some employees might seek colleagues with complementary talents who share a sweet spot of mutual interest that reflects a company need or possible opportunity. 
They might explore creating a self-organized team to tackle it, especially if the company encouraged such exploration, with guidelines as explicit as Reid Hoffman’s for tours of duty. What explicit employer/employee methods have you encountered that make work more productive, enjoyable and meaningful?
18a8d0180c42628460b3855864c380b8
https://www.forbes.com/sites/kareanderson/2016/02/28/how-to-turn-your-downside-into-a-distinctive-potent-upside/
How To Turn Your Downside Into A Distinctive, Potent Upside
How To Turn Your Downside Into A Distinctive, Potent Upside

Craft a Vivid Visual Image of Your Core Mission

Sabriye Tenberken wants to “kantharize the world.” Just like the wild chili plant in India, kanthari, that has a “fiery taste that makes you sit up and take notice when you bite into it...yet also has medicinal properties,” she is dedicated to sparking the interest of disabled or otherwise marginalized individuals (who are usually ignored or worse) to become able change makers. To do so, she co-founded Kanthari, a center with an experiential learning curriculum for them.

Get Specific And Selective About Your Core Services

Training at the center is restricted to “just” five much-needed and practical skills for would-be change makers: business modeling, dealing with government, raising funds, public speaking and working with the media.

Take The Often Ignored Last Step To Support Those You Serve

To boost the chances that participants will gain purposeful work when done, each one is connected to potential donor agencies so they are equipped to work, in their home country, on a project when they graduate. One graduate runs a mobile library for prisoners in Thailand. A blind student cares for bees in Uganda, selling their honey in Italy.

Play To Your Passion And Your Particular Strengths

Now that you have seen what she has accomplished, here is Sabriye Tenberken’s astounding backstory. She’s blind, with a sighted business partner, Paul Kronenberg. Prior to co-founding this center she rode, solo, on horseback through Tibet and became moved to set up a school for the blind. Currently she takes daily swims, defying many disabled stereotypes, thus modeling the behavior she seeks to instill in the students who have felt the stings of rejection. Read 38 other audacious, fascinating stories of “disabled” individuals who, against all odds, have created resourceful organizations that opened opportunities for marginalized individuals in You Can Be Smarter And Wiser by Meera Shenoy and Prasad Kaipa.

Piggyback On a Familiar Phrase or Name

Braille Without Borders is the name of the Tibetan school Sabriye co-founded to teach blind children skills, including vending and weaving, so they could be financially independent. This is transformative, expanding perception “borders” for the blind and the people around them, as Shenoy and Kaipa point out: “Tibetans believe in karma, that blindness is punishment for wrongs in a past life. Sabriye discovered children kept in dark rooms, tied to a bed, thus they couldn’t even walk.”

Make The Most of An Upside Created By Your Downside

Because Nipun Malhotra, who lives in India, is wheelchair-bound due to a rare congenital disease, arthrogryposis, he knows first-hand how difficult it is to go places. He could not drive nor get into others’ cabs or cars. Yet he turned many setbacks, from childhood onward, into opportunities to excel in some way: In school, where peers and teachers did not know how to “deal with a disabled boy,” he discovered he excelled in mathematics, calculating in his head rather than on paper. He was the most well-read kid in school, as he was excluded from most school activities, and ranked highly in several academic subjects, so “he began to be recognized as a nerd, rather than a disabled boy,” writes Meera Shenoy, also founder of Youth4Jobs. In college he continued to migrate from shunned outsider to valued advocate. He joined forces with the “Enabling Committee” of disabled students to advocate for wheelchair ramps and for JAWS software for the visually impaired. 
Next, via the Nipman Foundation he started, Nipun continued to attract allies. Together, they pressed government agencies to make voting, railways and other vital systems easier for the variously disabled to use.

Like Nipun, wheelchair-bound, short, polio-affected Nirmal Kumar turned one of his major impediments into an opportunity to start a business that served others who also had trouble with conventional transportation. After being overcharged for an autowallah (cab) ride, and other unfair or unsafe transportation experiences, he founded G-Auto to provide “friendly and reliable auto service.” Key to his success was a no-haggling rule, and enabling his drivers to have incentives like medical insurance. Also he supported his drivers in further service differentiation: providing newspapers, magazines, water, mobile chargers – and a number that customers can call with complaints. And he keeps evolving the value his drivers can offer. Hint: mutuality matters. Nirmal told Meera, “My aim is to keep all stakeholders happy.” His success has attracted stiff competition from multinational Uber, which is emulating his G-Auto business model, dubbed Uberauto. Yet, because of Nirmal’s smart and wise approach to serving all stakeholders better and better over time -- not less and less -- which firm do you believe might ultimately win in India?

Our Possible Conclusion: If these extraordinary individuals can overcome extreme physical limitations and societal prejudice to serve the greater good and create successful businesses, then perhaps all of us can reflect on how we can translate our difficult experiences into smarter, wiser approaches for greater impact.