| id | url | title | contents |
|---|---|---|---|
9baf936a6ef35ec68ca5fbc6b6b2b7c8 | https://www.forbes.com/sites/kaeliconforti/2021/01/19/the-worlds-most-picturesque-golf-courses-according-to-instagram/ | The World’s Most Picturesque Golf Courses, According To Instagram | The World’s Most Picturesque Golf Courses, According To Instagram
Northern Ireland's Royal County Down golf club took the top spot as the world's most Instagrammable golf course. ASSOCIATED PRESS
As of this writing, the Centers for Disease Control and Prevention (CDC) is recommending a pause in travel due to the Covid-19 pandemic, so consider this story as a way to plan for future trips.
Traveling vicariously through social media apps like Instagram continues to be a trend as more and more people stay home in an effort to help curb the spread of Covid-19. Surprizeshop, a popular ladies' golf company based in the U.K., recently revealed its list of the 30 most Instagrammable golf courses around the world, all great places to add to your post-Covid bucket list and visit whenever it's safe to travel again.
Here’s a look at the results for the 30 most Insta-worthy golf courses in the world, according to Instagram users:
1. Royal County Down Golf Club in Northern Ireland, U.K., with 89,293 images
2. Augusta National Golf Club in Georgia, U.S., with 71,132 images
3. Quinta do Lago in Almancil, Portugal, with 70,927 images
4. TPC Sawgrass in Florida, U.S., with 45,205 images
5. Old Course in St. Andrews, Scotland, with 35,731 images
6. Bandon Dunes Golf Resort in Oregon, U.S., with 22,648 images
7. PGA National Golf Club in Florida, U.S., with 22,319 images
8. Bethpage Black Course in New York, U.S., with 18,711 images
9. Pebble Beach Golf Links in California, U.S., with 16,254 images
10. Whistling Straits Golf Course in Wisconsin, U.S., with 15,184 images
11. Trump International Golf Links in Scotland, U.K., with 13,972 images
12. Royal Portrush Golf Club in Northern Ireland, U.K., with 11,989 images
13. Shadow Creek Golf Course in Nevada, U.S., with 10,486 images
14. Pinehurst Resort in North Carolina, U.S., with 8,635 images
15. Barnbougle Dunes Golf Links in Tasmania, Australia, with 7,667 images
16. Trump Turnberry Ailsa Golf Course in Scotland, U.K., with 7,354 images
17. Le Golf National in Guyancourt, France, with 7,248 images
18. Cypress Point Club in California, U.S., with 7,139 images
19. Spyglass Hill Golf Course in California, U.S., with 6,918 images
20. Cabot Links Golf Resort in Inverness, Canada, with 6,829 images
21. Yas Links Abu Dhabi in the U.A.E., with 6,412 images
22. Teeth of the Dog Golf Course in La Romana, Dominican Republic, with 5,502 images
23. Cabo del Sol Golf in Cabo San Lucas, Mexico, with 5,236 images
24. Leopard Creek Golf Club in Mpumalanga, South Africa, with 4,435 images
25. Carnoustie Golf Links in Scotland, U.K., with 4,385 images
26. Skibo Castle in Scotland, U.K., with 3,772 images
27. Monte Rei Golf Club & Country Club in Vila Nova de Cacela, Portugal, with 3,401 images
28. St. Andrews Links in Scotland, U.K., with 3,325 images
29. Castle Stuart Golf Links in Scotland, U.K., with 2,588 images
30. The Olympic Club in California, U.S., with 2,098 images
To get the results, the folks at Surprizeshop created a list of the most well-known golf courses around the world, then ranked them from most- to least-Instagrammed by counting the number of images associated with the hashtags corresponding to each course. While the study showed the majority of the most Instagrammable golf courses were located in the United States and the United Kingdom, the contenders rounding out the top 30 also included courses in Portugal, Australia, France, Canada, Abu Dhabi, the Dominican Republic, Mexico and South Africa.
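For readers curious about the mechanics, the ranking step itself is simple once the hashtag image counts have been collected. The sketch below is purely illustrative and assumes the counts are already in hand (the sample figures are taken from the list above); Surprizeshop has not published its data-collection method or code.

```python
# Illustrative sketch only: rank courses from most- to least-Instagrammed,
# given hashtag image counts collected beforehand (sample values from the list above).
hashtag_counts = {
    "Royal County Down Golf Club": 89_293,
    "Augusta National Golf Club": 71_132,
    "Quinta do Lago": 70_927,
    "TPC Sawgrass": 45_205,
    "Old Course, St. Andrews": 35_731,
}

# Sort by image count, highest first.
ranking = sorted(hashtag_counts.items(), key=lambda item: item[1], reverse=True)

for place, (course, images) in enumerate(ranking, start=1):
    print(f"{place}. {course} with {images:,} images")
```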
Golf has remained a relatively safe and popular activity throughout the pandemic thanks to players’ ability to social distance while outdoors and proactive efforts by golf courses to keep everything clean between rounds. If you do decide to book a golf getaway anytime soon, be sure to follow the health and safety rules and regulations where you’re visiting, wear a mask over your mouth and nose whenever you’re in public and do your best to stay at least six feet from anyone outside your group.
|
09f4cb7ecd76183e05630ad9a6e077e3 | https://www.forbes.com/sites/kaeliconforti/2021/01/21/planning-a-trip-to-abu-dhabi-heres-what-you-need-to-know/ | Planning A Trip To Abu Dhabi? Here’s What You Need To Know | Planning A Trip To Abu Dhabi? Here’s What You Need To Know
Sheikh Zayed Grand Mosque in Abu Dhabi, United Arab Emirates. getty
International visitors can once again visit Abu Dhabi, one of the most exciting cities in the United Arab Emirates (U.A.E.), which officially reopened to tourism on December 24, 2020. The reopening comes with new health and safety regulations, including Covid-19 testing before and during your trip, a mandatory 10-day quarantine for visitors arriving from certain countries and strict cleanliness and hygiene standards under the new Go Safe Certification program.
“We are delighted that international tourists will soon be able to explore the diverse offerings of our beautiful emirate once again, knowing that Abu Dhabi is one of the safest travel destinations in the world,” said Undersecretary of the Department of Culture and Tourism H.E. Saood Abdulaziz Al Hosani. “I am proud of the agility and innovation that DCT [Department of Culture and Tourism] Abu Dhabi has shown this year as we have reimagined our approach to culture and tourism, to ensure we can welcome global travelers to our vibrant emirate while protecting the wellbeing of our community and tourists, which remains our top priority.”
If you’re thinking of visiting Abu Dhabi anytime soon, here’s what you need to know to ensure a safe and healthy trip—for you and those who live there.
What You’ll Need To Enter Abu Dhabi
According to Abu Dhabi’s tourism website, all travelers must present negative results from a Covid-19 PCR test administered within 48, 72 or 96 hours of travel (depending on where you’re coming from), take a second Covid-19 PCR test upon arrival at Abu Dhabi International Airport and wait 90 minutes for the results to be determined. If you’re going to be staying in Abu Dhabi for at least six days, you’ll need to take another Covid-19 PCR test on day six—the same goes for day 12 if you’re staying at least 12 days.
What happens next depends on which country you're arriving from. Those traveling from a country on Abu Dhabi's Green List can skip the mandatory 10-day quarantine that otherwise follows once you leave the airport. As of January 25, those countries include Australia, Bahrain, Brunei, China, Falkland Islands, Greenland, Hong Kong, Maldives, Mauritius, Mongolia, New Zealand, Oman, Qatar, Saudi Arabia, Singapore, St. Kitts and Nevis and Thailand. Check the link above, which is updated regularly with the latest information.
Anyone not traveling from one of the aforementioned countries will need to quarantine for 10 days, even with negative test results. Abu Dhabi's not taking any chances, so you'll be expected to register with the official contact tracing system and wear an electronic wristband throughout your 10-day stay at your hotel or other accommodations determined by the medical authorities. Once you take another Covid-19 PCR test on day six (and day 12 if you're staying longer), the wristband will be removed and you'll be free to enjoy the rest of your time in Abu Dhabi.
Additional Travel Considerations
Once in Abu Dhabi, mask-wearing is mandatory in all public places, with steep fines in place for anyone who doesn’t follow the rules. Remember to wash your hands thoroughly for at least 20 seconds and social distance by staying at least six feet from anyone outside your group.
While travel insurance isn't required to enter the emirate, travelers are strongly advised to carry a policy that covers them in case they contract Covid-19. Now through March 31, 2021, all flights booked with Etihad Airways include Covid-19 Global Wellness Insurance Cover, so you'll have one less thing to worry about. Coverage for medical expenses lasts up to 31 days from the start of your trip, so if you end up testing positive at some point while you're away, contact Etihad's Covid-19 Assistance Team to make arrangements. The only thing that's not reimbursed under the plan? Covid-19 PCR tests.
Abu Dhabi’s Go Safe Certification program has been particularly successful, with more than 95 percent of hotels, four Yas Island theme parks, 33 malls, two cinemas and the Yas Marina Circuit passing inspections with flying colors since its inception in June 2020. While the goal is to reach 100% certification across tourism, commercial and retail services, it’s safe to say the joint venture by DCT Abu Dhabi, the Abu Dhabi Department of Economic Development, Department of Municipalities and Transport, Etihad Airways and several other partners is off to a good start.
|
52d12c5b3758a593350019e37ffbafb2 | https://www.forbes.com/sites/kaeliconforti/2021/01/22/hipcamp-reveals-the-best-camping-experiences-in-the-us/ | Hipcamp Reveals The Best Camping Experiences In The US | Hipcamp Reveals The Best Camping Experiences In The US
The Coop Cabin was voted the best Hipcamp to visit in Alaska this year. Photo courtesy of Hipcamp
As of this writing, the CDC is recommending a pause in travel due to the Covid-19 pandemic, so consider this story as a way to plan for future trips.
As the Covid-19 pandemic continues to rage on in the United States, camping has become a great way for couples, families and solo travelers to take a much-needed break from compact city apartments and get back to nature in a safe, outdoorsy environment while staying away from others.
Hipcamp—a popular Airbnb-like website where you can find and book anything from tent camping and luxury glamping experiences to unique accommodation options like treehouses, cabins and RV parks—recently conducted its annual Best Hipcamps to Visit survey, allowing readers to vote for their favorites in each state (except for North Dakota and Rhode Island). Here’s a look at the winners and their starting prices, listed below by state:
Alabama: Parksland Retreat, from $25 per night.
Alaska: The Coop Cabin, from $79 per night.
Arizona: Jason M.’s Land, from $55 per night.
Arkansas: Red Fern, from $25 per night.
California: Madrone Tree Hill, from $48 per night.
Colorado: Glen Isle Resort, from $35 per night.
Connecticut: Mickelberry Forest Gardens, from $45 per night.
Delaware: The Draper Property, from $55 per night.
Florida: 4A River Camp, from $50 per night.
Georgia: RV camping at Brooks Lake, from $45 per night.
Hawaii: 9thWAVe #OffGrid on Maui, from $150 per night.
Idaho: Windbreak Farm Alpacas, from $40 per night.
Illinois: Wine Trail Wilderness, located near several wineries, from $25 per night.
Indiana: Black Walnut Grove at Happy Hollow Homestead, from $30 per night.
Iowa: Tipi Camping at Walking Stick Adventures Farm, from $96 per night.
Kansas: C2T Ranch on the Saline River, from $30 per night.
Kentucky: Hidden Lake Farm, from $50 per night.
Louisiana: Cajun Retreat Campsites, from $47 per night.
Maine: Howling Goat Farm and Grange, from $25 per night.
Maryland: Willet Family Farm, from $25 per night.
Massachusetts: “Find Your Muse” Camping at Rosy Goat Farm, from $48 per night.
Michigan: Camp Dubonett, from $45 per night.
Minnesota: Gilles Family Dairy & Woodland’s Wooden Tent in the Woods, from $56 per night.
Mississippi: Rest, Relax and Rejuvenate, from $30 per night.
Missouri: Hummingbird Hollow Farm Sanctuary, from $25 per night.
Montana: Devon Allan T.’s Land, also called the Lee Metcalf Wildlife-ELUV8 Complex, from $25 per night.
Nebraska: Pfanny’s Farm, from $20 per night.
Nevada: Stagecoach Acres, from $30 per night.
New Hampshire: Christopher S.’s Land, Franconia Forest, from $30 per night.
New Jersey: Neil S.’s Land, where you can stay in a fully equipped RV on 35 acres of preservation land, from $189 per night—a four-night minimum stay is required.
New Mexico: Enchanted Circle Campground, from $55 per night.
New York: The Heron Campground, from $30 per night.
North Carolina: Dark Ridge Hide Out, from $20 per night.
Ohio: Fruitdale Farm, from $25 per night.
Oklahoma: Sky Valley Acres at the Corona Westmuckett Homestead, from $15 per night.
Oregon: Pasture Camping on Boden Acres, from $33 per night.
Pennsylvania: Scenic Turtle Ridge, from $59 per night. Note that only Pennsylvania residents able to show a valid driver’s license are allowed to stay during the pandemic.
South Carolina: Table Rock Tea Company, from $39 per night.
South Dakota: Uncle B’s Farm, from $29 per night.
Tennessee: Belle and Beau Acres, from $38 per night.
Texas: Seco Ridge Campgrounds, from $30 per night.
Utah: Genny L’s Land—home to a sanctuary for alpacas, goats, chickens and horses—from $35 per night.
Vermont: Forested Tent Sites at Birdhous, from $28 per night.
Virginia: Blackwater Birds and Bees, from $49 per night.
Washington: Heaven on Earth, from $55 per night.
West Virginia: The Byrds Nest on the River, glamping from $65 per night.
Wisconsin: Beaver Haven at Free Spirit Land, from $24 per night.
Wyoming: Jim Moss Arena Campground, from $25 per night.
To get the results, Hipcamp staff analyzed all bookings, ratings and reviews made by users to determine the best-performing listings. Using those as the official nominees from each state—minus North Dakota and Rhode Island—the Hipcamp community was then invited to vote for their favorites.
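As a rough illustration of that two-step process, the nomination step amounts to keeping the top-scoring listing in each state and handing those nominees to a public vote. The sketch below is hypothetical: Hipcamp has not published its scoring formula, so the field names, sample numbers and weighting are all assumptions.

```python
# Hypothetical sketch: nominate the best-performing listing per state
# from booking and rating data; winners were then chosen by community vote.
listings = [
    {"state": "Alaska", "name": "The Coop Cabin", "bookings": 420, "avg_rating": 4.9},
    {"state": "Alaska", "name": "Some Other Cabin", "bookings": 150, "avg_rating": 4.6},
    {"state": "Oregon", "name": "Pasture Camping on Boden Acres", "bookings": 390, "avg_rating": 4.8},
]

def score(listing):
    # Assumed weighting -- Hipcamp's real formula is not public.
    return listing["bookings"] * listing["avg_rating"]

nominees = {}
for listing in listings:
    state = listing["state"]
    if state not in nominees or score(listing) > score(nominees[state]):
        nominees[state] = listing

print({state: entry["name"] for state, entry in nominees.items()})
# {'Alaska': 'The Coop Cabin', 'Oregon': 'Pasture Camping on Boden Acres'}
```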
The winners showcase a diverse blend of U.S. camping experiences, including RV and tent campsites, glamping, campsites close to wineries and on working farms and animal sanctuaries, indicating that people are looking for more than just the chance to sleep out under the stars when they go camping. For now, camping in any style makes a great option for those who want to get some fresh air, be safe and enjoy the country’s wide open spaces. If you decide to give it a try, remember to wear your mask over your nose and mouth whenever you’re in public and stay at least six feet from others.
|
bb12e2201d783da377ddbff67712c361 | https://www.forbes.com/sites/kaifalkenberg/2011/03/14/attorney-consulted-hb-gary-about-spiking-defamatory-content-by-any-means-necessary/ | Attorney Consulted HB Gary About Spiking Defamatory Content "By Any Means Necessary" | Attorney Consulted HB Gary About Spiking Defamatory Content "By Any Means Necessary"
Image via Wikipedia
Much has been written about Anonymous's posting of 71,800 e-mails from security firm HB Gary Federal on the Internet (viewable here). But so far no one has focused on the intriguing e-mail sent by New York lawyer Sean Kane last April, seeking HB Gary's assistance on behalf of a client who claimed he had been defamed online. Kane sought information on "consultants" who could help his client spike the defamatory information from the Internet.
Kane's e-mail is one of dozens authored by attorneys that are now part of the trove of HB Gary e-mails readily searchable on the site Anonymous has created. The mass e-mail disclosure has already led to the resignation of HB Gary Federal's CEO Aaron Barr and to a disciplinary complaint against three partners at Hunton & Williams. That was followed by a letter from over a dozen members of Congress calling for an investigation into Hunton & Williams's alleged "dirty tricks campaign" revealed in the HB Gary e-mails.
New York IP lawyer Sean Kane is one of many other attorneys whose e-mails were compromised in this scandal. In the e-mail (available here), Kane tells HB Gary founder Greg Hoglund that a client of his wants to have defamatory content removed from a website "by any means necessary". Kane writes that he advised his client of his legal options but the client wants to pursue more "aggressive" means. He adds that he told his client that "there are consultants that do this type of work" but that he does not have contact with them and that not everything they do "is on the up and up" so he could not retain them. Kane says he "assumes this is not something that" HB Gary does but asks if they have some information about where Kane's client can turn. The Anonymous database does not appear to include any e-mail response from HB Gary execs to Sean Kane. Kane did not respond to multiple requests for comment.
It's not altogether surprising that people are often unsatisfied with pursuing these matters through legal means. My colleague Andy Greenberg tells me that foreign governments have sometimes employed hackers to remove unfavorable information. But until now I hadn't heard of paying hackers to "resolve" defamation claims.
While the Kane email is intriguing, it's the publication of HB Gary's attorney-client privileged e-mails that has far broader implications. The database includes dozens of e-mails between HB Gary execs and their outside counsel that are labeled attorney-client privileged. Each of them has the obligatory disclaimer at the bottom advising that the e-mail is "intended only for the personal and confidential use of the addressee(s) named above". And yet, not only are the e-mails now viewable by anyone -- they even come up on a Google search. There are numerous privileged e-mails between HB Gary Federal executives and their trademark counsel. (See, for example, here.) There are also e-mails between HB Gary Federal and a firm they retained to explore a sale of the company.
The public availability of all of these e-mails raises a larger question in my mind: what is the future of attorney-client privilege in this post-Anonymous world? Lawyers always operate on the assumption that their e-mails are potentially discoverable in litigation and might be reviewed by some unfortunate low-level associate. But the notion that privileged e-mails could make their way to the Internet in a searchable database accessible to Google seems to mark a whole new era of lawyering.
I'm interested to hear if others think the same.
|
abb9ed9a3183a8d69a0e0daf1fa92147 | https://www.forbes.com/sites/kaifalkenberg/2011/11/18/one-idea-for-reducing-health-care-costs-keep-non-emergencies-out-of-the-er/ | One Great Idea for Reducing Health Care Costs: Keep Non-Emergencies Out of the ER | One Great Idea for Reducing Health Care Costs: Keep Non-Emergencies Out of the ER
As part of Forbes.com's Human Ingenuity series, we ask staff writers, contributors and experts to weigh in with solutions to some of the nation's biggest problems. This month's focus is on reforming health care. We asked for responses to the following question: What's one great idea for reducing health care costs that should be replicated across the country? My response below is the first in a series of posts that will be weighing in on this critical issue.
A significant share of our skyrocketing health care spending goes to unnecessary emergency room visits. A Rand Corp. study last year found that we spend $4.4 billion annually on people who use the ER for routine, non-urgent care. Though studies vary on the percentage of inappropriate ER visits (the CDC says it's 8% but Health Affairs puts it at 27%), no one disputes that these patients could get better and less expensive care elsewhere.
To crack down on "frequent flyers", as patients who repeatedly use the ER are known, some states have adopted aggressive measures. Washington state is trying to limit its Medicaid recipients to three non-urgent emergency room visits per year. After that, they have to pay for the visits out of their own pockets. (A judge blocked implementation of the plan last week but policy makers say they'll find a legal way to do it.) Florida is tackling the problem by seeking to charge Medicaid patients $100 each time they use the ER for routine care.
These efforts may keep costs down in the short term by keeping folks with routine ailments out of the ER. But they don't address the larger problem of redirecting the patients to primary care physicians that will coordinate their care and keep them healthier.
My story profiling the new Center for Health Care Delivery Science at Dartmouth College highlights one pilot project addressing precisely that concern. A team of physicians from Dartmouth-Hitchcock in Lebanon, N.H., and Saint Francis Care in Hartford, Conn., are spearheading the project as part of their participation in the Center's newly launched graduate program. The team is first trying to identify the reasons why patients with non-emergent conditions end up in the ER. Once they've processed that data, they intend to devise and implement recommendations for redirecting those patients to more appropriate, lower-cost care.
The team has an innovative approach to dealing with patients who end up in the ER because they don't have a primary care doctor, or have one but choose to go to the ER instead. Their aim is to have ER staff become more engaged in the long-term care of these frequent flyers. To do this, they hope to implement a two-pronged system. One part of that is making sure all patients who pass through the ER have a primary care facility they can go to for routine care. So even if they are in the ER for a real emergency, they will be referred to an appropriate primary care facility or patient-centered medical home before they leave. For those who come to the ER with non-emergency ailments, the docs hope to implement a system that would give those patients two options. They could wait however long it takes to see an ER doctor (likely several hours), or the ER staff would set them up with an appointment with a primary care physician within 24 hours. The Dartmouth team believes many patients will actually prefer the latter choice -- a decision that studies show can save up to $900 per patient.
The project is still in its early stages -- we'll know whether it achieves its objectives in about a year. If successful, the team hopes to disseminate these best practices to emergency rooms nationwide.
|
50026693e7369ec498fb4bc195d09931 | https://www.forbes.com/sites/kaifalkenberg/2013/01/10/fda-takes-action-on-ambien-concedes-women-at-greater-risk/ | FDA Takes Action on Ambien; Concedes Women at Greater Risk | FDA Takes Action on Ambien; Concedes Women at Greater Risk
Zolpidem (Photo credit: Wikipedia)
The FDA announced this morning that it is recommending the bedtime dose of Zolpidem (brand name Ambien) be lowered due to the risk of next-morning impairment. In a press release issued this morning, the FDA says it is recommending the change "because new data show that blood levels in some patients may be high enough the morning after use to impair activities that require alertness, including driving." The FDA is requiring manufacturers of Ambien, Ambien CR, Edluar and Zolpimist to lower the recommended dose "because use of lower doses of Zolpidem will result in lower blood levels in the morning."
In an article published in Marie Claire this October, I wrote about the dangerous side effects of Ambien -- particularly sleep driving, a form of sleep walking in which users get into their cars and drive without consciously intending to do so. The article profiled a number of tragic incidents resulting from this dangerous side effect. It also raised concerns that, according to recent studies, female users of Ambien were at greater risk.
Today's announcement by the FDA substantiates those concerns. According to the release: "Women appear to be more susceptible to this risk [of next day impairment] because they eliminate zolpidem from their bodies more slowly than men." In a landmark shift, the FDA is now recommending that manufacturers lower the recommended dose of zolpidem for women from 10 mg to 5 mg for immediate release products (Ambien, Edluar and Zolpimist) and from 12.5 mg to 6.25 mg for extended release products (Ambien CR).
For more on the dangers of Ambien side effects, read my Marie Claire story here and my post on Kerry Kennedy's Ambien-related car crash.
Update: On a press briefing call earlier today, Dr. Ellis Unger of the FDA's Office of Drug Evaluation clarified the rationale for the FDA's decision to lower the recommended Zolpidem dosage for women. He explained that it was not based on any one specific incident but on an accumulation of knowledge over a period of time, including recent driving simulation and other studies. By lowering the dosage, Dr. Unger explained, the FDA hopes to reduce the incidence of not just "next day impairment" but also of "sleep driving," the side effect I highlighted in my recent Marie Claire story. I'll link to the full transcript of the FDA call once it's released.
|
0f53fdc31a90b7b4436f0c0cf3e87514 | https://www.forbes.com/sites/kaifalkenberg/2013/08/09/how-to-get-your-teen-to-unplug-and-like-it/ | How To Get Your Teen To Unplug (And Like It) | How To Get Your Teen To Unplug (And Like It)
No computer? No internet? Awesome! (Photo credit: photosteve101) This is a guest post by Ethan Rosenberg, a rising senior at the Abraham Joshua Heschel School in New York City.
The constant battle between parents and teens these days is not over using the car or staying out late or even talking on the phone. It’s about time spent online. And here’s the thing – as a teen, I’ve realized this time the parents are actually right. How’d I reach this conclusion? Was it nagging? Was it threats? Nope. Four weeks without internet. I spent the past month as a counselor at a summer camp in rural New Hampshire. Completely off the grid. And guess what? I loved it.
Everybody does it, all the time
A 2012 Pew study concluded that 94% of teens use Facebook, 31% use Twitter, and 28% use Instagram. And we’re using these social networks for an average of 1.6 hours per day. At my high school in New York City, most of my classmates, instead of talking face to face in the hallway in between classes, are absorbed in their phones texting, Snapchatting, or posting to Instagram. Obsessed with their online “social life,” they avoid real world interactions with their peers -- even if they’re standing literally three feet away.
It’s really easy, and really satisfying
Why are teens so consumed with their virtual selves? It provides an instant reward. Within minutes of posting a picture on Instagram, a user can easily get more than 20 “likes.” This has become a substitute for the kind of real life feedback that used to make a teen feel “popular.” Sites like Facebook and Instagram act like a virtual high school cafeteria. At my school, I spoke to a couple of middle schoolers about their use of social media. Some had Facebook, many had Snapchat, and nearly all were on Instagram. When I asked one girl how many Snapchats she sends a day on average, I thought, “twenty or thirty max.” But wow, since getting a Snapchat seven months earlier, she had sent 24,000 snaps, averaging 114 per day. So much for actually communicating with people face to face.
We’re literally addicted
Studies show that spending time on social media can be as addictive as drugs – and I believe it. According to research by Dr. Delinah Hurwitz, a California State University at Northridge professor of psychology: “People become hooked [on social media] because endorphins, a chemical produced by the body that acts as a sedative, rush[] through th[e] person’s brain and body every time someone responds to their post.”
Me, Unplugged
This summer, I left New York to be a camp counselor at a camp in Piermont, NH. It’s in the middle of the White Mountains, twenty minutes from the nearest cell reception. And no Internet. My initial reaction? Oh crap. No texting. No emailing. No Facebook.
But guess what? I loved it. It was the greatest thing ever. I began to realize how I actually didn’t miss looking at random pictures and stupid (no offense, but they are) posts by people I kinda sorta might of once met at that party. Or the days sucked up by the Internet where somehow three hours just disappears and all I did was look at Facebook and troll useless Youtube videos.
What’d I do instead? I played Ultimate Frisbee and exchanged stories with friends. I put on ridiculous skits in the dining hall. I taught kids how to sail. I hiked mountains. I scared my campers with ghost stories. I played darts long into the night. I invented new games. All of this, without social media. It wasn’t virtual. It was real. And it was awesome.
My advice to parents? Don’t nag your kids about their constant tweeting or snapchatting. Don’t just tell them to get offline. That won’t work. What will? Give them ideas for what to do instead. Ride a bike. Go to camp. Rent a sailboat. Experience the world. And don’t put it on Facebook or Instagram. Tell your friends in person, when you get back.
|
0f396141e7ddbfcc271c62c51ae1682f | https://www.forbes.com/sites/kaipetainen/2011/03/20/wharton-wallops-stock-competition/ | Wharton Wallops Stock Competition | Wharton Wallops Stock Competition
Judges Listen to the Final Round
Imagine you could be a fly on the wall and watch the best undergrad investment clubs compete in a stock pitch competition. What stocks would they pitch? It's not THE Final Four, but each year 24 schools gather from around the country and come to the Ross School of Business for the annual Michigan Interactive Investments Undergraduate Investment Conference. Schools break out into 4 major groups and the judges choose the top stock from each group for the final round. Last year, the Ross School of Business won the competition, but this year Wharton won the $3000 prize.
The conference was led by Lindsey Nurenberg, the Conference Director, and Arthur Wong, president of MII. Nurenberg noted:
The third annual Undergraduate Investment Conference was a phenomenal success. We saw an unprecedented level of quality in this year’s investment recommendations, with the winning team presenting a special situation post-bankruptcy opportunity. The presentation of SemGroup (SEMG) by the University of Pennsylvania capitalized on the company’s reorganization as well as likely conversion into an MLP structure. We additionally had an excellent line up of speakers, including Marc Lasry of the Avenue Capital Group as well as A. Rama Krishna of ARGA Investment Management. Our first and final round judging panels were comprised of industry professionals from firms including Morningstar, Seneca Capital, Grosvenor Capital, Lake Capital, and William Blair. Overall, the conference brought out attractive investment ideas as well as provided an excellent opportunity for undergraduate finance students to network with each other and industry professionals. I am looking forward to seeing the success of this conference continue into its fourth year.
The final four stock pitches were:
Wharton -- University of Pennsylvania, Semgroup Corp (SEMG)
Stock pitch gurus: Timothy Liu, Matthew Martos, James Peng and Chi Song
Washington University in St. Louis, James River Coal Company (JRCC)
Stock pitch gurus: Philip Thomas, Alik Ulmasov, Zack Whitacre, Dirk Doebler
Dartmouth College, The Mosaic Company (MOS)
Stock pitch gurus: Kunal Arya, Pierre Guo
George Washington University, NetEase.com Inc. (NTES)
Stock pitch gurus: Jarren Smith, Andrew Pauker, Jonathan Cohen
I was a judge in the first round, and although they didn't make it into the final round, I felt that the group from Carnegie Mellon (RIO) and the Ross School of Business (KEM) did a great job as well.
Carnegie Mellon, Rio Tinto (RIO)
Stock pitch gurus: Kristin Carew, Daniel Griffith, Tian Wu
Ross School of Business, Kemet (KEM)
Stock pitch gurus: Alan Xie, Michael Hernandez, David Paolella
For a complete listing of the stocks pitched, take a look at 'Stock Pitch Madness at Michigan'.
Kai Petainen's views on the market and stocks are his alone, and do not reflect the views of the Ross School of Business or the University of Michigan. Kai holds RIO in his Material/Staples portfolio, and he holds KEM in his Info Tech/Telecom and Diversified Portfolios. KEM is also held by the Student Managed Fund @ the Ross School of Business. Kai is a MFolio master at Marketocracy and is featured in Matthew Schifrin’s book, The Warren Buffetts Next Door.
|
85603fdc36c377681807e66f54b0e70e | https://www.forbes.com/sites/kaipetainen/2011/10/13/ohio-wesleyan-stock-outperforms-in-michigan/ | Ohio (Wesleyan) Stock Outperforms in Michigan | Ohio (Wesleyan) Stock Outperforms in Michigan
Fall, Outside of the Ross School of Business -- photo by Kai Petainen
Earlier this year an undergraduate stock competition was held at the Ross School of Business and students competed for $3000 in cash at the Michigan Interactive Investments Undergraduate Investment Conference. This weekend the students will meet once again and pitch their ideas. Although students are not judged on the stock returns, looking back, what schools/stocks had the best returns?
(See previous articles, 'Wharton Wallops Stock Competition' and 'Stock Pitch Madness at Michigan')
In March, Wharton won the competition and their pitch of SemGroup Corporation (NASDAQ:SEMG) has struggled with a return of -35% (calculated from Mar. 18th to Oct. 12th).
Wharton wasn’t alone: Washington University in St. Louis' James River Coal (NASDAQ:JRCC) fell 66% and Dartmouth College’s Mosaic (NYSE:MOS) fell 26%. Of the finalists, George Washington University is leading the way with NetEase.com (NASDAQ:NTES) and a return of -1%. For comparison, the S&P 500 fell 5% and the S&P Midcap fell 11% over that same time period.
Perhaps some attention should be paid to the schools that weren’t in the finals but had stellar returns. Ohio Wesleyan pitched Yamana Gold (NYSE:AUY), and it’s leading the way at 24%. Following Ohio Wesleyan, the University of Toledo’s pitch of The Finish Line (NASDAQ:FINL) has returned 21%, and NYU’s Ralcorp Holdings (NYSE:RAH) is tied with Syracuse’s Range Resources Corporation (NYSE:RRC) at 20%.
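Those figures are simple holding-period returns over the same window. For anyone who wants to reproduce the arithmetic, here is a minimal sketch; the prices are placeholders chosen only to match the percentages above, not the actual March 18 or October 12 quotes.

```python
# Holding-period return: (end_price - start_price) / start_price.
# Prices are placeholders, not actual Mar. 18 / Oct. 12 market data.
def holding_period_return(start_price, end_price):
    return (end_price - start_price) / start_price

picks = {
    "AUY (Ohio Wesleyan)": (12.00, 14.88),   # roughly +24%
    "SEMG (Wharton)": (30.00, 19.50),        # roughly -35%
}
benchmark = holding_period_return(1300.00, 1235.00)  # placeholder S&P 500 levels, -5%

for name, (start, end) in picks.items():
    r = holding_period_return(start, end)
    print(f"{name}: {r:+.0%} vs. S&P 500 {benchmark:+.0%}")
```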
This weekend the stock pitch competition occurs once again, as it’ll be held in October instead of March. If the past could be a predictor of the future, this time I’ll keep a closer eye on the stocks chosen by Ohio Wesleyan, Toledo, NYU and Syracuse.
To the students – best of luck! And if you’re trying to get a few tips for stock pitches, check out the Back To School Stock Tips.
Wharton's pitch:
George Washington University pitch:
Kai Petainen's views on the market and stocks are his alone, and do not reflect the views of the Ross School of Business or the University of Michigan. Kai teaches a class on quant screening, F334 -- Applied Quant/Value Portfolio Management, at the Ross School of Business. Kai is a MFolio master at Marketocracy, and is featured in Matthew Schifrin’s book, The Warren Buffetts Next Door.
|
b8ac18b3332436d86f0550f423065f22 | https://www.forbes.com/sites/kaitlynmcinnis/2020/06/05/7-black-owned-hotels-across-the-us-to-visit-on-your-next-road-trip/ | 7 Black-Owned Hotels Across The U.S. To Visit On Your Next Road Trip | 7 Black-Owned Hotels Across The U.S. To Visit On Your Next Road Trip
The Ivy Hotel in Mount Vernon, Baltimore Jackson Photography
When it comes to accommodations, travelers have endless options to explore when planning a trip. From hostels and Airbnb to luxury hotels and family-run B&Bs, your choice of lodging can easily define your trip, and being selective about where you choose to stay can have a big impact. Supporting Black-owned hotels is an easy way to help African American hospitality entrepreneurs flourish while encouraging more diversity in the greater hospitality industry.
According to Davonne Reaves, Hospitality Strategist at The Vonne Group, less than two percent of hotels across the country are currently owned by Black people. That said, Reaves is confident that number will continue to rise with the support of American and international travelers.
Looking to support diversity in the tourism industry? From cozy boutique hotels in Ohio to luxury spas in Virginia, below are a handful of Black-owned hotels and resorts to visit on your next stateside road trip.
The Ivy Hotel in Mount Vernon, Baltimore
The tea room at the Ivy Hotel in Mount Vernon, Baltimore Courtesy, The Ivy Hotel
Owned by Eddie and Sylvia Brown, this highly rated luxury hotel is Maryland’s first and only Relais & Châteaux property. The property boasts 18 unique guest rooms (eight suites and ten standard rooms); Magdalena, an award-winning bistro; and an intimate spa. Touted as a “highly-inclusive oasis,” this five-star property includes a decadent breakfast at Magdalena (with items like filet mignon with truffle hollandaise sauce), unlimited access to the self-serve Mansion Bar, and black car services at your beck and call.
The Copper Door B&B in Miami, Florida
Nestled in the heart of Historic Overtown in Miami, this intimate boutique bed & breakfast boasts 22 spacious rooms (including three private double room suites), chef-driven amenities and locally sourced experiences. Owned by Jamila Ross and Akino West, the Copper Door B&B is a beautiful result of the couple’s rich culinary and hospitality backgrounds.
Six Acres Bed & Breakfast in Cincinnati, Ohio
Housed in a former safe house that was once part of the Underground Railroad, Six Acres Bed & Breakfast in Cincinnati is steeped in American history. The 1850s-era estate was originally used by Zebulon Strong, a Quaker and farmer who famously hid runaways in the bottom of his wagon while transporting them between safe houses along his route. Today, the beautiful home is owned by Kristin Kitchen, whose goal is to maintain the integrity of the home while teaching travelers about the African American experience in America.
La Maison Midtown in Houston, Texas
Despite its location in Houston’s Theater District, La Maison Midtown was inspired by the distinct architecture of New Orleans. The three-story property features seven uniquely-designed guest rooms with sprawling views of the downtown skyline. Overnight guests will also have access to a downstairs parlor and living room for a cozy at-home feel, as well as a daily Southern-style breakfast in the dining room.
Urban Cowboy B&B in Nashville, Tennessee
Charming, Southern-inspired rooms await at the Urban Cowboy B&B in Nashville. Featuring eight experiential suites all housed in a historic Victorian mansion, this elegant Southwestern oasis includes everything you’d need for a memorable night in—including claw-foot bathtubs, cozy reading nooks, and even a back stable house slinging innovative craft cocktails to guests and locals alike.
Salamander Resort & Spa in Middleburg, Virginia
Looking for a luxury resort and spa to drop your bags and relax? Head to the Salamander Resort & Spa in Middleburg, Virginia. Set on 340 acres at the foothills of the Blue Ridge Mountains, this luxury oasis features 168 rooms (including 17 suites), a 23,000-square-foot spa and a full-service equestrian center with a practice ring and a 22-stall stable for guests who are interested in exploring Virginia’s horse country between spa treatments.
Quintessentials Bed and Breakfast and Spa in Long Island, NY
Quintessentials Bed and Breakfast is the first Black-owned B&B with a full-service spa on Long Island’s North Fork. Owner Sylvia Daley grew up in Jamaica and has lived and worked on five continents, including working as a financial executive on Wall Street, before transitioning to innkeeper at Quintessentials. Her well-traveled background shines through in Quintessentials’ features and amenities, including creative farm-to-table meals, an internationally inspired spa menu, and more.
The Clevedale Inn in Spartanburg, South Carolina
Housed in a 1913 Colonial home in Spartanburg, South Carolina, the Clevedale Inn is a historic B&B owned by Paul Roberts Anthony and Pontheolla Mack Abernathy. With stately columns, spacious porches, and cozy fireplaces, this highly rated inn sits at the intersection of luxury and comfort. The property has just four elegant guest rooms, and travelers will have access to large ensuite bathrooms, a complimentary Southern breakfast daily, a Roman bath, a greenhouse, and a gorgeous piazza for weddings and special occasions.
|
b786fd96741a21921d416af3c945c558 | https://www.forbes.com/sites/kaitlynmcinnis/2020/06/25/the-biggest-little-city-in-the-world-was-just-named-the-best-small-city-in-america/?sh=13a52e462133 | The ‘Biggest Little City In The World’ Was Just Named The Best Small City In America | The ‘Biggest Little City In The World’ Was Just Named The Best Small City In America
Reno, Nevada, The Biggest Little City In The World. (Photo by Education Images/Universal Images ... [+] Group via Getty Images) Universal Images Group via Getty Images
Reno, Nevada was just ranked the #1 best small city in the U.S. by BestCities.org, the world's most comprehensive city ranking. Previously sitting at number six, Reno this year jumped to first place, edging out other little cities, including Naples, Florida; Santa Fe, New Mexico; and Savannah, Georgia.
The latest edition of Resonance Consultancy’s annual small city ranking, released on Tuesday, rates American cities with populations between 100,000 and 500,000, with “little city” Reno earning top spot due to its ‘natural assets and growing infrastructure.’
The study uses a combination of statistical performance and qualitative evaluations by locals and visitors to rank the factors in play, including natural environment, airport connectivity, sports teams, nightlife, attractions and the city’s overall infrastructure.
Situated just seven hours outside of Las Vegas, the neon-lit casino town offers a whole lot more than just betting and gambling. In fact, the “Biggest Little City in the World” garnered international recognition by way of Burning Man, the renowned art and self-reliance festival, and that attention has quickly translated into a reputation as a destination for art, culture and tech innovation.
Stimulated by the influx of big-name tech—including Tesla and Google—the “little city” pushed its way to top spot as it’s in the midst of a $1 billion transformation that has continued drawing in young talent from both the arts and technology industries.
Despite its small size (Reno is home to approximately 225,221 permanent residents), the city has a distinctly cool, big city vibe with dozens of micro-breweries, Brooklyn-style restaurants and patios, and a burgeoning arts scene that is largely responsible for curating and creating pieces for Burning Man’s Black Rock City.
Nature fiends can also take advantage of the neighboring Sierra Nevada Mountain range as well as Lake Tahoe, the largest alpine lake in North America, where travelers and locals alike enjoy natural activities like boating and parasailing above the incredible turquoise waters or hiking the looming Tahoe Rim Trail.
While the state of Nevada is presently in the process of reopening its hotels, casinos, and outdoor attractions, the country’s newest favorite “little” destination makes for an incredibly underrated alternative to the bright lights of Las Vegas for anyone looking to pencil in the best of nature-focused and city adventure into the itinerary.
|
0e972feb6678c05a1accf75f1d6b5f41 | https://www.forbes.com/sites/kaitlynmcinnis/2021/01/27/these-are-the-most-popular-souvenirs-from-every-country-study-finds/ | These Are The Most Popular Souvenirs From Every Country, Study Finds | These Are The Most Popular Souvenirs From Every Country, Study Finds
getty
With international travel and tourism largely frowned upon right now, no one really knows when their next trip abroad may be—but that doesn’t mean we can’t reflect on previous trips and plan for our next adventure.
A new study by French travel and tourism operator Club Med has just revealed the top-selling souvenirs from around the world, an initiative put in place to help grounded travelers relive their favorite holiday memories (and purchases!).
The study looked at the souvenirs tourists most often bring home from each country and compiled them into a single list; if a country had no single clear-cut souvenir to attribute, it was left out.
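In practical terms, that selection rule boils down to taking the most common souvenir per country and skipping any country where no single item clearly dominates. The sketch below is a hypothetical reconstruction; Club Med has not published its data or its cutoff, so the counts, the fictional country "Freedonia" and the dominance ratio are invented for illustration.

```python
# Hypothetical: keep a country only if one souvenir clearly dominates its counts.
souvenir_counts = {
    "Canada": {"maple syrup": 900, "hockey jersey": 200},
    "Mexico": {"lucha libre mask": 650, "tequila": 300},
    "Freedonia": {"postcards": 310, "fridge magnets": 300},  # fictional: no clear winner
}

DOMINANCE = 1.5  # assumed threshold: top item must lead the runner-up by 50%

top_souvenirs = {}
for country, counts in souvenir_counts.items():
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else (None, 0)
    if runner_up[1] == 0 or best[1] / runner_up[1] >= DOMINANCE:
        top_souvenirs[country] = best[0]
    # otherwise the country is left out, as in the study

print(top_souvenirs)  # {'Canada': 'maple syrup', 'Mexico': 'lucha libre mask'}
```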
North America’s most popular souvenirs are pretty much exactly what you’d expect: the top souvenir from Canada is maple syrup, while sugar sweets are the most loved from the United States. You might have thought tequila or mezcal would be the most popular souvenir to come out of Mexico, but in fact, it’s a classic lucha libre wrestling mask.
The most popular souvenirs in Asia are much more of a mixed bag. Travelers visiting Japan bring home origami paper, while those visiting Indonesia bring back coffee. In the Philippines, the most popular souvenir is a toy jeepney, modeled on the quirky public transit buses that run through the country’s cities. In Thailand, the most coveted souvenir is a stylish pair of harem pants.
Club Med
In Europe, on the other hand, souvenirs are largely snacks, alcohol, and accessories. In the United Kingdom, the most popular souvenir is, perhaps by chance, an umbrella; likely picked up out of necessity during an unexpected downpour. In France, it’s not red wine or Eiffel Tower statues, but felt beret hats, while in Germany and Cyprus, it’s beer and wine, respectively.
Interested in finding out every single souvenir? Be sure to check out the full study at the official Club Med website.
|
300ba77c7c9ca8b24f61a742eae55e09 | https://www.forbes.com/sites/kaleighmoore/2019/05/29/candice-pool-neistats-unconventional-approach-to-direct-to-consumer-retail/ | Candice Pool Neistat's Unconventional Approach To Direct-To-Consumer Retail | Candice Pool Neistat's Unconventional Approach To Direct-To-Consumer Retail
Candice Pool Neistat sends handwritten Valentine's Day postcards to customers. Love Billy!
When you visit the online store for Love Billy!, you may assume it’s just another direct-to-consumer brand selling a small, curated collection of jewelry and clothing items as well as a few simple accessories and home goods.
What you can’t decipher from the brand’s minimal online presence is the fact that behind the scenes, it’s a retail powerhouse.
The numbers don’t lie on this front. Over the course of just two months before its official launch, more than 45,000 people signed up for the brand’s updates and launch announcements. Post-launch, Love Billy! orders totaled more than $100,000 within the very first month—even with an extremely limited product line featuring fewer than five items available for purchase, most of which cost under $100.
So what’s the driving force behind this remarkable demand and brand performance? It has a lot to do with the brand’s female founder, who has her own way of doing things when it comes to running a retail operation.
Candice Pool Neistat, who also owns Finn Jewelry, founded Love Billy! in 2017 as a creative outlet. Today, two years in, her company has grown and now reaches an international audience, boasting customers in 72 countries around the world.
I was curious about her unique approach to retail, which she proudly shares across the brand’s social media channels, so I interviewed her to hear her thoughts on everything from strategy, to supply chain management, operations, and beyond. Here’s what she had to say.
Kaleigh Moore: What does your retail strategy look like for the brand?
CANDICE POOL NEISTAT: Everything is very do-it-yourself and non-conventional from a traditional retail perspective. Most of the brand’s direction comes from my personal voice and decision-making style, which often goes against norms.
I take this approach for a couple of reasons. One: I'm a suspicious consumer. I studied advertising and I know when what I see is the fine work of a team of people trying to make a brand forge a connection with a consumer. That’s conventional marketing. It's not genuine. It's often eye-roll inducing.
Two: I'm highly insecure and pretty self-deprecating. I cannot have a curated Instagram feed with styled images and color palettes. To me that feels fake, and I’m terrified of being thought of that way. Instead, I keep things honest and always put my personal fingerprint on the brand. Sometimes that means what we share is sort of nerdy, sometimes it’s rude. But it’s honest.
Being unconventional and flawed is what attracts people to the Love Billy! brand. We have binders full of letters from customers saying how much it inspires them. I don’t consider myself an inspirational person, but it’s clear that customers are dying for someone normal (and maybe a little broken) to connect with.
KM: What’s the thinking behind the brand’s limited product runs? What are the results you're seeing from the exclusivity of those short-run items?
CPN: Our limited-run items sell out in a few hours. Our ‘Series’ sweatshirts are a good example of this. These are a limited run in regard to both colorway and number. We start with a color, then we have ‘Series A’, which is a run of 50.
Each sweatshirt is embroidered with that information, so it might say ‘Series A: 1/50’ as an example. Once Series A is sold out, we do another limited run called Series B, which is a slightly larger run of 100, and then a Series C, which is a 200-count run. It's fun because we get direct messages from customers who are excited about what number they received.
We also regularly retire products. There isn't a formula for it; I solely go on whether I'm tired of it or not. If it's been around a while I get bored, and so I decide it needs to take a “nap.”
KM: From a supply chain perspective, do limited product runs make things easier for you to manage because you're not dealing with restocking inventory?
CPN: Limiting stock is the best thing we do in terms of streamlining production. There is a beginning and an end for each cycle, which means we can monitor inventory more easily. When we are down to our last pieces, it's a binary decision: Do we make more, or do we move on?
We don't contend with that fine balance of how many of each size and style to keep on hand, what is our sell-through speed, floating all the costs of sitting inventory, how do we not run out, etc.
There have been times where we’ve decided to make more or re-stock items, but there’s still a delay in shipping those orders. It's okay, though. Our clients are very understanding, and we always send a special “Thanks for being so patient" gift with any order that is delayed. It helps that they understand we are human.
KM: How do you manage everything? What does your operation look like?
CPN: I have an amazing team. We’re small: There are five women on my team plus seasonal interns, but I like it that way because too many employees means I'm just managing employees (and I am a terrible manager.)
My other company, Finn Jewelry, shares a space with the Love Billy! team. For 11 years now I’ve been based out of the same NoHo loft in New York. As we grow, I've let go of some big parts of micromanaging: Things like accounting and bookkeeping, marketing (not the creative, but other parts), web design, and legal.
We are also just about ready to unload the majority of our soft goods production and fulfillment. That will be a big step because currently we are hands-on with all vendors and we handle all our own fulfillment and customer service, which is very labor-intensive.
KM: Has anything you've experimented with marketing-wise flopped or been surprisingly successful?
CPN: One thing people might find interesting: Gifting jewelry to celebrities is cute and nice, but it does not bump our numbers in the slightest. Ever. We’ve had a lot of big names and people with millions of followers feature our products in social posts, but it just doesn’t move the needle.
What really performs best for us are actually the posts that feature pictures of me goofing around behind the scenes. On social media, especially on our Instagram, those posts always get the most engagement.
KM: Do you do anything special for your VIP customers or most loyal fans?
CPN: First I need to say that I don’t send people free stuff in hopes that it’ll spark some sort of reciprocity. In my experience, that never works.
We determine our VIPs by dollar amount spent over a 12-month period, as well as lifetime order quantities. Those customers get first notice of events, releases, or retiring pieces. We’ve also done things like sending Mother’s Day packages with ‘Mommy’ T-shirts and matching t-shirts for their kids, plus a bottle of wine called “Family Time is Hard." For Valentine’s Day each year I send over 200 handwritten postcards to our biggest fans. People really love those.
In life, you give a gift to make someone happy. It makes you feel good. For my business, I take the same approach. For our VIPs, everything I give or offer is for them to enjoy...not to feel pressure to buy anything in return. That immediate payoff is not monetary; it's emotional. It's appreciation. They are in the fold.
KM: What’s ahead for the brand?
CPN: We’re going to launch a new CBD salve, but other than that, we’ll probably just keep doing what we’re doing, iterating on the items we already have. I don’t make a lot of decisions based on data. I just do what feels right and what interests me at the time. So far, customers seem to be responding to that.
|
8166b889078d8c6f61724c1395a1d618 | https://www.forbes.com/sites/kaleighmoore/2019/06/05/report-shows-customers-want-responsible-fashion-but-dont-want-to-pay-for-it/?sh=21de1aa11782 | Report Shows Customers Want Responsible Fashion, But Don’t Want To Pay For It. What Should Brands Do? | Report Shows Customers Want Responsible Fashion, But Don’t Want To Pay For It. What Should Brands Do?
While shoppers like the idea of sustainable fashion practices, they don't always want to pay for the costs associated with them. Photo credit: With Jéan
A new report from e-commerce personalization platform Nosto showed that of 2,000 U.S. and U.K.-based shoppers surveyed, sustainable practices and fair wages for workers were top consumer demands for modern fashion retailers.
It also showed, however, that shoppers don't always want to pay for the extra costs associated with them.
The report revealed that while 52% of consumers do want the fashion industry to follow more sustainable practices, only 29% of consumers would pay more for sustainably-made versions of the same items. Additionally, 62% of consumers would like to receive discounts on sustainable clothing items.
There appears to be a disconnect between the idea of sustainable fashion and where consumers, especially of younger generations, actually spend their money.
Data from a report by ecological certification company Oeko-Tex illustrated that while 69% of Millennials say they look into claims of sustainability and eco-friendliness when researching clothing purchases, only 37% actually bought clothes from brands with that focus.
With this in mind, it’s no surprise a new report showed sustainability efforts in the fashion industry are slowing down as a whole. However, some clothing retailers have discovered a more sustainable approach actually helps solve business problems, albeit with some trade-offs.
Clothing brand Christy Dawn is pushing ahead with eco-friendly efforts and ethical labor practices, undeterred by the implications. For the Los Angeles-based clothing company, which creates short-run editions of women’s apparel items from deadstock fabrics, it was a rocky start with quality control issues that made them lean fully into responsible fashion.
“When we were first getting started, we learned quickly you get what you pay for,” said Aras Baskauskas, Christy Dawn’s cofounder. He explained that when they initially tested production with a lower-cost team, the product suffered. In the end, they decided to pay about 35% more for a small team of four who delivered much higher-quality products.
Baskauskas went on to say that since then, the team of four has grown to a team of about 30, which works on-site at the company’s headquarters in downtown Los Angeles. The production team has a clean and safe working environment, is paid fair wages, gets paid vacation time, and is provided healthcare benefits.
The only downside of these efforts: Margins are smaller. For Christy Dawn, this is a reality they’ve accepted and now embrace. With their cost per item averaging $80, a roughly 3X markup puts them in the $220 retail pricing range. “Many fashion retailers strive for 8X margins, but we’ve made a conscious decision to run an ethical business and to keep our costs lower for customers,” Baskauskas said.
While the use of deadstock fabrics means they do miss out on economies of scale, Baskauskas explained this approach also helps them stay agile and able to release new, limited-run products every week.
Other fashion brands experiencing rapid growth have different hurdles to face as they work to maintain ethical labor and sustainability practices.
With Jéan, a two-year-old Australian fashion brand with a Bali-based production team, found itself growing from 400 units sold to 14,000 in just a few weeks.
With the rapid spike in growth, cofounders Evangeline Titilas and Sami Lorking-Tanner knew their production model needed to evolve quickly. While their small team in Bali included a few team members working from the comfort of their own homes, they now needed a central hub where a larger production team could work together as a group for the execution of fabric processing, sewing, quality control, and packaging.
In a few months, they were able to open a new worksite in Bali and grow the production team to 60 employees, all of whom are paid above-average wages. The two 26-year-old female founders now visit every other month for four weeks at a time to work alongside the team.
So how have they managed to scale so quickly and stay devoted to sustainable and ethical practices? A smart sales strategy has helped. As a direct-to-consumer brand, they roll out four small product ranges per year, but focus mostly on their perennial, best-selling styles.
“It's an extremely effective model that breaks from the traditional model most fashion labels use, which includes creating large collections throughout the year and then selling/distributing and discounting at the end of the product life cycle,” said Titilas. “For us, we only introduce small, trans-seasonal ranges. We're breaking away from the linear traditional fashion system of take, make, and dispose.”
The strategy is working so far: The brand often sells out of the perennial styles and has had a wait list of over 1,000 people who’ve signed up for updates on restocking of a single product.
Sustainable approaches are working in these two use cases, but we still have to consider what this means for the industry as a whole. What are fashion retailers to do? Should they rise to meet these consumer demands and gamble that customers won’t pay for sustainable practices, or should they continue to conduct business as usual?
Fashion consultant Nicole Giordano of StartUp Fashion recommends that when considering an approach to more eco-friendly efforts, brands should first zero in on the values they hold as a company. From there, they can find ways to implement sustainability related to their core mission rather than trying to accomplish all of their goals at once.
“You can always add more components of sustainability as you go, but if brands try to do too much from the start, they run the risk of hitting too many obstacles and giving up on their businesses...especially if they’re self-funded,” she said.
Progress, be it large or small, is good news when it comes to sustainable and ethical practices in the fashion world.
|
563ebb2dc12a6db57a221c407cf4b09c | https://www.forbes.com/sites/kaleighmoore/2019/08/07/how-10-grove-aims-to-disrupt-the-direct-to-consumer-bedding-market/ | How 10 Grove Aims To Disrupt The Direct-To-Consumer Bedding Market | How 10 Grove Aims To Disrupt The Direct-To-Consumer Bedding Market
10 Grove Founder Rana Argenio is taking on the world of direct-to-consumer luxury bedding. 10 Grove
Products in the world of luxury bedding usually come with a hefty price tag, often falling in the $500-plus range. The cost is high because of the supply chain.
With a product development and manufacturing process that can take up to two years to turn raw materials into finished goods at scale (think of labor-intensive processes like milling European materials, sewing, and packaging that happen all around the globe), the high cost of these products isn’t all that shocking.
However, with the rise of direct-to-consumer business, bedding brands are now finding ways to not only circumvent some of the high-cost supply chain issues, but to pass the savings along to the consumer.
10 Grove is one of these brands. Helmed by Wharton graduate Rana Argenio, this new direct-to-consumer luxury bedding company has developed a business model that is fairly straightforward: Product cost is the sum of the raw materials (fabrics, threads, packaging) and the labor to manufacture. 10 Grove sources directly from fabric mills and manufactures in-house so there are no intermediaries that add to their cost.
The brand soft launched in February 2019 for a short period of customer feedback and then officially launched in late June. So far, they’re finding traction: Re-order rates hover around 15% and they’re seeing 65% month-over-month growth since the June launch.
The specifics around their vertically integrated supply chain are simple: Fabric is milled in Italy, and it’s then shipped to their manufacturing facility in Texas where they handle warehousing, production, and distribution. Because they operate on a just-in-time basis, most of their inventory is stored as raw materials with a base level of their 368 SKUs in stock. This also means they don’t over-produce.
As a result, they’re able to pass on a savings of about 60% to the consumer. It’s an impressive feat when you consider that even in the world of direct-to-consumer bedding where the middleman is removed, consumers still often pay for inefficiencies in the supply chain, sometimes paying a markup of as much as seven to nine times.
“I believe that people deserve access to better quality products, made in a more efficient way, so that they’re getting what they actually paid for,” Argenio said.
This idea was the foundation of Argenio’s drive to launch 10 Grove. After obtaining a graduate degree from INSEAD, she put her business background to work by joining her family’s fifth-generation business in the world of luxury textiles.
As part of her research, she made sure to get hands-on experience with every part of the production process, from spinning yarn and weaving fabric to designing collections, manufacturing, and navigating the traditional wholesale supply chain.
“During these four years on the wholesale and manufacturing side of the industry, I kept a close eye on the proliferation of online bedding brands, quickly realizing that the term direct-to-consumer had become massively diluted, if not entirely misused,” she said.
“I was dismayed to see consumers being seduced by stylish branding into buying mediocre products produced using the same low-market quality, third party middlemen and high cost markups as their incumbent retail equivalents.”
Argenio’s not the only one dissatisfied with direct-to-consumer brands’ claims of saving consumers money. Manufacturing expert and Italic Founder Jeremy Cai echoed this skepticism about direct-to-consumer retailers’ claims of cutting out middleman costs.
“I’m a little wary of that story from the vast majority of DTC brands who claim this but are often charging the same prices as their legacy counterparts,” he said.
As a result, 10 Grove shoppers can expect to spend a fraction of what they’d traditionally have to invest in luxury bedding items: The brand’s items range from $140 for a basic sheet and pillowcases to $430 for an entire luxury bedding kit (which would traditionally retail for about three times that price for similar materials and quality).
An added value for shoppers of 10 Grove: Products purchased from the brand are an investment in ethical labor practices. The company only manufactures between 7:30 a.m. and 3:30 p.m. so that its seamstresses are able to be home with their children outside of school hours, and the 25-person production team is paid above minimum wage, receives paid vacation, and has access to health care and retirement benefits.
“We continuously reinvest in our workforce and offer training opportunities to diversify their skillset,” Argenio said. “We also offer interest-free short-term loans which are available for their personal needs. We believe an employer-employee relationship is two ways and relies on trust and mutual understanding,” she said.
As the world of direct-to-consumer bedding continues to grow and diversify with new players, 10 Grove makes an interesting addition to the mix. While their history as a brand is short, it will be worth watching whether their rapid growth trajectory can be sustained in the coming years.
|
3b75c5dc74eff1c5ee7558dc8d15f249 | https://www.forbes.com/sites/kaleighmoore/2019/08/08/with-nuulys-launch-the-clothing-rental-service-market-continues-to-heat-up/ | With Nuuly's Launch, The Clothing Rental Service Market Continues To Heat Up | With Nuuly's Launch, The Clothing Rental Service Market Continues To Heat Up
Nuuly has officially launched as part of the clothing rental ecosystem. Nuuly
The clothing rental trend continues to grow, following in the footsteps of early leaders like Rent the Runway and Stitch Fix.
On July 30, URBN officially announced the launch of Nuuly, its monthly clothing rental subscription service for women. Drawing on more than 100 third-party brands, vintage items, and products within the company’s existing portfolio, the rental service offers customers the chance to borrow six pieces of clothing per month for $88.
First referenced in May of this year during the company’s record-setting first-quarter sales report, Nuuly speaks to shoppers who value access over ownership, as subscribers get access to over $800 worth of URBN’s retail merchandise for a tenth of the price.
"When we looked at how the retail landscape was changing, we felt that a subscription rental business really tapped into some key customer desires: a desire for newness that exists alongside a desire to be more sustainable and to be smart about how she spends her money," said Kim Gallagher, Nuuly's Director of Marketing and Customer Success.
But Nuuly’s not the only new clothing rental offering in the marketplace.
American Eagle offers its Style Drop program that includes three items at a time for $49.95 per month. Haverdash also allows renters to borrow three pieces at a time for a slightly higher fee of $59 per month, but includes boutique labels like BB Dakota and Cupcakes and Cashmere. There’s Le Tote. Gwynnie Bee. The Mr. & Ms. Collection.
There are variations on the theme as well: Prime Wardrobe from Amazon, for example, lets buyers choose up to eight items to try out for seven days. From there, the customer can send back the items they don’t want and keep the ones they’d like to purchase.
Amazon also recently announced the addition of Personal Shopper to accompany this offering, which is essentially a styling service for Prime members that can make tailor-made product recommendations, further personalizing the experience.
As more consumers look to invest in more sustainable alternatives to fast fashion, this type of offering does seem to hit the mark. Some experts from the fashion industry feel that subscription clothing rental isn’t just a passing fad, either.
Jacqueline Oak, the former Director of Retail Strategy for M.M. LaFleur, said that she’s seeing a trend towards what she calls a “singularity of closet”, meaning what women wear on weekends now blends with their work wardrobes. As such, she’s seeing consumers become more thoughtful about what they invest in when it comes to fashion purchases.
“I've heard customers in showrooms talk about ‘price per wear’ or whether an item ‘sparks joy’,” Oak said. “Additionally, the growth in secondary market apps like Poshmark show that women are comfortable buying well-preserved second-hand clothing.”
Oak also sees real advantages on the retail side of the operation as well. “From a business perspective, a rental model helps brands get a higher ROI on slow-moving SKUs. However, the business usually needs to invest more in operational infrastructure, primarily dry cleaning and faster shipping,” she explained.
Her personal belief is that the strongest clothing rental programs are ones with a styling component and a lease-to-own option, which add a level of personalization and additional value.
Caitlin Strandberg, Principal at venture capital firm Lerer Hippeau, feels similarly. She believes we’ll see more brands looking to clothing-as-a-service as well as technology that enables retailers to offer rentals at scale.
The reason: It’s a rich source of consumer data as well as regular, recurring revenue.
“Rentals and subscriptions offer a new channel to not only acquire customers but to also establish a regular cadence of authentic marketing communication,” she said.
“This model solicits real-time and frequent feedback from the customer on anything from product selection, style/fit, and willingness to purchase at a discount—all of which informs future product development and marketing strategy.”
But what do shoppers think of these new clothing rental services? Allie Lehman, a 32-year-old shopper from Columbus, Ohio shared her insight with me around her first Nuuly order.
“I was impressed with the selection and was pleasantly surprised I was able to rent a dress I was thinking of purchasing this season,” she said. “The items I received were high quality, everything fit, and the entire online experience was fun. I'm really excited to be able to rent specific pieces for the month with the option to buy.”
She went on to say she liked the communication from Nuuly through their email and text notifications and that she appreciated how the item was thoughtfully packaged and the extras that were included, like a “clothing mishap kit.”
While it’s still too early to tell if these subscription clothing rental services will pan out as financially lucrative, sustainable revenue streams for the brands that are launching them, it will be interesting to watch the rate of consumer adoption and to keep an eye on these brands’ quarterly sales reporting.
|
e0d62b8d7f22f3ddbc95bb1757693f97 | https://www.forbes.com/sites/kaleighmoore/2019/10/02/from-kickstarter-to-rei-how-one-brand-transitioned-from-crowdfunding-to-wholesale-partnerships/?sh=1882a9546699 | From Kickstarter To REI: How One Brand Transitioned From Crowdfunding To Wholesale Partnerships | From Kickstarter To REI: How One Brand Transitioned From Crowdfunding To Wholesale Partnerships
Rumpl started as a Kickstarter-funded project, but has since expanded into a brand with retail distribution and wholesale partnerships. Rumpl
When environmental designer Wylie Robinson woke up one morning during an overnight ski trip in California’s Sierra Mountains, he found himself stranded with a car that wouldn’t start.
It was freezing outside, and there was no one around. All he could do was bundle up in his sleeping bag and wait for someone to come and help him restart the car.
While the experience was unnerving, there was an upside—it sparked a product idea.
Robinson wondered why there wasn’t a blanket that was as technical, warm, and comfortable as a sleeping bag—but with greater portability and functionality.
“I found myself thinking: Why are technical materials sanctioned for outdoor use when we can so easily incorporate them into products we use everyday?” Robinson said.
As he thought about how blankets are one of the oldest textile constructions on earth, he was curious as to why innovation had largely skipped over this particular product category. He wanted to create something new that would add value through material innovation, creative storytelling, and trend-forward aesthetics.
As a result, Robinson got to work prototyping what he called the “sleeping bag blanket”, which eventually turned into a Kickstarter campaign to validate market demand and drum up some initial funding to manufacture the product.
This proved to be a smart strategy, as the campaign goal of $200,000 was exceeded with more than 1,600 people contributing a total of $216,889 to bring the product to life.
With funding and clear consumer interest, he and his team got to work and brought the first batch of Rumpl blankets to life.
After the Kickstarter orders were filled, Robinson was ready to look to the next step: Expanding into wholesale partnerships with retailers.
Because the product is tactile, Robinson knew that a physical retail presence in outdoor stores like REI, where customers could touch and feel the items, would be a major benefit. A presence in trusted retail spaces would also act as an endorsement for their brand, further validating their product.
But transitioning from a Kickstarter product to a company that could service wholesale demand wasn’t an easy task.
“After finding success on Kickstarter, it’s been a constant exercise of navigating growth,” Robinson said. “We have an incredible team and great advisors who help push us forward, but we learned quickly that the world of wholesaling was very different from launching on Kickstarter.”
Robinson went on to explain that while the Kickstarter environment thrived on a short timeline from prototype to sale with little up-front capital investment, servicing wholesale demand required the company to have product queued up 12 months in advance so that buyers could make their purchasing decisions early.
This also meant that all product specifications needed to be ironed out well in advance, as changes couldn’t be made once a buyer placed an order.
The challenges Rumpl faced during this transition weren’t unique—other brands with products launched via crowdfunding on Kickstarter have faced similar obstacles when making the switch to larger-scale distribution.
Shivani Jain, the co-founder of Cubii, a compact elliptical that also launched on Kickstarter, explained how managing cash flow was crucial as she transitioned from the platform into wholesale and omni-channel distribution.
She also referenced that getting on a retailer’s shelf isn’t always enough anymore—you need product to get off the shelf, too.
“While different sales channels might promise new eyeballs, ultimately it’s the brand's responsibility to generate awareness and spread the word about the products so people are looking out for them,” she said.
In Rumpl’s case, they’re working hard to accomplish that. One way they’re doing this is by leaning into increased sustainability efforts as the company grows (and sharing that journey with consumers).
They’re making progress: In 2019, Rumpl reclaimed more than three million plastic bottles from landfills after reimagining products with post-consumer plastics in lieu of virgin materials. The brand also joined the Climate Neutral Coalition to focus on offsetting its carbon footprint in 2020 and beyond.
As they continue to expand their wholesale distribution, Rumpl’s long-term, big-picture vision is to breathe fresh life into the largely overlooked product category they’ve entered through material innovation and trend-forward design, much in the way Stance has with socks.
The hope is that further growth and expansion into additional wholesale partnerships will help them reach that goal.
|
25ee2669643e1bbe8354bac31cb06798 | https://www.forbes.com/sites/kaleighmoore/2019/10/15/how-boosted-taps-into-the-growing-trend-of-event-based-activations-to-mobilize-customers/?ss=cio-network | How Boosted Taps Into Event-Based Activations To Mobilize Customers | How Boosted Taps Into Event-Based Activations To Mobilize Customers
Boosted uses events to blur the lines between online and offline communities. Boosted
Modern retailers beyond the fashion vertical are discovering that it’s not enough to create and launch a product in today’s world.
In order to foster a sense of long-term loyalty with customers, they too need to build communities around their products and engage their customer bases with experiences that deepen connections to the brand.
In fact, 95% of marketers surveyed in 2018 said events provide attendees with a valuable opportunity to form in-person connections with a brand in an increasingly digital world.
So while clothing brands like Outdoor Voices and Madhappy have already put this into practice, retailers outside this industry are now hopping on board with the trend, too.
One way they’re doing this is through events and activations that unite online and offline communities with hands-on, real-life activities.
Boosted is one company putting this practice to work in inventive ways.
The brand, known for its last-mile transportation products like the electric skateboard called the Boosted Board, is leveraging its active online community and driving those online users to offline events, thus getting them outdoors and using their products.
The first way they did this was through a scavenger hunt. The hunt began with an unpromoted landing page featuring six different sets of latitude and longitude coordinates; the first was locked, and the remaining five were ticking down.
Boosted fans quickly caught on that there was something happening at these locations—and word started to spread, both online and off.
Every other day, Boosted unlocked another set of coordinates, alternating between clues in their social media postings (like coordinates hidden in an image posted on the brand’s Instagram) and on the landing page itself, which drove cross-channel engagement for the brand as users scoured brand assets for clues.
“It was interesting to see how our different communities started talking about this campaign in spaces like Reddit forums,” said Noriko Morimoto, Boosted’s chief marketing officer.
“In the beginning, these conversations were sparked by curiosity and led to alliances forming as people hunted for clues, but as time went on, things got more competitive.”
There was so much interest in the scavenger hunt that by the time the final coordinate was to be released, Boosted ended up having to change the final location at the last minute so the brand representative didn’t get overwhelmed by the crowd of participants.
In the end, the person who arrived at the final coordinate first won a free Boosted Board—and Boosted earned its first Webby Award for this creative activation.
The results were impressive as well: Reviewing the impact of this event, Boosted discovered that not only did the hunt drive a 15% lift in website traffic, but it garnered more than 250,000 video views with a 3.8% engagement rate as well as 1.77 million social photo impressions with a 6.6% engagement rate.
Seeing the impact and potential of this first event-based activation, the Boosted team decided to brainstorm on ways to improve upon its next community-building event down the road.
Moving forward, they wanted to leverage the competitive spirit of their communities located in different cities across the U.S.
This led to their next iteration of event-based marketing: the City Showdown. The event pitted Boosted riders across different U.S. cities against each other.
Cities like Los Angeles and Chicago faced off to see who could earn the most points, accumulated as riders recorded individual and group rides within the company’s app. Each week, a city was crowned the winner based on points and earned bragging rights.
The City Showdown allowed Boosted to capitalize on group ride behaviors, led to competitive riders sharing their efforts across their own social channels (thus organically increasing brand awareness), and allowed the brand to tap into natural online and offline behaviors, again blurring the lines between the two environments.
Overall, the City Showdown performed even better than the scavenger hunt, driving a 20% lift in site traffic, another million photo impressions, and more than 250,000 video views.
So what can we learn from Boosted’s success with offline events?
Creative activations are a major opportunity for retailers to build active and engaged brand communities.
Chris Echevarria, CEO and founder of Blackstock & Weber, agrees. He feels that event-based activations can help foster stronger customer relationships for any type of retailer—when they’re executed well.
He also noted that events should be viewed as a positioning tool that promotes the company as a go-to resource for relevant, interesting activities.
“You're building your brand up as an arbiter of taste, and once that happens, you'll have fans for the long-term,” he said.
|
f908def8bae104379d479c407e45c326 | https://www.forbes.com/sites/kaleighmoore/2019/11/11/fine-jewelry-brand-mejuri-to-introduce-hybrid-showroom-model/ | Fine Jewelry Brand Mejuri To Introduce Hybrid Showroom Model | Fine Jewelry Brand Mejuri To Introduce Hybrid Showroom Model
Shoppers at Mejuri showrooms will soon be able to purchase items in-store. Mejuri
Direct-to-consumer fine jewelry brand Mejuri has come a long way since its 2015 launch.
Aside from announcing it received $23 million in Series B funding earlier this year, the brand is further building out its physical retail footprint within its three showrooms in Toronto, New York, and Los Angeles, where customers can get hands-on with product.
Up until now, inventory of Mejuri’s in-store products (which range from $29 for small hoop earrings to $2,150 for bigger-ticket items like engagement rings) was not stocked on-site. Instead, shoppers could only purchase product by placing an order that was then filled and shipped within one to two days.
However, the brand is now rolling out a new hybrid showroom model, The Mejuri Studio, that allows customers to walk out of the store with product in hand—all while staying agile and without a surplus of inventory on-site.
This model allows the brand to respond to customer feedback in real-time, and they plan to launch this program ahead of the holiday shopping season.
I spoke to Noura Sakkijha, Mejuri’s Co-Founder and CEO, to learn more about the why behind this transition as well as the nuts and bolts that will make this new approach work. Here’s what she had to say.
KALEIGH MOORE: What initially made you reconsider your approach to order fulfillment within your physical retail spaces?
NOURA SAKKIJHA: We opened our first showroom in July 2018, and the model was based on next-day shipping without the option to walk out with product. For the last two months, however, we’ve introduced a pilot program in our Toronto showroom with inventory available on-site, and as a result, we saw a 35% increase in conversion rate.
We originally began with our ship-to-you model to keep our retail models lean, envisioning retail would simply be an awareness channel for a brand that has such an active online community as core to its business. However, physical retail has allowed us to continue that strong dialogue with our community offline. When it comes to our retail locations, we look at optimizing them in much the same way you’d optimize a website: We look at conversion rate, but we’re also interested in how we can further build our community.
For example: We looked at how our consumers were interacting with our jewelry displays in our New York and Toronto showrooms, and based on what we saw, we decided to develop more than 30 different forms of custom jewelry displays that would show the customer how the product might be styled. This means we can show our pieces “stacked,” so while someone is shopping they can also decide how they might wear it. This is extremely important to our retail model, which relies on removing the barriers and intimidation around shopping for fine jewelry.
KM: So what fueled the switch from 1-2 day delivery to in-store product fulfillment?
NS: After opening our showrooms and seeing how our customers wanted that instant gratification (especially after spending time getting styled), we realized that giving them the product right there elevates the experience. We also realized that many of our customers are coming for gifts, which is time-sensitive. This, along with the fact that we’re working on increasing the number of retail locations, pushed us towards carrying inventory on-site.
KM: Tell me about how the new in-store fulfillment process works.
NS: We don’t display all of our products in the stores; we curate the collection based on the location and what’s new at Mejuri. We also have all of our stores linked to the same ERP system, which allows all of our data to be centralized.
This enables our team to monitor inventory levels and push replenishment multiple times a week if needed in order to stay nimble and lean at each location. We also allow customers to make returns in-store (even if the purchase happened online) and have the option for customers to pick up online orders placed before 12:00 p.m. in-store the next day.
There is only one display of products on the floor for customers to try on, handle and play around with. However, we do have personal stylists on hand at each studio who can grab pieces from the back to help style one customer’s look while another is browsing so that our experience is individualized and seamless. Once an order has been placed, it’s gift wrapped in the back of the store and brought out to the customer.
KM: What technology behind the scenes helps with inventory forecasting? Or are decisions purely based on in-store purchase data?
NS: Mejuri’s business is a marriage of trend-setting design and trend-validated data. Since we’re introducing new editions every Monday, each new piece impacts the performance of other products. We use in-house planning tools that weigh different indicators of success (such as material, stone color, etc.) when we look at inventory planning. We’ve released more than 1,700 products to date, so we have plenty of feedback to work with, too.
In stores, we carry a curated selection of our products, and we customize this selection based on the location, which is informed by data and our stylists’ feedback. Based on sales from existing products and feedback from our community, we then include this in our design loop. Our design and forecasting teams sit side by side, which means we include both in the thought process. We’re vertically integrated and use technology and 3D printing in our prototyping and production processes, which also helps us shorten the steps needed to go from design to production.
KM: What about demand around the brand’s online business? With more than 100,000 people on the waitlists for Mejuri product, what are you doing to shorten wait times? This clearly illustrates demand, but does it also indicate a supply chain issue?
NS: We manufacture in limited quantities with individually-handcrafted pieces, and this process means we produce in small batches. When we release new editions, they often sell out, exceeding demand expectations. The initial launch means we can then predict demand for the next restock on a product and size level.
We don’t carry a lot of inventory because we built our supply chain in such a way that we’re able to replenish very quickly, so when you’re on a Mejuri waitlist, on average you’re waiting no more than three weeks to receive your product. This short wait period ensures our community is always receiving the highest quality hand-crafted pieces.
KM: How do you plan to react to the increased volume that happens during the busy holiday shopping season while using this model?
NS: We take this into consideration in our inventory planning (as well as our employee staffing) to ensure we have sufficient inventory and seamless service during this busy time.
We’ve also designed our showrooms in a way that allows for flexibility in terms of capacity as well—we’re used to big crowds. Last month we saw 25,000 people at our showroom locations. On a typical weekend in our New York showroom, we get an average of 2,000 people coming through the store.
Based on the day of the week, we both re-merchandise the pieces, move tables, and even re-situate entire lounge areas to create more space for visitors coming in groups, which means we can accommodate different store capacities.
There have been times where the demand exceeded our expectations and we sold out of inventory, so to prepare for that, we’re still giving customers the option to place orders in-store, which we will then ship with express free shipping from our main warehouse.
|
5ebfb3c8107254480fd61cfd38cb8be6 | https://www.forbes.com/sites/kaleighmoore/2020/06/01/with-sales-of-home-furnishings-on-the-rise-retailers-like-article-see-200-growth/ | With Sales Of Home Furnishings On The Rise, Retailers Like Article See 200% Growth | With Sales Of Home Furnishings On The Rise, Retailers Like Article See 200% Growth
April 2020 was home furnishings brand Article's highest revenue month to date. Article
With people continuing to spend more time at home during the COVID-19 pandemic, many are turning a closer eye on their home environments.
Retailers within the home furnishing industry are seeing the impact of this in the form of sales growth: Salesforce data from Q1 shows that digital revenue within the home goods space is up 51%.
What’s more: Searches for outdoor furnishings at CB2’s website are up more than 90% from last year, while sales at Revival Rugs are currently up 50% year-over-year, according to internal sources from both brands.
But these two retailers within the category aren’t unique outliers.
Direct-to-consumer home furnishing brand Article shared that April of 2020 was their highest revenue month to date, with sales up 200% compared to April of 2019.
In addition, Aamir Baig, Article’s Co-Founder and CEO, shared that the brand saw a 2X increase in new and repeat customers that month as well.
A surge in consumer demand and orders placed doesn’t come without challenges, however.
In Article’s case, the business was well-positioned to meet increased demand in part because of a pure-play eCommerce model—which is powered by proprietary software that was developed in-house.
This eCommerce model allowed the brand to quickly launch new initiatives (like contactless delivery and furniture bundles) in as little as 24 hours, which helped them stay agile during the COVID-19 crisis.
In-house logistics have also been crucial for Article being able to manage the increased sales volumes.
Launched in January 2019 as part of the brand’s efforts to improve the end-to-end shopping experience for customers, this in-house delivery infrastructure (including 1.2 million square feet of warehouse space throughout the US and Canada) meant they could maintain fast, safe delivery to customers while other retailers were scrambling to coordinate with external partners.
Through March and April of 2020, Article maintained a one to two-week shipping and delivery speed for customers, while others within the category (like Amazon) were operating on a two to eight-week shipping delay as more essential orders were given fulfillment priority.
The question now is: Will this demand for home goods, especially those ordered from direct-to-consumer brands online, continue as physical stores begin to re-open?
If you ask Ricardo Belmar, Senior Director of Global Enterprise Marketing at Infovista, there’s still a high level of concern about health and safety that remains—so it’s too soon to tell.
“The fact is: we’re not at the end of the pandemic yet; we’re still squarely in the middle, and safety issues will remain important in the eyes of many customers and employees for many months to come,” he said.
In Article’s case, they plan to continue with increased safety efforts in the coming weeks.
“As the world opens up a bit, we want to make sure we continue to act responsibly and proceed with the same caution we have over the last few weeks,” Baig said.
“This means maintaining the same health and safety protocols we’ve established for our frontline and HQ teams, and if anything, being more on guard due to higher social activity.”
|
908aa8cbe2bdabfd81b4fa01a7ccaa27 | https://www.forbes.com/sites/kaleighmoore/2020/06/24/as-online-sales-grow-during-covid-19-retailers-like-montce-swim-adapt-and-find-success/?sh=85ae6616d78c | As Online Sales Grow During COVID-19, Retailers Like Montce Swim Adapt And Find Success | As Online Sales Grow During COVID-19, Retailers Like Montce Swim Adapt And Find Success
Montce Swim's retail locations are beginning to re-open this month, but online sales will remain a focus. Montce Swim
When Florida native Alexandra Grief started selling custom-made bikinis from her apartment 10 years ago, it was merely a side business and passion project. She bought fabrics and tapped into her local community for help with the sewing work.
But today, that side project has grown into a flourishing business: Her company Montce Swim now has four retail locations (two each on east and west coasts) and products that are sold at 60 stockist partners around the world.
When COVID-19 hit and the brand’s retail stores had to close down, Grief worried sales would take a hit.
But like many other retailers with online and offline presences, she was surprised to discover her audience merely shifted from buying in-store to buying online.
Online sales on the rise
The shift to online buying Montce witnessed speaks to an early trend that has become clear in the data over the first few months of the COVID-19 shutdown: Online sales, as a whole, are on the rise—and this consumer behavior may become the new normal.
A Digital Commerce 360 analysis showed that online spending represented 16.2% of total retail sales for the quarter, which marks the second-highest online share for any quarter in history.
Adobe data also showed that total online spending in May hit $82.5 billion, which is up a whopping 77% year-over-year. Vivek Pandya, Adobe’s Digital Insights Manager, says that it normally would’ve taken four to six years to reach this level of online sales had things continued at the pre-coronavirus pace.
“It’s an absolute imperative that brands lean into their digital channels,” said Jason Goldberg, Chief Commerce Strategy Officer at Publicis.
“Almost every retailer that relied on stores for the bulk of their sales is now trying to double down on their ecommerce experiences and/or their curbside pickup experiences.”
Montce is a brand seeing this shift play out in real-time: In May of 2020, they had a 134% increase in year-over-year online traffic, a 13% increase in online conversion rates, and a 5% boost in year-over-year email open rates. While they couldn’t disclose specific sales volumes, they shared that overall their numbers were up.
Adapting to COVID-19
Grief attributes part of the uptick in online activity Montce is seeing to the brand’s ability to be agile as a smaller, privately-held operation.
Across the spectrum of the brand’s operations, many changes were quickly deployed to adapt to the retail implications of COVID-19:
When it came to marketing, their paid advertising and remarketing budgets went up, as did their focus on the brand’s social media presence. Because Instagram is a huge source of customer acquisition for the brand, they leaned into their influencer and affiliate partnerships there, and rather than organizing in-person photo shoots, they let these partners take a DIY approach to content creation.
While its retail stores were closed, Montce repurposed these physical spaces into shipping and order fulfillment centers to handle the increased volume of online orders.
They also used this period of store closures to focus on operational improvements that had been put on the backburner: They implemented a new inventory system, and new roles were created for employees with a focus on marketing.
By quickly making these changes, Montce was able to capitalize on and leverage its existing audience of online shoppers that helped not only sustain the operation during a worldwide crisis, but actually grow its sales, too.
Looking ahead
COVID-19-driven change in the retail world is far from over, but in the case of Montce Swim, they’ll continue to move forward with future plans and a focus on blended retail experiences (both online and off) as their retail stores re-open.
“We’ll build on what works,” Grief said.
From Goldberg’s perspective in the retail industry, this is a smart approach.
“In the short term, the biggest win seems to be omni-channel retailers that have inventory in stores, close to consumers, but who can sell that product online and deliver it curbside,” he said.
|
959b2e2f7b8c5450ecf91113e73da26d | https://www.forbes.com/sites/kalevleetaru/2015/09/15/mapping-the-european-migration-crisis-through-google-searches/ | Google Trends Visualizes The European Migration Crisis | Google Trends Visualizes The European Migration Crisis
Migrants make their way through a field and a railway track after crossing the border between Serbia and Hungary in Roszke, southern Hungary, Sunday, Sept. 13, 2015. (AP Photo/Muhammed Muheisen)
As the migration crisis in the European Union captivates global headlines, what can web searches tell us about the world’s reaction to the crisis and which countries refugees are most interested in immigrating to? While web searches obviously capture the interests and beliefs of only a fraction of the world’s population, they are nonetheless a powerful real-time lens through which to understand global interests and concerns. Google alone accounts for almost 70% of all web searches worldwide and 90-97% of searches within Europe, so examining the most popular Google searches relating to migration could offer unique insights into the current crisis. In fact, the Google Trends team recently put together a series of charts, lists, and maps visualizing the European migration crisis through the eyes of the world’s Google searches, which the remainder of this article will explore.
Looking first at English-language searches over the last decade, worldwide Google searches for the English terms “migrant” and “refugee” decreased by nearly half from 2004 to August 2015, but have been on the rise since the week of August 17th. Starting September 1st, searches began to increase rapidly, peaking on September 3rd for “migrant” and September 4th for “refugee,” though both terms have steadily decreased in interest in the days since, despite news coverage of the crisis increasing substantially. Searches for “refugee” have steadily outpaced those for “migrant” by a factor of two over the last decade, reflecting that the former is the preferred term among English web searchers.
Timeline of Google search interest in "refugee" (red) vs "migrant" (blue) (Credit: Google Trends)
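For readers who want to explore a similar comparison themselves, a minimal sketch using the third-party pytrends library (an unofficial Google Trends client, and an assumption here rather than the Google Trends team's own tooling) can pull a comparable "migrant" versus "refugee" series; the timeframe below is likewise an illustrative assumption.

```python
# Minimal sketch using the unofficial pytrends library to compare
# "migrant" vs. "refugee" search interest over time. The timeframe and
# any resulting numbers are illustrative assumptions, not the Google
# Trends team's own figures.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["migrant", "refugee"],      # English-language terms compared above
    timeframe="2004-01-01 2015-09-15",   # assumed window covering "the last decade"
)
interest = pytrends.interest_over_time()  # DataFrame indexed by date, scaled 0-100

# Ratio of "refugee" to "migrant" interest, to sanity-check the "factor of two" observation
ratio = interest["refugee"].mean() / interest["migrant"].mean()
print(interest.tail())
print(f"refugee/migrant mean interest ratio: {ratio:.2f}")
```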
According to Google Trends, the top European countries searching for topics relating to the migration crisis since September 3rd are, in order, Macedonia, Portugal, Lithuania, Ireland, and the United Kingdom. Surprisingly, Germany, Hungary, Austria, Greece, and the other countries at the epicenter of the crisis are not among the top countries searching for information about the crisis, perhaps because they are experiencing it already firsthand.
Differences in the top searches about the crisis by country reflect its dramatically varied impact across the world. Germany, which is one of the countries at the epicenter of the crisis, has among its top searches “How to help with Syrian immigrants in Berlin,” “Where are refugees in Dusseldorf,” “Who is behind the wave of immigrants in Europe,” and “Labor market effects of migration.” The mixture of searches about how to help the immigrants and what their impact may be on domestic employment are telling clues of the societal divide being amplified by the crisis. Within Italy, another country hit hard by the crisis, top searches are “How to respond to immigrants in 2015,” “UNICEF offices in Italy,” and “Migration laws in Italy.” Once again, there is a mixture of questions about how to help the immigrants and questions about legality and how to address the influx.
Russian searches reflect its lack of direct exposure to the crisis, with questions like “What is migration”, “Why are migrants fleeing”, and “Why are migrants traveling to Europe?” Despite its geographic proximity to the crisis, United Kingdom searches are also curiously distant, asking “Why do immigrants come to Europe,” “How many migrants are European countries accepting,” “What is life like for a migrant in Europe,” and “What do Germans think of migrants?” French searchers ask “Why are there so many migrants,” “Why do Syrians leave their country,” and “Why are there an influx of migrants,” but also “How can I assist migrants?” Canadian searchers ask “Why are people migrating to Europe,” “European attitude towards migration in 2015” and “Who is behind the organized migration into Europe?” Japan appears the most interested in the response of other countries, asking “Australian stance on migration,” “Is Saudi Arabia accepting refugees,” “Germany migrant crisis,” and “Why do Middle Eastern people migrate to Europe,” but also “Are there Syrian refugees in Japan?”
In a feature published August 4th, the Google Trends team put together the following sequence of maps that assess the volume of Google searches from internet users within each country of the world about immigrating to the United States, United Kingdom, Canada, Germany, France, Italy, Russia, or Japan. The maps rank all countries from dark blue (high interest) to light blue (low interest) in terms of Google searches from that country about immigrating to the given country. There appear to be substantial differences in the countries interested in immigrating to each of the eight countries below, offering a fascinating view into the most desirable countries as seen through the eyes of different parts of the world. Of course, these maps are based exclusively on Google searches and so vary substantially in the demographics and proportion of the population they represent in each country, but nonetheless offer a fascinating way to assess global interests at scale.
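The maps themselves were produced by the Google Trends team, but a rough approximation of the underlying idea (ranking origin countries by their relative search interest in immigrating to a given destination) could be sketched with the same unofficial pytrends library. The query phrase below is an assumed stand-in, since Google has not published the exact topics behind each map.

```python
# Hedged sketch: rank countries by how heavily their users search about
# immigrating to a given destination, loosely mirroring the maps described
# above. The query phrase is an assumption, not Google's actual topic.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["immigrate to Canada"],     # assumed stand-in for the "Canada" map
    timeframe="2013-09-01 2015-09-01",   # assumed two-year window
)
by_country = pytrends.interest_by_region(resolution="COUNTRY")

# Highest-interest origin countries first, dropping countries with no signal
ranking = (
    by_country[by_country["immigrate to Canada"] > 0]
    .sort_values("immigrate to Canada", ascending=False)
)
print(ranking.head(20))
```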
United States
Surprisingly, interest in immigrating to the United States, at least as expressed in Google web searches over the last two years, appears concentrated in Central Africa and the Eastern half of the Middle East, with Afghanistan ranking #20 out of 211 and Iran #54. Eritrea ranks #2, while Canada (#84) and Mexico (#89) are nearly equal. There appears to be little interest in immigrating to the US from Europe or Asia or indeed much of South America.
United Kingdom
Searches for immigrating to the United Kingdom seem to be greatest from Asia, North and South Africa (but not Central Africa), and Australia. As with the United States, Guyana is the lone outlier in South America, at #13 out of 167. Surprisingly, there appears to be little interest from mainland Europe.
Canada
Interest in immigrating to Canada is greatest in Africa, the Middle East, Asia, and the Northern half of South America. Surprisingly, there appears to be little interest from Europe, despite its close ties with the United Kingdom and the presence of French language regions.
Germany
Germany appears to be most attractive to those from East, West, and South Africa (though not Central or North-Central Africa), the Middle East, and Eastern Europe.
France
Interest in immigrating to France appears greatest from West and Central Africa, the Eastern Middle East, and portions of Asia, as well as Central and Western Europe.
Italy
Italy attracts interest from a more scattered array of countries, including a diagonal slice through Africa, the Eastern half of the Middle East, North-East South America, and South-West Asia, as well as outliers like Mongolia (#8).
Russia
With Russia there is predictably strong interest from former Soviet Republics such as Estonia (#16), Latvia (#8), Kazakhstan (#10) and the other “stans” other than Turkmenistan (#102). Surprisingly, Afghanistan ranks #2, Iran #19, Democratic Republic of the Congo #3, Uganda #6, and Israel #17.
Japan
Interest in migrating to Japan is quite geographically diverse, with searches evenly spread across much of the world, especially the southern hemisphere. Europe shows far less interest in immigrating to Japan, though while Estonia is ranked last at 104 out of 104, Latvia is one of the highest in Europe, at #41, while Niger in Africa ranks #2. Mongolia is ranked #8, South Korea #55, and China last at #104 (though this may be affected by the paucity of Google searches within China).
What does all of this teach us about the migration crisis? Intuitively we might expect each country of the world to internalize the crisis in different ways based on how the crisis is affecting its citizens. For the first time, using massive volumes of real-time searches from across the world, we are actually able to visualize those differences, how they have changed over time, and how the world is reacting to a global crisis in real time.
|
90249b5cc09fe032532836ca1d9c7306 | https://www.forbes.com/sites/kalevleetaru/2015/09/20/bernie-rising-hillary-fading-how-bernie-sanders-is-winning-the-media-war/ | Bernie Rising, Hillary Fading: How Bernie Sanders Is Winning The Media War | Bernie Rising, Hillary Fading: How Bernie Sanders Is Winning The Media War
Bernie Sanders makes a fist while talking on stage during the New Hampshire Democratic Party State Convention on September 19, 2015 in Manchester, New Hampshire. (Photo by Scott Eisen/Getty Images)
While all eyes have been on Donald Trump’s media dominance and premature reports of his media demise, the story that hasn’t been getting much attention is Hillary’s fading media fortunes in the Democratic race. Much like Donald Trump’s, Bernie’s entrance fundamentally shifted the media dynamics of the field, capturing one third of all media coverage of Democratic candidates, a position he has largely held ever since, rising as high as 40% in recent weeks.
Percentage of national television coverage of the Democratic candidates that mentions each (Credit: Kalev Leetaru)
The timeline above shows the percentage of all mentions of any Democratic candidate from January 1, 2015 to present on national television networks Aljazeera America, Bloomberg, CNBC, CNN, Comedy Central, FOX Business, FOX News, LinkTV, and MSNBC that were of each candidate. For the first four months of this year, Hillary essentially enjoyed the Democratic field to herself, hovering around 90-100% of all mentions. April 30th, when Bernie Sanders officially declared his candidacy, changed all of that, as he suddenly rocketed to 34% of all coverage by the following day. Hillary regained ground over the following weeks until Sanders’ official kickoff rally in Vermont on May 26th; since then he has stayed between a quarter and a third of all Democratic candidate mentions. On June 19th, with his Las Vegas rally and Bill Maher appearance, Sanders came within striking distance of Clinton’s media monarchy, claiming 47% of all Democratic candidate mentions that day to Clinton’s 49%, and again on July 18th with the Democratic Party Hall of Fame Dinner in Cedar Rapids, Iowa, when he claimed 43% of mentions to Clinton’s 46%. (Though it should be noted that both of these dates represented days of lower-than-usual coverage of the Democratic slate.)
Over the last 30 days Sanders accounts for 24.4% of all Democratic mentions versus 2.3% for Martin O’Malley, 0.3% for Jim Webb, and 0.2% for Lincoln Chafee. Over the last 7 days Sanders has enjoyed a surge in coverage, reaching 35.9% of Democratic mentions, while Martin O’Malley has fallen to just 0.9%, Jim Webb to 0.2% and Lincoln Chafee to just 0.1%.
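As a rough illustration of the share-of-mentions arithmetic behind these percentages (not the author's actual pipeline against the Television News Archive), the sketch below takes hypothetical daily mention counts per candidate and converts them into each candidate's share of all Democratic-candidate mentions.

```python
# Illustrative share-of-mentions calculation: the counts below are made-up
# numbers standing in for daily mention counts derived from TV closed
# captions, not real data from the Internet Archive's TV News Archive.
import pandas as pd

daily_mentions = pd.DataFrame(
    {
        "Clinton": [310, 290, 405],
        "Sanders": [160, 175, 240],
        "O'Malley": [12, 9, 14],
        "Webb": [2, 1, 3],
        "Chafee": [1, 0, 2],
    },
    index=pd.to_datetime(["2015-09-17", "2015-09-18", "2015-09-19"]),
)

# Convert raw counts to each candidate's share of the day's total mentions
share = daily_mentions.div(daily_mentions.sum(axis=1), axis=0) * 100

print(share.round(1))
print(share.rolling(window=7, min_periods=1).mean().round(1))  # smoothed trend line
```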
More troubling for Clinton is the focus of the coverage she is getting. The first word cloud below shows the words most frequently mentioned within 10 seconds of her name in national television coverage on August 12th. Words like “server,” “mail,” “private,” “department,” “justice,” “email,” “handling,” and “trust” all feature prominently, focusing on her use of a private email server as Secretary of State. Words mentioned most frequently within 10 seconds of Bernie Sanders' name the same day are “poll,” “voters,” “clinton,” and “trump.” This suggests that coverage of Sanders tends to focus less on him as a standalone candidate and more on his role as a foil for Hillary Clinton and Donald Trump.
Words mentioned most frequently within 10 seconds of mentions of Hillary Clinton on national television August 12, 2015 (Credit: Kalev Leetaru)
Words mentioned most frequently within 10 seconds of mentions of Bernie Sanders on national television August 12, 2015 (Credit: Kalev Leetaru)
Fast forwarding a month to September 13th, “mail” and “server” are still among the words most closely associated with Clinton’s candidacy, suggesting she is having difficulty shaking the issue. On the other hand, Sanders’ word cloud shows coverage of the Senator refocusing to emphasize Sanders himself and his policies rather than treating him merely as a foil – the prominent mention of “poll” also reflects his continued rise in national polls.
Words mentioned most frequently within 10 seconds of mentions of Hillary Clinton on national television September 13, 2015 (Credit: Kalev Leetaru)
Words mentioned most frequently within 10 seconds of mentions of Bernie Sanders on national television September 13, 2015 (Credit: Kalev Leetaru)
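The word clouds rest on a simple co-occurrence rule: count the words that appear within 10 seconds of a mention of the candidate's name in closed-caption text. A minimal sketch of that windowing logic is below; the (seconds, word) token format is an assumed stand-in for however the caption data is actually structured, not the Television News Archive's real schema.

```python
# Minimal sketch of the 10-second co-occurrence windowing behind the word
# clouds: given caption tokens as (seconds_offset, word) pairs -- an assumed
# input format -- count the words appearing within 10 seconds of any mention
# of a target name.
from collections import Counter

def cooccurring_words(tokens, target="clinton", window=10.0,
                      stopwords=frozenset({"the", "a", "of", "and", "to"})):
    """Count words whose timestamp falls within `window` seconds of the target word."""
    target_times = [t for t, w in tokens if w.lower() == target]
    counts = Counter()
    for t, w in tokens:
        w = w.lower()
        if w == target or w in stopwords:
            continue
        if any(abs(t - tt) <= window for tt in target_times):
            counts[w] += 1
    return counts

# Toy caption fragment for illustration only
tokens = [(0.0, "Hillary"), (0.5, "Clinton"), (3.0, "private"), (4.0, "email"),
          (5.0, "server"), (40.0, "weather"), (41.0, "update")]
print(cooccurring_words(tokens).most_common(5))
```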
Stay tuned for more data-driven coverage of the 2016 campaign season!
I'd like to thank the Internet Archive's Television News Archive for the data used in the figures in this post.
|
b8cbddaabf8531fd44613a7315a11306 | https://www.forbes.com/sites/kalevleetaru/2015/10/26/mapping-the-global-spread-of-antitank-and-manpads-weapons-through-news-mining/ | Mapping The Global Spread Of Antitank And MANPADS Weapons Through News Mining | Mapping The Global Spread Of Antitank And MANPADS Weapons Through News Mining
A U.S. Marine shows New Iraqi Army soldiers how to use a TOW missile during a joint training for urban combat. (Photo by Marco Di Lauro/Getty Images)
As Russia waded into the Syrian conflict earlier this month, much of the discussion centered on the role that American-provided antitank weapons may have played in forcing Russia to send in military support to stabilize the failing Assad regime. What can massive analysis of the world’s news coverage tell us about the current state of antitank and man-portable air-defense systems (MANPADS) deployments around the world?
Drawing on my GDELT Project, which monitors, live-translates, and codes local news media thematically, emotionally, and by event in almost every country of the world in 65 languages, I compiled and mapped more than 54,000 articles published from February to October 2015 relating to antitank and MANPADS weapons. The map below, created using Google BigQuery and CartoDB, visualizes all of this coverage.
Click on the image below to launch the interactive clickable and zoomable version of the map. Each dot represents a location associated in one or more articles with either weapon system or its derivatives. Clicking on any of the dots will open a popup displaying the first 50 articles relating to such weapons with respect to that location, regardless of where the news outlet publishing the article is based. In other words, the map presents coverage about each location rather than a map of coverage from each location.
Map of locations mentioned in worldwide news media in 65 languages in context with discussion of antitank and MANPADS weapons February 19, 2015 to October 26, 2015 as monitored by the GDELT Project (Click on image to open interactive clickable/zoomable map) (Credit: Kalev Leetaru)
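For those curious how such a dataset can be assembled, here is a rough sketch of querying GDELT's Global Knowledge Graph in Google BigQuery from Python and pulling out mappable coordinates. The table name is GDELT's public BigQuery dataset, but the theme filters are illustrative assumptions (the exact GKG theme codes covering antitank and MANPADS coverage should be verified against the GDELT documentation), and the location-field parsing is heavily simplified.

```python
from google.cloud import bigquery

client = bigquery.Client()

# GDELT's public BigQuery dataset; the LIKE filters below are illustrative stand-ins
# for the actual GKG theme codes covering antitank and MANPADS weapons.
QUERY = """
SELECT DocumentIdentifier, V2Locations
FROM `gdelt-bq.gdeltv2.gkg`
WHERE (V2Themes LIKE '%MANPADS%' OR V2Themes LIKE '%ANTITANK%')
  AND DATE BETWEEN 20150219000000 AND 20151026235959
"""

points = []
for row in client.query(QUERY).result():
    if not row.V2Locations:
        continue
    # V2Locations is a ';'-delimited list of '#'-delimited location records;
    # this parsing is simplified and skips malformed entries.
    for loc in row.V2Locations.split(";"):
        fields = loc.split("#")
        if len(fields) > 6 and fields[5] and fields[6]:
            points.append((float(fields[5]), float(fields[6]), row.DocumentIdentifier))

# The (latitude, longitude, article URL) triples can then be exported as CSV for CartoDB.
print(len(points), "article-location pairs")
```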
Immediately clear is that the bulk of conversation about antitank and MANPADS systems in 2015 has centered on locations in South-Eastern Europe and the Middle East, especially Syria, Iraq, Ukraine, and the Turkish border region. Turkey has long been a popular smuggling route for weapons entering Syria, and Turkish news coverage is filled with reports of antitank weapons hidden in bushes, found in cars at checkpoints, or scattered among several tons of explosives in a truck. Syria and Iraq, of course, generate countless reports of antitank missiles being stockpiled, unavailable, destroyed, or turning the tide in battles.
Second perhaps only to those two countries, Ukraine is filled with mentions of antitank and MANPADS weapons, from Kyiv to Maiorske, clustered in the Eastern region that has seen heavy fighting. Antitank weapons are also appearing in the Gaza smuggling tunnels and as war remnants in the North Sinai area of Egypt. The South Caucasus region is home to significant mentions of the weapons, from Russian forces in Dagestan to left-over antitank mines in Abkhazia, to a business dispute solved with a grenade launcher in Makhachkala.
Along the Russian border, Estonia, Latvia, Lithuania, Poland, and the Czech Republic are all increasing their stockpiles of antitank weapons. In Greece new details emerge about an arms smuggler’s shipments of antitank weapons, while in Cyprus, mine sweepers attempt to defuse a decades-old legacy of thousands of landmines. Even Brazil makes several appearances, with a new manufacturing partnership with Azerbaijan to produce next-generation antitank missiles, and military exercises to test the new weapons. Not to be left out, China debuts its new helicopters optimized for antitank warfare, while a major Indian defense firm announces its own joint venture with an Israeli company to manufacture new missiles.
Keep in mind that machine translation is far from perfect, and that accurately identifying the text of each article, ignoring breaking news insets, handling complex site navigation and page layouts, and coping with the myriad other technical complexities of monitoring the world’s local news media mean there will always be a certain level of error in maps like the one above. You are therefore certain to find mistakenly included articles, but overall the map offers a powerful geographic lens onto what the world’s media is saying about each corner of the world.
In a single map it is clear that thus far in 2015 antitank and man-portable air-defense systems have been most heavily deployed in the wars in Syria, Iraq, and Ukraine, but also clear is the long legacy of such weapons in Europe. Eastern Europe’s wariness of a newly resurgent Russia can be seen in its stockpiling of antitank weapons, while halfway across the world even Brazil is acquiring and developing new generations of the missiles. Perhaps most striking about this map is that so much of the coverage is found only in local sources in local languages, not major English-language Western newspapers, meaning that to robustly track global arms deployment, development, and trafficking, one must turn to local information.
I would like to thank Google for the use of Google Cloud resources including BigQuery and I would also like to thank CartoDB for the use of their online mapping platform to create the interactive map.
|
ae73966618f4a40727eca4897da3b9e0 | https://www.forbes.com/sites/kalevleetaru/2015/11/25/why-its-so-important-to-understand-whats-in-our-web-archives/ | Why It's So Important To Understand What's In Our Web Archives | Why It's So Important To Understand What's In Our Web Archives
Man opening cabinet door in data center to reveal computing equipment inside. (FABRICE COFFRINI/AFP/Getty Images)
Last week I explored what precisely makes up the 20 year archive of the web held in the Internet Archive’s Wayback Machine. Several of those findings have spawned considerable discussion over the past week within the library and web archival communities about what it means to archive the web, how much documentation and metadata is enough, the tradeoffs in completeness vs reach, and how to better engage with the myriad constituencies served by web archives.
Why is it so important to understand what’s in our web archives? Perhaps the most important reason is that as an infinite and ever-changing landscape, it is simply impossible to archive the “entire internet” and perfectly preserve every change to every page in existence. Web archives are by their very nature an imperfect record of the web and constructing them is an exercise in countless tradeoffs of how to preserve an infinite stream with finite resources.
At the most basic level there is the question of how to seed the crawlers of an archive and what kinds of websites they should prioritize. Should an archive prioritize government content as a lens onto the information a government provides its citizens, educational websites as the output of the nation’s centers of learning, commercial websites as an indicator of commercial use of the web, or personal websites as a window onto civil society? Should an archive focus its efforts on preserving at least one copy of every page in existence to capture the breadth of the web, or should it focus on regular continued snapshots of a smaller set of pages over time to capture the evolution of the web?
To what degree should an archive attempt to preserve the content and experience of dynamic websites such as databases and search engines or interactive and personalized websites? When examining change over time, should only changes to the text of a page be archived or should any change to the template of a page or the selection of advertisements displayed on it count?
There is no single “right” answer to any of these questions. Each of the many constituencies of web archives have their own unique needs that can be at odds with each other. The newspaper division of a national library might be interested in preserving at least one copy of every article published by an online news website in the country. A political communications scholar, on the other hand, might want to track how government press releases are being modified over time or the evolution of a major political blog over many years. The former devotes all crawling activity to finding new links, while the latter requires precise continuous high-density snapshots over decades.
In terms of interface and metadata, an ordinary citizen user might simply want to look up the last available version of a page that is no longer accessible. A scholar, on the other hand, might want to understand why a particular site was crawled more frequently during a particular period and why some highly-linked inner pages of the site are absent from the archive. A lawyer might need to authenticate the precise moment that a page was captured and from where.
Understanding the decisions made by an archive’s crawlers is perhaps the most important obstacle to large-scale scholarly use of web archives. A researcher examining the evolution of digital humanities on the web, for instance, needs to understand whether the archive being examined had collection policies or crawler heuristics that might bias it away from crawling such websites, considerably skewing the underlying sample. Alternatively, a scholar studying online news needs to understand how the Archive handles cookie-mediated metered access and news paywalls and how it traverses news sites and whether those characteristics might skew it towards certain sections of the outlet.
Few web archives today provide such transparency into the operation of their collection policies and technologies. This is problematic in that many studies and publications using collections like the Wayback Machine make assumptions about characteristics like inclusion criteria and recrawl rates. In my own interactions with academic researchers, I have heard it frequently asserted that the Wayback Machine’s recrawl rate can be used as a direct measure of the update speed of a website. However, the Archive has never provided guidance on how they determine their recrawl rate and my findings of last week suggest the rate at which the Archive recrawls pages is not highly correlated to a page’s expected rate of content updates.
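The recrawl-rate question can at least be examined empirically. Below is a minimal sketch using the Wayback Machine's public CDX API to list capture timestamps for a page and compute the gaps between them; the target URL is an arbitrary placeholder, and, per the finding above, the resulting intervals should not be read as a measure of the page's own update rate.

```python
from datetime import datetime
import requests

CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def capture_gaps(url: str, year: str = "2015") -> list[float]:
    """Return the gaps, in days, between successive Wayback Machine captures of `url`."""
    params = {"url": url, "output": "json", "fl": "timestamp",
              "from": year, "to": year, "collapse": "digest"}
    rows = requests.get(CDX_ENDPOINT, params=params, timeout=30).json()
    # The first row of the JSON response is a header; the rest are one-element timestamp rows.
    stamps = [datetime.strptime(r[0], "%Y%m%d%H%M%S") for r in rows[1:]]
    return [(b - a).total_seconds() / 86400 for a, b in zip(stamps, stamps[1:])]

gaps = capture_gaps("example.com")
if gaps:
    print(f"{len(gaps) + 1} captures; median gap roughly {sorted(gaps)[len(gaps) // 2]:.1f} days")
else:
    print("Too few captures to measure.")
```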
At the same time, few scholars have the ability or expertise to study web archives at these scales. Like with other kinds of “big data” such as social media, most researchers tend to extract small samples of data from web archives to derive conclusions from. Such samples are often too small to reveal the kinds of macro-level biases that can only be observed when looking at the dataset as a whole. Without knowing what’s in our archives, we are simply stumbling through the dark.
Given that no web archive will ever be perfect, what is the purpose of studying the limitations and biases in today’s archives? Within the academic community there has been a growing discourse and unease in certain quarters about the lack of visibility into how the datasets we use have been constructed and how those decisions might bias the results we draw from them. For example, a recent paper questioned whether findings derived from the Google Ngrams collection may be heavily skewed towards scientific and medical literature, potentially biasing or invalidating certain results derived from the collection. Without spending the time to understand what makes up the web archives we use, we are doomed to repeat this same exercise with web research.
The web archives of today were never designed for high-accuracy stable-snapshot research on the evolution of the web, offering all the more reason to bolster our understanding of what is in them. Indeed, the “big data” era as a whole has come to be defined by the use of data in novel ways it was never designed for. However, doing so often entails breaking critical assumptions the builders of the datasets had in how they might be used and their expectations of its limitations and the impact of any potential biases. Moreover, certain kinds of biases won’t manifest themselves until datasets are used in certain novel ways, meaning that locating and addressing such biases is an ongoing process.
That is not to say that web archives should support large-scale web research using their data only after their nuances are known. Rather, it suggests that archives should make such analyses a priority, partnering with scholars who specialize in such at-scale data characterization, as few archives have in-house staff with the kind of exceptionally specialized skillsets and experience to tease out the subtle nuances of multi-petabyte datasets. It also means that they should similarly prioritize publishing available documentation on their collection policies and algorithms and emphasize to scholars the limitations of their tools and interfaces. This could include adjustments to their public interfaces to guide researchers toward proper use and assumptions.
For example, the Google Ngrams viewer does not provide an option to view the raw number of books mentioning a keyword by year – instead it reports only the percentage of digitized books from that year containing the keyword. This is to ensure that researchers unfamiliar with normalization do not draw false conclusions from the data. In other words, they designed their interface to ensure that researchers were forced to properly normalize their findings against variations in the universe of books digitized per year. Expert users can still access the raw data files, but with sufficient computing requirements that they are far more likely to understand how to properly normalize their findings.
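A toy illustration of the normalization concern, using made-up numbers: raw yearly counts rise simply because more books were digitized, while the normalized share tells a different story.

```python
# Hypothetical counts: books mentioning a keyword, and total digitized books, per year.
keyword_books = {1900: 120, 1950: 450, 2000: 4_800}
total_books   = {1900: 40_000, 1950: 150_000, 2000: 2_400_000}

for year in sorted(keyword_books):
    raw = keyword_books[year]
    share = 100.0 * raw / total_books[year]
    print(f"{year}: {raw:>5} raw mentions, {share:.2f}% of digitized books")

# The raw count grows 40-fold, yet the normalized share actually declines, which is
# precisely the distinction the Ngrams interface forces researchers to confront.
```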
Offering greater documentation is not an either/or proposition. Archives needn’t halt research using their collections until they have produced exhaustive documentation outlining every nuance of their systems. Just releasing basic statistics on their collections and macro-level collection policies would go a long way towards starting a process of greater transparency. Every archive will have errors and missing data and biases in the snapshot it offers of the web. This is not to say they cannot be used, but rather that those nuances must be better understood so they can be accommodated for and worked around.
Today, however, many archives are opaque black boxes that offer researchers little understanding of their inner workings. Essentially they are giant libraries with no index – you can request a book and if it exists you get it, but you can’t browse or search to know what’s in it and if something is missing you don’t know whether it was a simple technical glitch or whether the archive purposely minimizes its collection of that content.
In an ideal world, archives would provide metadata and internal replay logs documenting the complete operations of their crawler, ingest, processing, and storage infrastructures. For some websites it is crucial to know the IP address of the crawler requesting them and the “micro session” the request occurred within due to IP localization and cookie-mediated personalization or metering. Understanding how the processing layers transform a given page into outbound links for crawling can help pinpoint bugs, while flow data can help diagnose why a given site is being crawled more or less often.
Having personally written web crawling systems for nearly 20 years and overseen web-scale crawling infrastructures for just over 15 years, there are many techniques and approaches that can be used to capture and store such massive-scale instrumentation data extremely efficiently. Collecting such data has considerable operational benefits for the archive itself, helping it rapidly pinpoint evolving issues and better understand and address bugs so that they do not fester over years.
Yet, better transparency doesn’t necessarily entail exhaustive metadata and log files documenting every operation across the entire archive. It can be as simple as open sourcing the source code to the crawlers and orchestrating infrastructure. Open sourcing their crawlers and orchestration tools has the side benefit of allowing web archives to leverage the vast global community of developers and web technology experts who specialize in dealing with the myriad idiosyncrasies and nuances of working with the open web. Instead of their own staff having to add every feature and find every bug, open sourcing its tools allows an archive to leverage the latest algorithms and approaches to dealing with dynamic websites, client-side rendering, page extraction, ill-formed HTML and character encoding, crawling strategies, and the like.
The Internet Archive has been a shining example in this regard, making large portions of its underlying infrastructure available via GitHub for others to build upon and improve. Not all of its tools have been publicly released and there is still scant documentation on how these tools are blended together within the Archive itself, the ingest streams that populate them, and the specific configuration tweaks used by the Archive that would help explain some of the nuances of their holdings. But, by releasing its source code publicly, the Archive has built an open infrastructure that allows others to build upon its work and contribute their own expertise.
Perhaps moving forward, through stronger outreach with the developer and computer science communities, and partnerships with web conferences and developer competitions, the Archive may be able to build an even greater network of forward-looking developers to help it constantly evolve its tools to the ever-changing web landscape.
Indeed, in my presentation at the 2010 Library of Congress summit “Citizen Journalists and Community News: Archiving for Today and Tomorrow” I outlined a vision of web archives working more closely with their communities, with scholars, and with content platforms like WordPress to create strategic data feeds for archival, while also pointing out many of the limitations of current archival practices with respect to research access. In my opening keynote address to the 2012 International Internet Preservation Consortium at the Library of Congress, I again outlined my own experiences and perspectives as a researcher making use of web archives over many years and the kinds of insight and indicators needed for robust research use. A number of these suggestions were subsequently adopted by the Archive in the form of interface and other changes, offering a case example of what happens when researchers and web archives come together.
In the end, the Internet Archive and its brethren are all we have standing between us and the total and complete loss of our online heritage. They are the only open archives of the dawn of the internet era and as imperfect as they are, they are preserving our collective global heritage in a way that no other organization does, and doing it as a public good without profit or other motivation. There will never be perfect data, certainly not when it comes to archiving the infinite ever-changing landscape of the web, but that doesn’t mean that we cannot come together as a community to help fix the rough edges and try to better understand what the nuances and biases of our collections are so that we can address them. It also doesn’t mean that we can’t make more collaborative decisions that bring together archives and the communities they serve to think about the myriad decisions and tradeoffs that define and shape our archives.
The web is disappearing page by page, character by character, image by image, before our very eyes, even as you read the words on this page. Only by coming together as a community can we ensure the preservation and access of our digital history to future generations.
|
a9bfbfc172cfbbbea8fa2bd4b4a570ea | https://www.forbes.com/sites/kalevleetaru/2016/01/03/the-high-costs-of-hosting-sciences-big-data-the-commercial-cloud-to-the-rescue/ | The High Costs Of Hosting Science's Big Data: The Commercial Cloud To The Rescue? | The High Costs Of Hosting Science's Big Data: The Commercial Cloud To The Rescue?
A display of globes showing world datasets at the Big Bang Data exhibition at Somerset House highlighting the data explosion that’s radically transforming our lives. (Peter Macdiarmid/Getty Images for Somerset House)
Science Magazine’s first issue of 2016 includes a discussion chronicling how the National Institutes of Health (NIH) is re-exploring how it manages funding for the many biomedical database products it supports. In particular, the NIH National Human Genome Research Institute (NHGRI) is expected to close out its funding of the Online Mendelian Inheritance in Man (OMIM) database, one of the oldest genomic databases that has run continuously for 50 years. What does this mean for the future of scientific big data hosting?
Today the NIH spends more than $110 million a year on its largest 50 databases, excluding those hosted by the National Library of Medicine (NLM). OMIM, supported by NHGRI, costs $2.1 million a year and draws more than 300,000 unique users a month and 23 million page views a year, while the Gene Ontology Consortium draws 36,000 users a month at a cost of $3.7 million a year. Databases like OMIM in particular have become critical standard reference databases used in both research and clinical diagnosis, raising key questions about how to support such heavily-used resources. One recommendation has been to convert them into paid subscription services, the model adopted for The Arabidopsis Information Resource (TAIR) after NSF ended its funding.
Much has been written about the big data explosion in the biomedical world, especially genomics, which is expected to yield as much as 40 exabytes of data by 2025, outpacing even YouTube’s storage requirements. Why, one might ask, does it really matter then whether a small handful of databases have to switch from subsidized free access to a cost-recovery model, in a world where biomedical data is on a path to consume what some estimate may be 20 times the needs of YouTube in just the next decade?
The answer lies in the question of how we manage the firehose of data emerging from academic research more broadly. Beginning in 2011, the National Science Foundation has required all grantees to “share with other researchers, at no more than incremental cost and within a reasonable time, the primary data, samples, physical collections and other supporting materials created or gathered in the course of work under NSF grants” and to “encourage and facilitate such sharing.” Awards from some NSF directorates require data to be preserved and made widely available for a minimum of three years after publication and in some cases for substantially longer. This raises the question of who pays for this data archiving and sharing.
While an increasing number of academic institutions offer centralized institutional repository systems, few are designed for or capable of handling the kinds of massive multi-terabyte datasets being generated by the new era of “big data” research. In fact, in my own personal experience of using NSF-supported supercomputing resources over more than a decade, storage was the single most difficult resource to secure. Thousands or tens of thousands of processors could be secured readily and with minimal delay, but requesting the equivalent of just a few terabytes of disk was an enormous undertaking, and making such content globally available to other researchers over high-speed networks was extraordinarily difficult.
Much of today’s academic High Performance Computing (HPC) infrastructure was built for the era of computation-intensive scientific simulation and modeling, which placed an emphasis on computational capacity, rather than storage, IO capabilities, and commodity network access. While this is slowly changing, storage is still one of the most precious resources in the academic environment and the hardware and software environments are rarely optimized for the kinds of “big data” needs of the new era of research.
On the other hand, the commercial cloud vendors like Google Cloud Platform, Amazon Web Services, and their numerous brethren, are custom designed for the “big data” era. Single datasets ranging into the multiple petabytes or with tens of trillions of records can be analyzed in near-realtime with systems specially built for this class of research.
Economies of scale mean the companies are able to offer environments and pricing difficult to match in the academic environment, with datasets mirrored across the world and with direct connections to internet backbones. The latest Google Cloud Storage pricing lists a cost of just $26 for one terabyte per month, while Amazon’s S3 platform is around $30/month, both of which include the full costs of RAID redundancy, power, cooling, facilities space, backups, hardware maintenance, and 24/7 system administration by a dedicated team of engineers.
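To put those list prices in context, a quick back-of-the-envelope calculation using the per-terabyte figures quoted above (which are as of this writing and will change over time):

```python
def monthly_storage_cost(terabytes: float, usd_per_tb_month: float) -> float:
    """Flat-rate storage cost only; ignores egress, request charges, and discount tiers."""
    return terabytes * usd_per_tb_month

# Per-terabyte list prices quoted in the text at the time of writing.
for provider, rate in [("Google Cloud Storage", 26.0), ("Amazon S3", 30.0)]:
    for tb in (1, 100, 1_000):
        print(f"{provider}: {tb:>5} TB costs roughly ${monthly_storage_cost(tb, rate):>9,.0f} per month")
```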
When it comes to analyzing all that data, researchers can instantly spin up dedicated clusters tailored and purpose-built for a specific task, using them only for the exact time needed, and performing analyses on-demand, rather than waiting in a traditional batch queue for hours or even days at a time. Datasets can be shared with the world using the same internet backbone connections that power Google and Amazon, allowing datasets to be streamed in realtime at nearly linear scaling, anywhere in the world.
Moreover, both Google and Amazon offer specialized services for genomics research, with a complete human genome costing just $3-5 a month to host. In fact, in a nod to the tremendous potential of the commercial cloud for biomedical research, NIH recently collaborated with both Google and Amazon to house copies of the 1000 Genomes Project in their respective clouds free of charge. Commercial clouds are vastly more secure than most university computing networks, carry the necessary certifications like HIPAA, and offer nearly limitless scaling. As the Alzheimer's Disease Sequencing Project leader put it, “On the local university server it might take months to run a computationally-intense [analysis] … On Amazon it's, 'how fast do you need it done?', and they do it.”
When it comes to long-term preservation and ensuring that these datasets remain available years and decades into the future, one could imagine a role for the Internet Archive, which today preserves more than 20 petabytes of the web, television, books, music, imagery, and software, having archived and preserved for posterity the open web for almost two decades.
In the end, as the academic enterprise moves towards a future ever more entwined with the world of big data, it faces new challenges in supporting the contemporary needs and long term preservation of data intensive research and offers a powerful new application area for the cloud.
|
daf8b70ff805d5ccd0a3750517cf2f80 | https://www.forbes.com/sites/kalevleetaru/2016/01/17/policing-meets-big-data-a-lesson-in-sentiment-mining-data-recency-and-dashboards/ | Policing Meets Big Data: A Lesson In Sentiment Mining, Data Recency And Dashboards | Policing Meets Big Data: A Lesson In Sentiment Mining, Data Recency And Dashboards
Fresno Police Chief Jerry Dyer speaks during a news conference in 2010. (AP Photo/Paul Sakuma)
The Washington Post carried a story earlier this week about a new experiment in data-driven policing being run by the Fresno Police Department. While algorithmic and predictive policing has become a hot topic of late, the Post article touched on three areas of particular relevance to the big data world: data recency, the burgeoning use of sentiment mining and dashboard/interface design.
Almost 90% of all local police departments in the US now employ some form of surveillance technology from “cameras and automated license plate readers [to] handheld biometric scanners, social media monitoring software, devices that collect cellphone data and drones.” Yet, with this much data pouring into central archives that pool it all together, police departments are running up against traditional warehouse issues like data recency.
The Post article describes how when police ran local councilman Clinton J. Olivier through the software at his request, the system estimated his threat level as green, but his residence was scored as a yellow. Police suggested the score was likely caused by a previous resident who had lived at the address at some point in the past and may have had a police interaction or record. Yet, as Olivier pointed out “even though it’s not me that’s the yellow guy, your officers are going to treat whoever comes out of that house in his boxer shorts as the yellow guy.”
In other words, if police are ever dispatched to an incident at his address, they won’t know that the person coming out of the front door is Olivier and therefore has a green score, they will only know that the address has a yellow score and associate whoever walks out of the door as a potential threat.
There is not enough detail in the article to understand why Olivier’s address was given a yellow score and the company apparently declined to comment on the specific data points that triggered the score, but one likely reason is data recency. If, as the police suggested to Olivier, the score was due to a past resident of his house, it is likely that the database was not properly merging in real estate sale data and USPS change of address data or other feeds that would denote a change in residence.
As with any warehouse, data from different sources often arrives at different rates and with different data quality issues that can affect the ability to merge it together. In this case it is possible that the database has historical arrest and dispatch records, but not the necessary change of address records to recognize that the subject of the previous police interactions at that address no longer resides there. Alternatively, it is also possible that the database is aware that the previous subject has moved, but is designed to still flag the address for a period of time to provide situational awareness for officers in case the person returns for some reason.
The challenge is that by resolving all of this information down to a single red/yellow/green score, all of the data points, interpretations and decisions that led up to the score are lost. It is therefore impossible to know whether an incorrect score is due to an out-of-date database, whether it was by design, or whether it is due to other past adverse information available about the address.
Yet, it is the tool’s use of sentiment analysis of social media posts that poses perhaps the greatest risk of potentially dangerous errors. As the company puts it “to the extent that there is information that is in the public domain, regardless of where the input was derived, it could potentially be surfaced” and the company specifically touts its ability to scour social media as part of its risk scoring system.
However, the local ABC affiliate in Fresno reported on “a Fresno woman whose score went up for posting on Twitter about her card game that happens to have ‘rage’ in its title.” Sentiment mining is an incredibly complex and nuanced field, with the majority of current systems coming from the computer science world, rather than the field’s roots in psychology and communications. As I chronicled for Wired in 2014, laying out the state of the field and the major stumbling blocks of most commercial and research systems, assessing emotional tenor from text is an extremely difficult and error-prone task and the majority of current systems have very significant limitations to their accuracy.
Sarcasm and idiomatic expressions are particularly troublesome, even for human analysts. In 2014 the Secret Service actually put out a call for proposals specifically highlighting the requirement that proposed systems correctly handle sarcastic statements, while the FBI maintained at the time a detailed social media guide that included definitions of more than 3,000 acronyms to help its analysts make sense of online conversation.
In 2012 two British tourists were detained for 12 hours and interrogated by Homeland Security counter-terrorism officials and ultimately denied entry to the US over two tweets, one of which referenced “destroy[ing] America” and the other that he would be “diggin’ Marilyn Monroe up.” It turned out that the DHS analysts who saw the tweets were unfamiliar with pop culture and did not recognize the British colloquial use of “destroy” as a euphemism for partying or that digging up Marilyn Monroe was a quote from an episode of the American comedy show Family Guy. In fact, the DHS paperwork states “Mr. Bryan confirmed that he had posted on his Tweeter [sic] website account that he was coming to the United States to dig up the grave of Marilyn Monroe. Also … that he was coming to destroy America.”
Social media is also rife with exasperated commentary posted in moments of anger. A British traveler in 2010 became the first Briton convicted of a crime on Twitter when, in a fit of anger after having his flight canceled tweeted that he would be “blowing the airport sky high!” While the courts agreed that given his background and lack of any preparative activity, it was highly unlikely that he actually intended to carry out that threat, it was nevertheless a criminal act.
If human intelligence analysts are unable to recognize sarcasm and distinguish between a British slang term for partying and an actual threat to attack the United States or recognize a line from an American comedy show, and when people routinely tweet very realistic threats in moments of anger, what hope do machines have and how should those be factored into social media-based threat scores?
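A deliberately naive sketch makes the failure mode concrete: a keyword-lexicon scorer produces exactly the kind of false positive described above, and cannot tell an innocuous post or a pop-culture joke from a genuine threat. The word list and weights below are invented purely for illustration; real scoring systems are proprietary, though many reduce at some stage to weighted keyword matching of this sort.

```python
import re

# An invented keyword lexicon with arbitrary weights, for illustration only.
THREAT_WORDS = {"rage": 2, "shoot": 5, "destroy": 4, "blow": 4}

def naive_threat_score(post: str) -> int:
    """Sum lexicon weights for matching words, with no sense of context, sarcasm, or idiom."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return sum(weight for word, weight in THREAT_WORDS.items() if word in words)

# An innocuous post about a card game and a sarcastic pop-culture reference both get flagged,
# and the scorer has no way to distinguish either from a genuine threat.
print(naive_threat_score("game night! who's up for a round of rage, the card game?"))
print(naive_threat_score("off to america to destroy it and dig up marilyn monroe"))
```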
As the Secret Service RFP reflects, sentiment analysis, whether powered by humans or machines, has great difficulty understanding intent at the level of an individual when it comes to potential future violence. Few contemporary computer algorithms would have recognized “they take 1 of ours, let’s take 2 of theirs” or “putting wings on pigs” as statements of imminent violence against a police officer, while the hashtag “#shootthepolice” would likely also have been missed by many platforms due to the way in which hashtags have been treated by most sentiment analysis tools.
Putting all of this together, the challenges raised in the Post article and these other cases come not from the data itself, but rather from how that data is being interpreted. When black box algorithms synthesize large amounts of highly diverse data and incorporate very complex and nuanced algorithms like sentiment analysis, but output only a simple score like red/yellow/green, it masks all of that complexity and could lead to an officer feeling overly confident about a risk assessment. Instead, a better interface might be to display to the officer a handful of the data points judged by the algorithm to be most significant, such as the tweets or arrest records in question. The human officer would likely instantly recognize that a “rage” tweet about a card game is a false positive, while an outdated arrest record for a property that was just sold would suggest to an officer that the risk flag on a given house is no longer relevant. Moreover, by giving officers in the field, who are most familiar with local terminology, street slang and gang code of their area, direct access to information, they can likely make better determinations of risk than a generic algorithm applying blind generalized probabilities.
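The alternative interface suggested here is straightforward to express in code: instead of collapsing everything into one color, return the few highest-weighted pieces of evidence so the officer can judge them directly. This is a hedged sketch with an invented evidence structure and made-up weights, not a description of the vendor's system.

```python
from typing import NamedTuple

class Evidence(NamedTuple):
    source: str    # e.g. "tweet", "arrest record", "dispatch log" (illustrative categories)
    detail: str
    weight: float  # the model's estimated contribution to overall risk
    as_of: str     # recency matters: stale records should be visibly stale

def top_evidence(items: list[Evidence], k: int = 3) -> list[Evidence]:
    """Return the k highest-weighted data points instead of a single red/yellow/green score."""
    return sorted(items, key=lambda e: e.weight, reverse=True)[:k]

address_record = [
    Evidence("tweet", "mentions 'rage' (title of a card game)", 0.9, "2016-01-10"),
    Evidence("arrest record", "prior resident of the address, not current occupant", 2.1, "2009-06-02"),
    Evidence("dispatch log", "noise complaint", 0.4, "2015-11-30"),
]

for e in top_evidence(address_record):
    print(f"[{e.as_of}] {e.source}: {e.detail} (weight {e.weight})")
```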
In the end, the issues faced by Fresno’s data-driven policing appear to be a classic issue of user interface design and a dashboard modality that masks the uncertainty of the underlying data. By eliminating the black box scoring component and instead providing officers a concise actionable summary of key data points identified by the software, the database’s realtime indicators can be coupled with the street knowledge and human intelligence of officers to yield the kind of intelligent policing envisioned by the Fresno Police Department with a much lower risk of false positives. Indeed, it appears that is precisely what the city intends to do.
|
a6934afa5a54378891394e3f1016f7f9 | https://www.forbes.com/sites/kalevleetaru/2016/01/18/the-internet-archive-turns-20-a-behind-the-scenes-look-at-archiving-the-web/ | The Internet Archive Turns 20: A Behind The Scenes Look At Archiving The Web | The Internet Archive Turns 20: A Behind The Scenes Look At Archiving The Web
Internet Archive founder Brewster Kahle and some of the Archive's servers in 2006. (AP Photo/Ben Margot)
To most of the web surfing public, the Internet Archive’s Wayback Machine is the face of the Archive’s web archiving activities. Via a simple interface, anyone can type in a URL and see how it has changed over the last 20 years. Yet, behind that simple search box lies an exquisitely complex assemblage of datasets and partners that make possible the Archive’s vast repository of the web. How does the Archive really work, what does its crawl workflow look like, how does it handle issues like robots.txt, and what can all of this teach us about the future of web archiving?
Perhaps the first and most important detail to understand about the Internet Archive’s web crawling activities is that it operates far more like a traditional library archive than a modern commercial search engine. Most large web crawling operations today operate vast farms of standardized crawlers all operating in unison, sharing a common set of rules and behaviors. They traditionally operate in continuous crawling mode, in which the goal is to scour the web 24/7/365 and attempt to identify and ingest every available URL.
In contrast, the Internet Archive is comprised of a myriad independent datasets, feeds and crawls, each of which has very different characteristics and rules governing its construction, with some run by the Archive and others by its many partners and contributors. In the place of a single standardized continuous crawl with stable criteria and algorithms, there is a vibrant collage of inputs that all feed into the Archive’s sum holdings. As Mark Graham, Director of the Wayback Machine put in an email, the Internet Archive’s web materials are comprised of “many different collections driven by many organizations that have different approaches to crawling.” At the time of this writing, the primary web holdings of the Archive total more than 4.1 million items across 7,357 distinct collections, while its Archive-It program has over 440 partner organizations overseeing specific targeted collections. Contributors range from middle school students in Battle Ground, WA to the National Library of France.
Those 4.1 million items comprise a treasure trove covering nearly every imaginable topic and data type. There are crawls contributed by the Sloan Foundation and Alexa, crawls run by IA on behalf of NARA and the Internet Memory Foundation, mirrors of Common Crawl and even DNS inventories containing more than 2.5 billion records from 2013. Many specialty archives preserve the final snapshots of now-defunct online communities like GeoCities and Wretch. Dedicated Archive-It crawls preserve myriad hand-selected or sponsored websites on an ongoing basis such as the Wake Forest University Archives. These dedicated Archive-It crawls can be accessed directly and in some cases appear to feed into the Wayback Machine, accounting for why the Wake Forest site is captured almost every Thursday and Friday over the last two years like clockwork.
Alexa Internet has been a major source of the Archive’s regular crawl data since 1996, with the Archive’s FAQ page stating “much of our archived web data comes from our own crawls or from Alexa Internet's crawls … Internet Archive's crawls tend to find sites that are well linked from other sites … Alexa Internet uses its own methods to discover sites to crawl. It may be helpful to install the free Alexa toolbar and visit the site you want crawled to make sure they know about it.”
Another prominent source is the Archive’s “Worldwide Web Crawls,” which are described as “Since September 10th, 2010, the Internet Archive has been running Worldwide Web Crawls of the global web, capturing web elements, pages, sites and parts of sites. Each Worldwide Web Crawl was initiated from one or more lists of URLs that are known as ‘Seed Lists’ … various rules are also applied to the logic of each crawl. Those rules define things like the depth the crawler will try to reach for each host (website) it finds.” With respect to how frequently the Archive crawls each site, the only available insight is “For the most part a given host will only be captured once per Worldwide Web Crawl, however it might be captured more frequently (e.g. once per hour for various news sites) via other crawls.”
The most recent crawl appears to be Wide Crawl Number 13, created on January 9, 2015 and running through present. Few details are available regarding the crawls, though the March 2011 crawl (Wide 2) states it ran from March 9, 2011 to December 23, 2011, capturing 2.7 billion snapshots of 2.3 billion unique URLs from a total of 29 million unique websites. The documentation notes that it used the Alexa Top 1 Million ranking as its seed list and excluded sites with robots.txt directives. As a warning for researchers, the collection notes “We also included repeated crawls of some Argentinian government sites, so looking at results by country will be somewhat skewed.”
Augmenting these efforts, the Archive’s No More 404 program provides live feeds from the GDELT Project, Wikipedia and WordPress. The GDELT Project provides a daily list of all URLs of online news coverage it monitors around the world, which the Archive then crawls and archives, vastly expanding the Archive’s reach into the non-Western world. The Wikipedia feed monitors the “[W]ikipedia IRC channel for updated article[s], extracts newly added citations, and feed[s] those URLs for crawling,” while the WordPress feed scans “WordPress's official blog update stream, and schedules each permalink URL of new post for crawling.” These greatly expand the Archive’s holdings of news and other material relating to current events.
Some crawls are designed to make a single one-time capture to ensure that at least one copy of everything on a given site is preserved, while others are designed to intensively recrawl a small subset of hand-selected sites on a regular interval to ensure both that new content is found and that all previously-identified content is checked for any changes and freshly archived. In terms of how frequently the Archive recrawls a given site Mr. Graham wrote that “it is a function of the hows, whats and whys of our crawls. The Internet Archive does not crawl all sites equally nor is our crawl frequency strictly a function of how popular a site is.” He goes on to caution “I would expect any researcher would be remiss to not take the fluid nature of the web, and the crawls of the [Internet Archive], into consideration” with respect to interpreting the highly variable nature of the Archive’s recrawl rate.
Though it acts as the general public’s primary gateway to the Archive’s web materials, the Wayback Machine is merely a public interface to a limited subset of all these holdings. Only a portion of what the Archive crawls or receives from external organizations and partners is made available in the Wayback Machine, though as Mr. Graham noted there is at present “no master flowchart of the source of captures that are available via the Wayback Machine” so it is difficult to know what percent of the holdings above can be found through the Wayback Machine’s public interface. Moreover, large portions of the Archive’s holdings carry notices that access to them is restricted, often due to embargos, license agreements, or other processes and policies of the Archive.
In this way, the Archive is essentially a massive global collage of crawls and datasets, some conducted by the Archive itself, others contributed by partners. Some focus on the open web, some focus on the foundations of the web’s infrastructure, and others focus on very narrow slices of the web as defined by contributing sponsors or Archive staff. Some are obtained through donations, some through targeted acquisitions, and others compiled by the Archive itself, much in the way a traditional paper archive operates. Indeed, the Archive is even more similar to traditional archives in its use of a dark archive in which only a portion of its holdings are publicly accessible, with the rest having various access restrictions and documentation ranging from detailed descriptions to simple item placeholders.
This is in marked contrast to the description that is often portrayed of the Archive by outsiders as a traditional centralized continuous crawl infrastructure, with a large farm of standardized crawlers ingesting the open web and feeding the Wayback Machine akin to what a traditional commercial search engine might do. The Archive has essentially taken the traditional model of a library archive and brought it into the digital era, rather than take the model of a search engine and add a preservation component to it.
There are likely many reasons for this architectural decision. It is certainly not the difficulty of building such systems – there are numerous open source infrastructures and technologies that make it highly tractable to build continuous web-scale crawlers given the amount of hardware available to the Archive. Indeed, I myself have been building global web scale crawling systems since 1995 and while still a senior in high school in 2000 launched a whole-of-web continuous crawling system with sideband recrawlers and an array of realtime content analysis and web mining algorithms running at the NSF-supported supercomputing center NCSA.
Why then has the Archive employed such a patchwork approach to web archival, rather than the established centralized and standardized model of its commercial peers? Part of this may go back to the Archive’s roots. When the Internet Archive was first formed Alexa Internet was the primary source of its collections, donating its daily open crawl data. The Archive therefore had little need to run its own whole-of-web crawls, since it had a large commercial partner providing it such a feed. It could instead focus on supplementing that general feed with specialized crawls focusing on particular verticals and partner with other crawling organizations to mirror their archives.
From the chronology of datasets that make up its web holdings, the Archive appears to have evolved in this way as a central repository and custodian of web data, taking on the role of archivist and curator, rather than trying to build its own centralized continuous crawl of the entire web. Over time it appears to have taken on an ever-expanding collection role of its own, running its own general purpose web-scale crawls and bolstering them with a rapidly growing assortment of specialized crawls.
With all of this data pouring in from across the world, a key question is how the Internet Archive deals with exclusions, especially the ubiquitous “robots.txt” crawler exclusion protocol.
The Internet Archive’s Archive-It program appears to strictly enforce robots.txt files, requiring special permission for a given crawl to ignore them: “By default, the Archive-It crawler honors and respects all robots.txt exclusion requests. On a case by case basis institutions can set up rules to ignore robots.txt blocks for specific sites, but this is not available in Archive-It accounts by default. If you think you may need to ignore robots.txt for a site, please contact the Archive-It team for more information or to enable this feature for your account.”
In contrast, the Library of Congress uses a strict opt-in process and “notifies each site that we would like to include in the archive (with the exception of government websites), prior to archiving. In some cases, the e-mail asks permission to archive or to provide off-site access to researchers.” The Library uses the Internet Archive to perform its crawling and ignores robots.txt for those crawls: “The Library of Congress has contracted with the Internet Archive to collect content from websites at regular intervals … the Internet Archive uses the Heritrix crawler to collect websites on behalf of the Library of Congress. Our crawler is instructed to bypass robots.txt in order to obtain the most complete and accurate representation of websites such as yours.” In this case, the Library views the written archival permission as taking precedence over robots.txt directives: “The Library notifies site owners before crawling which means we generally ignore robots.txt exclusions.”
The British Library appears to ignore robots.txt in order to preserve page rendering elements and for selected content deemed culturally important, stating “Do you respect robots.txt? As a rule, yes: we do follow the robots exclusion protocol. However, in certain circumstances we may choose to overrule robots.txt. For instance: if content is necessary to render a page (e.g. Javascript, CSS) or content is deemed of curatorial value and falls within the bounds of the Legal Deposit Libraries Act 2003.”
Similarly, the National Library of France states “In accordance with the Heritage Code (art L132-2-1), the BnF is authorized to disregard the robot exclusion protocol, also called robots.txt. … To accomplish its legal deposit mission, the BnF can choose to collect some of the files covered by robots.txt when they are needed to reconstruct the original form of the website (particularly in the case of image or style sheet files). This non-compliance with robots.txt does not conflict with the protection of private correspondence guaranteed by law, because all data made available on the Internet are considered to be public, whether they are or are not filtered by robots.txt.”
The Internet Archive’s general approach to handling robots.txt exclusions on the open web appears to have evolved over time. The first available snapshot of the Archive’s FAQ, dating to October 4, 2002, states “The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents of persons who do not want their materials in the collection. By placing a simple robots.txt file on your Web server, you can exclude your site from being crawled as well as exclude any historical pages from the Wayback Machine.” This statement is preserved without modification for the next decade, through at least April 2nd, 2013. A few weeks later on April 20th, 2013, the text had been rewritten to state “You can exclude your site from display in the Wayback Machine by placing a simple robots.txt file on your Web server.” The new language removed the statement “you can exclude your site from being crawled” and replaced it with “you can exclude your site from display.” Indeed, this new language has carried through to present.
From its very first snapshot of October 4, 2002 through sometime the week of November 8th, 2015 the FAQ further stated “Alexa Internet, the company that crawls the web for the Internet Archive, does respect robots.txt instructions, and even does so retroactively. If a web site owner decides he / she prefers not to have a web crawler visiting his / her files and sets up robots.txt on the site, the Alexa crawlers will stop visiting those files and will make unavailable all files previously gathered from that site. This means that sometimes, while using the Internet Archive Wayback Machine, you may find a site that is unavailable due to robots.txt.”
Yet, just a few days later on November 14th, 2015 the FAQ had been revised to state only “Such sites may have been excluded from the Wayback Machine due to a robots.txt file on the site or at a site owner’s direct request. The Internet Archive strives to follow the Oakland Archive Policy for Managing Removal Requests And Preserving Archival Integrity.” The current FAQ points to an archived copy of the Oakland Archive Policy from December 2002 that states “To remove a site from the Wayback Machine, place a robots.txt file at the top level of your site … It will tell the Internet Archive's crawler not to crawl your site in the future” and notes that “ia_archiver” is the proper user agent to exclude the Archive’s crawlers from accessing a site.
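Checking whether a site's robots.txt would exclude the Archive's "ia_archiver" user agent is simple with the Python standard library. The target site below is an arbitrary placeholder, and, as discussed above, exclusion from display in the Wayback Machine no longer necessarily means exclusion from crawling.

```python
from urllib.robotparser import RobotFileParser

def blocks_ia_archiver(site: str, path: str = "/") -> bool:
    """Return True if the site's robots.txt disallows the 'ia_archiver' user agent for `path`."""
    parser = RobotFileParser()
    parser.set_url(f"https://{site}/robots.txt")
    parser.read()
    return not parser.can_fetch("ia_archiver", f"https://{site}{path}")

# The site below is an arbitrary placeholder.
print(blocks_ia_archiver("example.com"))
```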
The Archive’s evolving stance with respect to robots.txt files appears to explain why attempting to access the Washington Post through the Wayback Machine yields an error that it has been blocked due to robots.txt, yet the site is being crawled and preserved by the Internet Archive every few days over the last four years. Similarly, accessing USA Today or the Bangkok Post through the Wayback Machine yields the error message “This URL has been excluded from the Wayback Machine,” but happily both sites are being preserved through regular snapshots. Here the robots.txt exclusion appears to be used only to govern display in the Wayback Machine’s public interface, with excluded sites continuing to be crawled and preserved in Archive’s dark archive for posterity to ensure they are not lost.
Despite having several programs dedicated to crawling online news, including both International News Crawls and a special “high-value news sites” collection, not all news sites are equally represented in the Archive’s stand-alone archives, whether or not they have robots.txt exclusions. The Washington Post has over 303 snapshots in its archive, while the New York Times has 124 and the Daily Mail has 196. Yet, Der Spiegel has just 34 captures in its stand-alone archive from 2012 to 2014, with none since. Just two of the five national newspapers of Japan have such archives, Asahi Shimbun (just 64 snapshots since 2012), Nihon Keizai Shimbun (just 22 snapshots since 2012), while the other three have no such archives: Mainichi Shimbun, Sankei Shimbun, and Yomiuri Shimbun. In India, of the top three newspapers by circulation as of 2013, The Times of India had just 32 snapshots since 2012, The Hindu does not have its own archive, and the Hindustan Times had 250 snapshots since 2012. Of the top three newspapers, one is not present at all and The Times of India has nearly 8 times fewer snapshots than the Hindustan Times, despite having 2.5 times the circulation in 2013.
Each of these newspapers is likely to be captured through any one of the Archive’s many other crawls and feeds, but the lack of standalone dedicated collections for these papers and the apparent Western bias in the existence of such standalone archives suggests further community input may be required. Indeed, it appears that a number of the Archive’s dedicated site archives are driven by their Alexa Top 1 Million rankings.
Why is it important to understand how web archives work? As I pointed out this past November, there has been very little information published in public forums documenting precisely how our major web archives work and what feeds into them. As the Internet Archive and its peers begin to expand their support of researcher use of their collections, it is critically important that we understand how precisely these archives have been built and the implications of those decisions and their biases for the findings we are ultimately able to derive. Moreover, given how fast the web is disappearing before our eyes, having greater transparency and community input into our web archives will help ensure that they are not overly biased towards the English-speaking Western world and are able to capture the web’s most vulnerable materials.
Greater insight is not an all-or-none proposition of having petabytes of crawler log files or no information at all. It is not necessary to have access to a log of every single action taken by any of the Archive’s crawlers in its history. Yet, it is also the case that simply treating archives as black boxes without the slightest understanding of how they were constructed and basing our findings on those hidden biases is no longer feasible as the scholarly world of data analysis grows up and matures. As web archives transition from being simple “as-is” preservation and retrieval sites towards being our only records of society’s online existence and powering an ever-growing fraction of scholarly research, we need to at least understand how they function at a high level and what data sources they draw from.
Putting this all together, what can we learn from these findings? Perhaps most importantly, we have seen that the Internet Archive operates far more like a traditional library archive than a modern commercial search engine. Rather than a single centralized and standardized continuous crawling farm, the Archive’s holdings are comprised of millions of files in thousands of collections from hundreds of partners, all woven together into a rich collage which the Archive preserves as custodian and curator. The Wayback Machine is seen to be merely a public interface to an unknown fraction of these holdings, with the Archive’s real treasure trove of millions of web materials being scattered across its traditional item collections. From the standpoint of scholarly research use of the Archive, the patchwork composition of its web holdings and vast and incredibly diverse landscape of inputs presents unique challenges that have not been adequately addressed or discussed. At the same time, those fearful that robots.txt exclusions are leading to whole swaths of the web being lost can breathe a bit easier given the Archive’s evolving treatment of them, which appears to be in line with an industry-wide movement towards ignoring exclusions when it comes to archival.
In the end, as the Internet Archive turns 20 this year, its evolution over the last two decades offers a fascinating look back at how the web itself has evolved, from its changing views on robots.txt to its growing transition from custodian to curator to collector. Along the way we get an incredible glimpse at just how hard it really is to try to archive the whole web in perpetuity and at the tireless work of the Archive to build one of the Internet’s most unique collections.
|
9722dafd64347f598792a924df4056bf | https://www.forbes.com/sites/kalevleetaru/2016/05/09/is-facebook-censoring-conservative-news-how-social-media-controls-what-we-see/ | Is Facebook Censoring Conservative News? How Social Media Controls What We See | Is Facebook Censoring Conservative News? How Social Media Controls What We See
Mark Zuckerberg at the Mobile World Congress walking by audience members immersed in virtual reality... [+] and entirely oblivious to him walking beside them. (Image via Facebook)
Gizmodo’s Michael Nunez is out today with a sensational story in which former Facebook employees claim they regularly censored the platform’s “trending” news section to eliminate stories about conservative topics that were organically trending, blacklisted certain news outlets from appearing and artificially “injected” stories they felt were important but that the site’s users were not discussing or clicking on. This comes a month after Nunez published a leaked internal Facebook poll that asked “What responsibility does Facebook have to help prevent President Trump in 2017?” In short, as the curtain has been lifted on Facebook’s magical trending algorithm, the mythical unbiased algorithm powering what users see on the site is seen to be less machine and more biased human curator. Yet, given Facebook’s phenomenal reach across the world and the role it increasingly plays as primary news gateway for more and more people, the notion that it is systematically curating what its users see in an unalgorithmic and partisan way raises alarm bells on the future of how we access and consume information.
Ryan Merkley, CEO of Creative Commons wrote in Wired last month that “If the Web has achieved anything, it’s that it’s eliminated the need for gatekeepers, and allowed creators—all of us—to engage directly without intermediaries, and to be accountable directly to each other.” Yet, such a rosily optimistic view of the web’s impact on society seems to ignore the mounting evidence that the web is in fact merely coalescing around a new set of gatekeepers. As Jack Mirkinson wrote for Salon earlier this month, “the internet, that supposed smasher of gates and leveler of playing fields, has coalesced around a mere handful of mega-giants in the space of just a couple of decades. The gates didn’t really come down. The identities of the gatekeepers just changed. Google, Facebook, Apple, Amazon: How many people can really say that some portion of every day of their lives isn’t mediated by at least one of these companies? ... It seems that, at least for the moment, we are destined to live in the world that they create—and that includes everyone in the media business.”
Far from democratizing how we access the world’s information, the web has in fact narrowed those information sources. Much as large national chains and globalization have replaced the local mom-and-pop shop with the megastore and local craftsmanship with assembly line production, the internet is centralizing information access from a myriad of websites, local newspapers and radio/television shows to single behemoth social platforms that wield universal global control over what we consume.
Indeed, social media platforms appear to increasingly view themselves no longer as neutral publishing platforms but rather as active mediators and curators of what we see. This extends even to new services like messaging. David Marcus, Facebook’s Vice President of Messaging, recently told Wired: “Unlike email where there is no one safeguarding the quality and the quantity of the stuff you receive, we’re here in the middle to protect the quality and integrity of your messages and to ensure that you’re not going to get a lot of stuff you don’t want.” In short, Facebook wants to act as an intelligent filter on what we see of the world. The problem is that any filter by design must emphasize some content and views at the expense of others.
In the case of Facebook, the new revelations are most concerning because they go to the very heart of how these new social platforms shape what we understand about the world. It is one thing for a platform to announce it will delete posts that promote terrorism or that threaten another user with bodily harm, but to silently and systematically filter what users see through a distinct partisan lens, especially with regards to news reporting, adds a frightening dimension to just how much power a handful of Silicon Valley companies now wield over what we see online.
Ben Rhodes, deputy national security advisor for strategic communication at the White House recently raised eyebrows when he remarked on the Internet’s impact on news reporting by saying “All these newspapers used to have foreign bureaus. Now they don’t. They call us to explain to them what’s happening in Moscow and Cairo. Most of the outlets are reporting on world events from Washington. The average reporter we talk to is 27 years old, and their only reporting experience consists of being around political campaigns. That’s a sea change. They literally know nothing.” In the interview he went on to claim that the White House is able to use social media to fill that information gap, putting its own talking points and interpretations out on social media which he claims are then mindlessly parroted by the media. What happens when Facebook itself goes further and helps promote some of these viewpoints to its users while censoring others?
The notion that a social media platform would systematically censor particular viewpoints or news has unique import in a presidential election year. As The Hill put it, “Facebook is a key outreach, recruiting and advertising tool for presidential candidates, and it is a primary distribution hub for the political news media. It is also where much of the political debate between voters is taking place,” accounting for over 650 million interactions regarding political candidates in a single month this year. The notion that Facebook might be systematically altering what its users see to promote particular views is troubling at best.
In light of the internal Facebook poll showing its own employees asking how they could help defeat Trump, the fact that Facebook is allegedly systematically censoring conservative news illustrates the potential of the platform to influence elections. Last year a study in the Proceedings of the National Academy of Sciences demonstrated that adjustments to the algorithms search engines use to present information on candidates could potentially throw an election. Yet, the study was merely a hypothetical “what if” since it was not believed at the time that any major platform was systematically adjusting its results in partisan fashion.
If it is true that Facebook has been systematically manipulating its trending feed to remove conservative stories, it raises the question of whether Facebook could materially influence the upcoming election by skewing what the American public sees about the candidates. Could the platform entirely block all news about Trump or only show negative stories about Trump and positive stories about Clinton and Sanders?
From the standpoint of social media analytics, the Gizmodo article goes further to note that Facebook employees interviewed for the story claimed that they also regularly “injected” stories into the trending feed that were not actually being discussed by users at the time and that by placing them into the trending box they gained sufficient visibility that users began to discuss them. In short, Facebook’s news curators decided what they thought should be trending and by virtue of Facebook’s reach, made them trending and removed stories that were actually trending.
In particular, the story notes that the Black Lives Matter movement was not receiving sufficient discussion on Facebook and so the editors prominently featured it, vaulting it into mainstream discussion. This is noteworthy in that many media reports of the group note that it came into widespread attention via Facebook. While much is unknown, this raises the intriguing possibility that Facebook’s human editors could have for all intents and purposes launched a new social movement into existence. Could they do the same in other countries and for other causes?
Also mentioned is that Syria had fallen out of interest and that editors had moved to prioritize articles about the conflict to bring it back into public discussion. This has tremendous ramifications for the aid and development communities in that human editors at Facebook wield increasing control over which conflicts and crises people are aware of and the side of them they see, with potential impacts on donations and assistance. The article also notes that stories about Facebook itself were banned from the trending feed without high-level managerial approval, ensuring that negative news about the platform does not receive widespread attention. Indeed, despite receiving widespread media coverage, Gizmodo's story has not appeared in my own trending feed all morning.
In short, it calls into question whether all of the social media analytics reports about which topics are “trending” with Facebook users, or which topics are engaging particular demographics and populations, reflect actual organic conversation or rather highly biased manual filtering and control over what Facebook as a company wants the world to see. For news agencies thinking of eliminating their websites and handing distribution of their content entirely over to Facebook, it presents reason for pause, while for the development community it means Facebook's largely white, male, European-heritage staff play an increasing role in determining which conflicts we are aware of. In the end, it offers a powerful lesson in just how much Facebook controls what we see online.
UPDATE 5/12/2016: Facebook finally provided a response regarding Gizmodo's allegations and, after a subsequent leak of the internal reviewer documents to The Guardian, released its guidebook and several URL lists of the news media it considers. While Facebook officially denies conservative bias, the resulting list of what it considers reputable and national press would appear to lend weight to Gizmodo's claims of bias. See my followup "Does Facebook Suffer From Unconscious Bias? An Insider View Into Human Cataloging."
|
1014422144b227fab2f401ca2243e065 | https://www.forbes.com/sites/kalevleetaru/2016/05/12/does-facebook-suffer-from-unconscious-bias-an-insider-view-into-human-cataloging/ | Does Facebook Suffer From Unconscious Bias? An Insider View Into Human Cataloging | Does Facebook Suffer From Unconscious Bias? An Insider View Into Human Cataloging
A man walking past an office mural on the Facebook campus in Menlo Park, Calif. (AP Photo/Jeff Chiu)
On Monday Gizmodo cited anonymous former Facebook employees as claiming that Facebook was systematically biasing its Trending Topics section against conservative stories. In the aftermath of the flurry of news coverage and outrage, Facebook finally issued an official response that appeared to deny the claims, saying after an internal review it had “found no evidence” of the claims, but issued a second statement later on Tuesday clarifying that the review was not complete and that it was “continuing to investigate whether any violations took place.” In a sign of just how intrinsic Facebook has become to how we consume news, Senator Thune of the Senate’s Commerce Committee issued a letter to Facebook requesting that it provide his office with a number of data points regarding how its Trending Topic service operates.
It is worth noting that Facebook has never hidden the fact that humans are heavily involved with its trending feature. For example, Recode interviewed Facebook last year on how its trending service worked and described it this way: “Once a topic is identified as trending, it’s approved by an actual human being, who also writes a short description for the story. These people don’t get to pick what Facebook adds to the trending section. That’s done automatically by the algorithm. They just get to pick the headline.” Yet, the New York Times notes that Facebook has not gone out of its way to emphasize the role its human curators play, noting “Facebook has long described its trending feature as largely automatic” and “Facebook operates under a veneer of empiricism. Many people believe that what you see on Facebook represents some kind of data-mined objective truth unmolested by the subjective attitudes of fair-and-balanced human beings” and that it “is immune to bias because it is run by computers.”
Adding detail to this process, the New York Times interviewed two former employees of the service and said “they considered themselves members of a newsroom-like operation, where editorial discretion was not novel but was an integral part of the process” and that “suppression” was “based on perceived credibility — any articles judged by curators to be unreliable or poorly sourced, whether left-leaning or right-leaning, were avoided, though this was a personal judgment call.” Indeed, Facebook’s official response notes that “reviewers are required to accept topics that reflect real world events, and are instructed to disregard junk or duplicate topics, hoaxes, or subjects with insufficient sources.”
In fact, The Guardian this afternoon published leaked internal documents showing that humans play a much greater role in the editorial process than originally portrayed. The Guardian states “the company backed away from a pure-algorithm approach in 2014 after criticism that it had not included enough coverage of unrest in Ferguson, Missouri, in users’ feeds.”
For those of us in Washington used to hearing “we looked into the allegations and exonerated ourselves, trust us and move along,” the Facebook response did little to calm concerns. Yet, absent from much of the commentary this week has been a thoughtful look at what it means to have human curation of algorithms, the kinds of unconscious bias that necessarily encroach on human filtering, the fact that algorithms themselves are built on a scaffolding of human coding and training choices that introduce bias at every stage, and the myriad approaches that are used to assess and mitigate that bias. As someone who has managed or audited numerous large human review projects involving news content similar in form and function to Facebook’s efforts, I can personally attest to the enormous complexities and the routes through which bias encroaches upon results.
Key to the New York Times’ portrayal of the process, which is mirrored in the documents leaked to The Guardian and Facebook’s own response, is the way intrinsic unconscious bias encroaches on human curation. The Times’ description of “unreliable or poorly sourced” stories and Facebook’s description of “junk or duplicate topics” and “insufficient sources” all provide substantial latitude for the kind of bias claimed in the Gizmodo article. For example, it would be entirely conceivable that a staunchly liberal employee might view a report on Fox News as unreliably partisan, especially if it was not reported elsewhere, just as a conservative employee might view Huffington Post’s decision to cover Trump in its entertainment section rather than its politics section as an indicator that it was unreliably biased in its coverage of the election.
The fact of the matter is that all news outlets are biased in some fashion. Salon noted last year that the New York Times’ Upshot ran more stories about reclining airplane seats than payday loans and emphasized as a whole stories of interest more to the wealthy than to the poor.
More prominently, yesterday an ISIS terror attack killed more than 90 people and left at least 87 wounded in Iraq. Yet, the featured story on CNN’s homepage focused on Trump with the headline “Why won’t he release his taxes?” and a small link at left titled “64 killed; ISIS claims responsibility.” Contrast this with the November 13, 2015 terror attacks in Paris in which the entire CNN homepage was focused exclusively on the attacks with the breaking headline “60 killed in shootings and blasts” and continued for days. In both cases at the time of writing the casualty count stood at around 60, yet a terror attack in Iraq was deemed worthy of just the upper left within short order, while Paris became the centerpiece of the homepage for days.
Facebook faced a similar backlash of perceived bias last year when it activated its Safety Check feature for Paris but not Beirut. Responding to the barrage of criticism, Facebook drew a distinction between Paris and “other parts of the world, where violence is more common and terrible things happen with distressing frequency.” Many commentators noted that this represented a Western bias regarding societal normalcy – that in the West terror attacks are uncommon while in some parts of the world they may occur with “distressing frequency” but for those on the ground their desire to notify loved ones is no less pressing, even if it may appear so to a Western company.
The difference is that while the New York Times and CNN both serve American audiences with very specific demographics, Facebook aims to serve a global audience and that brings with it necessary conflicts between the demographics of its staff and the incredibly diverse demographics of the world it is attempting to serve. A small town newspaper in rural USA might consider a fire at the local gas station to be front page news worthy of days of coverage, while readers of the New York Times likely would find little import in such a story. News outlets thus filter the news for what they consider to be important to their local audiences. These local outlets have staff that live in the areas and deeply understand them. With Facebook’s global reach, this would require maintaining staff in every corner of the world that live and work in those communities in order to fully understand and appreciate the “newsworthiness” of each story to each community.
While there have been no allegations of this of any kind, imagine for a moment if Facebook’s human curators were exclusively white males in their early 20s who all went to the same Ivy League college together, all come from very wealthy families that summer in Hawaii and all live in the same neighborhood in New York City. A water main break on their street or fee increases to reserve a cabana in Hualalai might be deemed by them to be front-page news, while changing fee structures in payday loans might not be considered newsworthy at all. While that’s an extreme example, the problem is that what is critically important to one person might be entirely irrelevant to another. Even when issues do capture global headlines, they tend to pass from memory quickly.
Facebook’s response to Gizmodo centers largely on its written guidelines, yet it ignores the fact that all human coding projects have two sets of guidelines that develop over time: the formal written guidelines developed by management and the informal unwritten guidelines that actually inform day-to-day operations. It would be impossible for any set of guidelines, no matter how long, to fully encompass the myriad of possibilities that a reviewer will come across in their work. Some projects try to address this with exhaustive guidebooks (I’ve seen ones 300+ pages long) where reviewers can’t possibly remember even a fraction of the details outlined within, but even these cannot capture every possible eventuality.
Typical guidebooks used for projects like Facebook’s tend to include language like “reputable outlet” or “cites authoritative sources” or “unbiased and nonpartisan viewpoint.” Yet, as I noted earlier, the notion of just what counts as a “reputable” outlet can be the subject of sharp debate. A conservative citing Fox News as reliable and nonpartisan and ignoring Huffington Post stories as poorly sourced in their mind and a liberal citing the Huffington Post as reliable and nonpartisan and ignoring Fox News stories as poorly sourced in their mind might each believe they were following the guidebook’s instructions to only link to reputable well sourced stories and thus believe they were being entirely unbiased in their work. Indeed, the anonymous Gizmodo source describes a work environment in which he or she appears to be at odds with colleagues over which stories are important. This would also explain why some reviewers claimed there was bias while others felt the trending feed was entirely unbiased.
In fact, The Guardian published this afternoon a leaked copy of the Facebook reviewer guidebook that confirms Facebook only lists stories as having “National Story” importance if they appear in at least 5 of the 10 outlets “BBC News, CNN, Fox News, The Guardian, NBC News, The New York Times, USA Today, The Wall Street Journal, Washington Post, Yahoo News or Yahoo.” What is particularly remarkable is that the only conservative outlet on the list is Fox News and that BuzzFeed News replaced Yahoo News, suggesting a shift towards click-intensive stories and away from world events. Facebook did not respond to a request for comment as to how it selected those 10 outlets or the rest of its "1K" media outlets.
It is particularly noteworthy that Facebook only assigns "National" prominence to stories appearing in those 10 outlets, a rule that would automatically bias Facebook's feed against stories trending only in the conservative press and thus by itself lends enormous weight to Gizmodo's claims. In short, stories strongly resonating with conservatives, but not appearing in the more general or liberal press, would never be tagged as nationally important. In this way, Facebook likely believes it is entirely unbiased, while to a conservative, the choice of national outlets Facebook has settled on would offer clear evidence of anti-conservative bias.
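To make the mechanics of that rule concrete, here is a minimal Python sketch of how a 5-of-10 threshold check of the kind described in the leaked guidebook would behave; the outlet list mirrors the one reported above, while the sample stories and the non-listed outlets are purely hypothetical illustrations.

```python
# Minimal sketch of a "5-of-10 outlets" national-importance rule,
# as described in the leaked reviewer guidebook. Sample stories are hypothetical.

NATIONAL_OUTLETS = {
    "BBC News", "CNN", "Fox News", "The Guardian", "NBC News",
    "The New York Times", "USA Today", "The Wall Street Journal",
    "Washington Post", "Yahoo News",
}
THRESHOLD = 5  # a story must appear in at least 5 of the 10 outlets

def is_national(covering_outlets):
    """Return True if the story is covered by >= THRESHOLD of the listed outlets."""
    return len(set(covering_outlets) & NATIONAL_OUTLETS) >= THRESHOLD

# A story covered widely across the mainstream list clears the bar...
mainstream_story = ["CNN", "BBC News", "NBC News", "Washington Post", "USA Today"]
# ...while a story trending only in outlets absent from the list never can,
# no matter how heavily it is covered there (outlet names here are illustrative).
conservative_story = ["Breitbart", "Daily Caller", "Fox News", "Washington Examiner"]

print(is_national(mainstream_story))    # True
print(is_national(conservative_story))  # False
```

The mechanism, not the specific outlet names, is the point: any fixed whitelist plus a coverage threshold will systematically exclude stories whose coverage concentrates outside the whitelist.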
The key unanswered question in all of the hyperbole rocketing around the web this week is thus the degree to which Facebook acknowledges this unconscious bias and the processes and procedures it uses to measure and mitigate that bias as best it can. Having worked with or interacted with the internal processes of many Silicon Valley companies (though not Facebook), things like inter-coder and intra-coder reliability and metrics like Krippendorff’s alpha coefficient are traditionally foreign concepts. A quick scroll through any major archive of recent computer science papers will find myriad papers using large teams of human reviewers (often recruited through services like Amazon’s Mechanical Turk or using university students) to catalog large volumes of material for training computer algorithms, but precious few that report in-depth inter- or intra-coder reliability measures or that use rigorous mitigation processes. Having spoken at length on these issues to the technical community, I often hear beliefs that biases wash out at scale or that bias can’t possibly impact data that much, along with a lack of experience or expertise in how to measure and mitigate bias.
The question therefore is how Facebook has handled the issue of unconscious bias in its human reviewers. I posed the following questions to Facebook in a request for comment earlier this week, but despite two days of repeated requests the company did not provide a response to any of them. I will update this post if I eventually receive a response. Given that Senator Thune has requested several of these data points in his own letter to Facebook, it is possible that some of these questions may be answered later this month. Facebook did late this afternoon publish a copy of what it claims is its guidebook, but Facebook did not immediately respond as to whether the guidebook published on their site reflects the entirety of the guidance provided to coders.
Employment Screening
Starting at the very first stage of the employment pipeline, how does Facebook screen potential curators and what are the employment criteria used to determine what makes for a good curator?
All robust human review projects include a screening exam as part of the interview process. Candidates are given a miniaturized version of the guidebook and asked to catalog a small set of sample articles according to the guidelines. This test is designed to assess how carefully a candidate reads an article and how they balance speed against accuracy. Edge cases assess how a candidate handles articles outside the confines of the guidebook examples by extrapolating the rules or asking for guidance. Often two very similar articles are provided to offer a very simple test of intra-coder reliability.
I was unable to find any public statements by Facebook regarding the kinds of screening it performs on its Trending Topics and other human curation employees, if any. I also asked Facebook whether, prior to the Gizmodo article’s publication, it had systematically evaluated the potential biases of its curators or examined the question of bias in any meaningful way, but received no reply.
I was also unable to find any public statements by Facebook regarding the aggregate demographic breakdown of its curators in terms of age, gender, race, languages spoken, nationality, political affiliation, level of education and major, institution graduated from and so on. Such statistics would go a long way towards at least offering a glimpse at the backgrounds of those deciding what the world sees on Facebook each day. Gizmodo alleges the employees hail primarily from Ivy League and private East Coast universities - a claim which Facebook does not appear to have directly addressed in any of its public statements that I could locate. How many of its employees graduated from deep south universities, especially ones with conservative heritages? How many were born and raised outside the United States, speak languages other than English and maintain strong ties to non-Western areas of the world to ensure that trending topics reflect a more accurate representation of the world as a whole?
If Facebook were to release the precise screening materials it uses to evaluate candidates and the aggregate demographics of its screeners it would go a long way towards at least assessing the broad contours of how inclusive those employees are of the communities that use Facebook. According to The Guardian, the team has numbered as few as 12 people, making it unlikely that it is comprised of a richly diverse workforce representing individuals from all corners of the globe and political sphere.
Ongoing Assessment Of Inter-coder and Intra-coder Reliability
When using large teams of human reviewers, no matter how rigorous the employment screening is, how detailed the guidebooks or how lengthy the training process, humans still carry with them an incredible amount of innate bias that manifests itself over the course of their work. At the simplest level, a reviewer might favor speed over accuracy, striving to review as many articles per day as possible, while another might spend hours exhaustively researching an article before deciding its fate.
At a more intrinsic level, humans often have very different perspectives and interpretations of concepts like “newsworthiness” or “reliability” or “trustworthiness” based on their differing backgrounds and demographics. A group of 10 reviewers all given the same article might catalog it differently, with some discarding it as non-reliable or newsworthy and others tagging it as highly newsworthy. The concept of “inter-coder reliability” thus refers to the level of agreement in a group of reviewers – put another way, if a group of reviewers are all given the same article, how many of them will catalog it the same way.
Humans are of course highly fallible and subject to a wide variety of effects from fatigue to priming (in which seeing a set of articles in a particular order may affect the interpretation of the later articles compared with seeing them in a different ordering). Intra-coder reliability therefore refers to the level of agreement a person has with themselves over time. For example, a person might read an article one day and list it as non-newsworthy and, when seeing the same article the subsequent week, list it as highly newsworthy.
Production workflows typically randomly inject test articles throughout the day or week, giving all reviewers the same article to review without them knowing whether a given article is a test article or a real article. Reviewers who systematically differ from the others or from themselves are pulled aside for retraining or further action. It is unclear what kind of inter- and intra-coder reliability regime Facebook uses (or whether it assesses reliability at all) and, if so, what the scores are. If inter-coder reliability scores were high, it would suggest a highly uniform coding workflow, while if they were low, it would suggest strong disagreements among coders of the kind that might explain Gizmodo’s report.
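To illustrate what such an assessment involves, the following sketch computes raw percent agreement and Cohen's kappa for two reviewers labeling the same ten test articles. The labels are invented for illustration; larger studies with more coders or missing data would typically use a measure like the Krippendorff's alpha mentioned above, but the underlying idea is the same: compare how often reviewers agree against how often they would agree by chance.

```python
from collections import Counter

# Hypothetical newsworthiness labels assigned by two reviewers to the same ten test articles.
coder_a = ["newsworthy", "not", "newsworthy", "newsworthy", "not",
           "newsworthy", "not", "newsworthy", "newsworthy", "not"]
coder_b = ["newsworthy", "not", "not", "newsworthy", "not",
           "newsworthy", "newsworthy", "newsworthy", "not", "not"]

def percent_agreement(a, b):
    """Fraction of items the two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance: (observed - expected) / (1 - expected)."""
    n = len(a)
    observed = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")  # 0.70
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")       # 0.40
```

A raw agreement of 70 percent sounds reassuring, but the chance-corrected kappa of 0.40 reveals only moderate consistency between the two reviewers, which is exactly the kind of gap that formal reliability testing is designed to expose.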
In fact, Facebook’s review guidelines make no mention of any kind regarding either form of reliability metric, suggesting at the very least that the company does not integrate such testing into its training.
Releasing The Data And Algorithmic Source Code
Given that Facebook claims its algorithm is designed to surface only those trends being discussed by large numbers of users on the platform, on the surface it would seem there would be little danger that releasing the algorithmic pseudocode, or at least the precise list of inputs and their weights, would make it easier for users to game the results. One of the problems of the Facebook-is-biased-because-of-humans story is that algorithms themselves are highly biased. Algorithms are programmed by humans, encode beliefs and parameter selections made by humans and use training data provided by humans. Algorithms do not move us beyond humanistic biases – rather they learn and encode those biases under the veneer of silicon objectivity. Facebook also constantly changes its algorithms, meaning what led to a trending alert today might not qualify for an alert tomorrow, creating an ever-changing landscape.
In its public response, Facebook noted that it maintains a log of every output of its algorithm and every action taken by its human reviewers. Given that the algorithm is designed to surface aggregate trends that will be made public and that reviewers are there simply to ensure the quality of the algorithmic results, there should be no privacy implications of any kind with releasing all of that data to the public. Releasing the master log of what the algorithm has surfaced and what the reviewers did with those recommendations is the only definitive way to determine the degree to which bias of any kind has influenced Facebook’s trends and to what degree.
For example, the demographics of social media users in the United States have tended to skew young and liberal and on Facebook women tend to post more heavily than men. Could it be that perceived liberal bias on the trending topics page is simply due to Facebook having a liberal skew in its user base that might therefore discuss more liberal topics and result in primarily liberal-leaning topics being surfaced by the algorithms? Having access to the underlying algorithm and trending topics log files would be able to answer these questions.
Perhaps most striking about Facebook’s public statements about the Gizmodo article is the lack of hard numbers behind their assurances that there has been no bias. Data forms the lifeblood of Facebook’s existence and it maintains a vibrant data sciences group that actively publishes to the academic literature on issues of bias, including controversial studies like manipulating users’ emotions or inducing them to vote, as well as publishing regular analytic studies on topics like voter turnout. Such studies are filled with reams of statistics, confidence tests and other metrics that provide insight into the arguments they make. It is therefore all the more remarkable that, in the face of such serious accusations as those made by Gizmodo, the platform offered only muted assurances that it had not found any bias, rather than providing even the most cursory of hard numbers to back up its assurances or even a basic explanation of how it had investigated the claims. It is unlikely that in the space of a single day Facebook had manually checked every single decision ever made by its human reviewers, so what sample size did it review and how did it review them? If, as Gizmodo claims, there is a liberal bias among its reviewers, then simply having those reviewers review their own decisions would almost certainly result in a finding of non-bias.
It is unclear why Facebook does not simply release the raw data from its trending topics logs and/or the pseudocode of its underlying algorithm. Releasing these to the research community would go a long way towards helping the public’s understanding of the influences into what they see online and either definitively refute or confirm the influence of conscious and unconscious bias in those results.
In fact, it was only after The Guardian published the leaked internal reviewer guidelines this afternoon that Facebook officially released its own guidelines. The company did release a list of news outlets that it considers high value. Yet, the list begs more questions than it answers. For example, not a single major Estonian news outlet appears on the list and major outlets from countless other countries are also missing, suggesting overall a very strong Western bias to the outlets it considers reputable. Numerous conservative news outlets are also missing from the list, though tier one outlets like the Drudge Report do appear. Its official guidelines also state it uses RSS crawling to monitor those outlets, even though RSS feeds are no longer representative of much of the world’s news media, though it is unclear whether they were simply generalizing.
Rather than the reams of statistics and visualizations that Facebook typically releases for the most mundane of studies, such as how children and parents interact on the platform, it is noteworthy that the platform has to date released no hard numbers on how it assesses bias on its platform. It is especially noteworthy that its reviewer guidelines make no mention of inter- or intra-coder reliability assessment or other measures used to assess bias and Facebook did not respond to repeated requests for comment on whether it assesses any form of reliability metric.
Perhaps the biggest story here, though, is one of leaks. Despite the academic and journalistic communities asking Facebook for years to provide more insight into how it operates services like its Trending Topics system, Facebook has steadfastly refused, providing only very cursory glimpses of its systems. It took an anonymous leaker to break the veil of secrecy, which started a cascade of former employees coming out of the woodwork and then the first batch of leaked internal documents, forcing Facebook to take the first steps towards shedding some light on how it operates.
The story that is emerging, alleging a traditional editorial news room with a small set of as few as 12 editors deciding what more than 1.2 billion people see, stands at stark odds with the public’s perception of massive data-drenched algorithms providing unbiased views into the world’s conversation. Will Facebook represent a single one-off or is it the start of a new post-Wikileaks era in which corporate secrets will be a thing of the past and insiders will reveal all the world’s deepest secrets?
In the end, perhaps the biggest story is that the general public is seeing firsthand that even Silicon Valley is often powered at the end of the day by a room full of humans behind the algorithmic curtain.
|
60d96ec7511436e503cc949e3af4dd77 | https://www.forbes.com/sites/kalevleetaru/2016/06/14/will-ai-and-robots-make-humans-obsolete/ | Will AI And Robots Make Humans Obsolete? | Will AI And Robots Make Humans Obsolete?
A robot from the Terminator Exhibition in Tokyo in 2009 (YOSHIKAZU TSUNO/AFP/Getty Images)
Last week Swiss voters overwhelmingly rejected a proposal to provide a guaranteed income for all citizens regardless of whether they were employed or not. What made the proposal so remarkable is the debate that led up to it. In short, a big driving force behind the proposal was that in a world of increasing automation through software, robotics and AI, human jobs may quickly be coming to an end and thus governments will need to step forward and pay their citizens to just sit at home while machines perform all societal functions. Do humans have a useful future in a world where machines do all the work?
The debate over automation versus human jobs is a long and storied one, with myriad arguments and evidence on either side. Yet, what has fundamentally changed in the last few years is that for the first time in our planet’s history, machines are becoming intelligent. To date automation has largely displaced rote and mechanically-focused jobs. Jobs that required human reasoning have largely been left alone. The rise of AI threatens to upend all of that.
So-called machine learning and machine intelligence have historically been exceptionally limited. Machines could learn powerful patterns from data, but only through the lens of predefined collections of filters and using algorithms and input data selected by their human trainers. The best a machine algorithm could strive towards was pattern matching, limiting its application to very narrow domains with precise constraints on environmental fluctuations. The notion of machine “creativity” was largely relegated to using random number generators to create randomized permutations and transitive chains of preexisting options chosen by a human.
This past March, Google’s AlphaGo system changed all of that, exhibiting for the first time the earliest glimmers of true machine “creativity.” In a nutshell, Google had its algorithms watch every recorded Go game they could feed into it, allowing it to observe how the best Go players in history have sparred. But, this alone would limit the system to merely selecting from the best that humans have achieved, rather than allowing the machine to spread its wings and achieve what no human has accomplished. To do this, the engineers took this Go system that had been trained on human games and made an exact copy of it. They then connected the two copies and had them play each other – two titans with a perfect memory of every Go move ever recorded locked in an epic battle spanning millions of games. These two AI systems were able to step out of the intellectual shadows of their human masters by reaching back into their massive memories and trying to outmaneuver each other in ever-more-complex and creative combinations of strategies, all while learning from their opponent to create the ultimate Go player. The engineers then built a final version of their AI system that encoded all of the knowledge learned by these two AI titans over their millions of games.
In the final showdown with the world’s best human Go player, this AI system did something incredible: it came up with a move, Move 37, that no human in the recorded history of Go had ever thought up. In short, an AI machine outsmarted a human using creativity rather than brute force memorization.
On a more mundane level, deep learning systems have grown exponentially over the last several years, powered by massive advances in hardware capability, and have proven incredibly adept at everything from image recognition to vehicle driving, in many cases outperforming skilled humans at the same tasks. AI systems are running physics experiments, creating artwork and even being put through the same behavioral psychological tests used on mice.
Uber and its peers have fundamentally disrupted the taxi industry, using mobile and cloud technology to rewrite the rules of how we get around. Yet, for every taxi cab pulled off the street, an army of new drivers takes its place in Uber’s workforce. In many ways, one could argue that Uber has simply created a single global taxi company to take the place of the complex patchwork of tens or hundreds of thousands of small local firms of the past. At the end of the day there is still a human sitting behind the wheel of that vehicle and earning a paycheck.
Yet, Uber publicly announced this past May that it too has jumped wholeheartedly onto the driverless car bandwagon with a goal of ultimately replacing its army of human-driven vehicles with a fully autonomous fleet. In the space of less than a decade, Uber has upended the entire global taxi market that has stood for centuries and perhaps soon will upend the very idea of a human driver. Personal digital assistants like Google Now, Siri, Echo and Cortana are leveraging similar technology to robustly recognize human voice, understand and reason about ever-more complex tasks and perform an ever-growing array of tasks, replacing the traditional human secretary. At each step, technology is lowering the price of access and opening the universe of those who can afford to access such capabilities, but at the cost of eliminating more and more human jobs.
What will the future hold when machines can perform every human task? Will we all just sit around at the beach, Pina Colada in hand, while robots and AI manage our existence on the planet? What happens when machines become more intelligent than humans and reach a technological singularity in which they no longer need humans and perhaps even compete with them for resources? This of course is a tremendously popular topic in the genre of science fiction, with any number of visions from the extinction of humankind to a symbiotic coexistence.
Whatever the outcome, last week’s Swiss vote suggests that the ramifications of the growing world of robotic and AI automation and their entwinement in the raging debate over globalization are more important than ever. Whether AI becomes man’s best friend or his replacement remains to be seen, but whatever the outcome may be, we are rushing towards it faster than ever.
|
784707613d37a42e0c1ec0f4514cca63 | https://www.forbes.com/sites/kalevleetaru/2016/07/31/why-apples-patent-to-disable-your-phones-camera-is-so-1984/ | Why Apple's Patent To Disable Your Phone's Camera Is So 1984 | Why Apple's Patent To Disable Your Phone's Camera Is So 1984
A protester holds a copy of 1984 in Germany during a protest. (Adam Berry/Getty Images)
Last month headlines buzzed with Apple’s new patent for an infrared signal that would instantly disable the cameras on all equipped cellphones in the vicinity, preventing anyone from photographing or video recording in the direction of the signal. In the patent filing Apple uses the example of a music concert in which the venue or band prohibits recording of their shows, but attendees nonetheless live stream the concert to the world for free or take copious photographs and video clips and share them on social media. Using Apple’s device, venues could simply forcibly disable all phone cameras in the venue, rendering the issue moot. On the surface this sounds like a fantastic idea, but what about the unintended consequences?
Once it becomes possible to remotely deactivate all cell phone cameras in an area, it is not a stretch to imagine governments and police forces leveraging the technology. Today social movements like Black Lives Matter use social media to broadcast police interactions and live stream their protests. If Apple’s technology becomes mainstream, one could imagine police forces equipping every officer and squad car with the device set to block all citizen recording of police activity. One could imagine repressive governments prepositioning the devices to blanket every public square and major roadway across the nation and activating the network during times of public unrest to instantly silence the iconic citizen imagery that has come to define modern uprisings. The Guardian notes that smartphone use is actually prohibited in the US House and Senate chambers, meaning such technology might be deployed in future to prevent members from live streaming protests as happened last month when Democrats staged a sit-in and House Speaker Paul Ryan ordered the House’s broadcast cameras turned off, with the protesters simply live streaming their sit-in over social media instead.
Moreover, one can imagine the concept being taken further, with future jammers able to selectively disable any feature on the phone or turn them off entirely. Fine dining establishments would likely jump at the ability to install a device that would mute every phone in the restaurant, forcing patrons to step outside to take a call, or blocking all phone use other than calls to 911. Similarly, in major counterterrorism or police actions, police now routinely ask citizens and the media not to live broadcast or discuss what they see in certain areas in order not to tip off criminals or terrorists to where police are going. In extraordinary cases even the US Government has been known to deploy mobile jamming equipment to block phone use in special exclusion zones, but such devices are highly controversial as they also block calls to 911. A portable device that signals all phones in an area to turn off except for 911 calls would likely be a go-to device for many security services.
Taking this a step further, one could imagine future variants that selectively disable the use of data services in the area or block access to social media services in the area. Blocking social media sites at the national ISP level during periods of unrest has become a common tactic in many countries, but requires extensive coordination with internet companies, national infrastructure providers and broad legal authorities and coercion. In contrast, if the government just has to point a transmitter at a public square to instantly cut off all social media use or all mobile data use in the area, it is hard to envision that technology not becoming widely deployed.
One could also imagine the opposite – signals that trigger all phones in the area to transmit their GPS coordinates or turn on their microphones and cameras to listen for gunshots or a particular person’s voice ala Batman’s The Dark Knight. The Snowden disclosures have broadened awareness of what skilled adversaries like the NSA can do when they target an individual phone, but one can easily see governments requesting those features be baked into consumer devices in manufacturing to make it far easier to use them at scale as a routine matter.
Much like the encryption debate, even if Apple and other major manufacturers blocked encryption on their devices or implemented such “remote kill” features, it is likely that other companies would step forward with replacement devices that did not have the kill feature or which did offer end-to-end encryption, but the general public would likely still flock to the more mainstream devices they were familiar with.
What makes this patent perhaps most intriguing is Apple’s staunch public stance against any attempts of outside intrusion against its users, most famously in its legal battle with the US Government to oppose weakening of encryption standards. Yet, the same company has patented a system that would allow anyone to instantly disable the cameras on every iPhone in the vicinity, placing remote control over its customers’ devices into someone else’s hands.
It is unclear whether this is merely a defensive patent or whether Apple is actively planning to deploy it in its devices, and the company did not respond to a request for comment on what steps it was planning to take to ensure it could not be used to block legal activities like the lawful recording of police activity. Apple also did not respond to a request for comment on whether a high-powered version of the device could be mounted to a drone and used as a rapidly deployable portable jamming device over protests or major police actions.
While it remains to be seen whether Apple’s patent ultimately comes to fruition, it offers a frighteningly 1984 view of the future of digital society in which all of the devices and technologies we’ve come to embrace and believe are “ours” can now be taken away from us with the click of a button. This is especially relevant as more and more of our information, from books to movies to songs to news articles, is digitally delivered via the ephemeral cloud. In the past even if the government banned a particular book or burned all copies of a newspaper that published an unflattering article, copies still existed on myriad personal bookshelves. Today all those copies sit in centralized cloud repositories and can be removed with a click, disappearing forever from access or existence. As standalone cameras have been replaced with internet-connected all-in-one smartphones, suddenly even our ability to capture and talk to the world has become part of the all-encompassing cloud, placing control over our devices in the hands of others.
Whether this proves to be just a bit of unfounded hysteria or the dark glimmers of the dystopia to come, the great lengths to which governments have gone to censor communications in the digital era make it hard to believe that this technology will not come to pass. Could it be that all this time we’ve been bankrolling the world’s greatest surveillance network and getting rid of anything not connected to the network, creating a world in which the government or even private companies or individuals can simply disable our connection to the digital world and our ability to record and communicate with a mouse click?
Whatever the future to come, Apple’s new patent should serve as a wakeup call and reminder to the world’s citizens that the devices we fill our lives with today don’t actually belong to us and can ultimately be made to serve others against our interests.
|
26ac65a3808f2ccd333f0ca1b19f5d2c | https://www.forbes.com/sites/kalevleetaru/2016/08/23/how-cell-phones-can-map-the-cia-is-location-secrecy-dead/ | How Cell Phones Can Map The CIA: Is Location Secrecy Dead? | How Cell Phones Can Map The CIA: Is Location Secrecy Dead?
A map at the Big Bang Data exhibition at Somerset House in 2015. (Peter Macdiarmid/Getty Images for... [+] Somerset House)
Last week I asked whether government secrecy is dead in the internet and social media era. Yet, the web is just one of the myriad commercially-owned data streams that pose a challenge to government secrecy in modern times. The nearly ubiquitous presence of cellphones in daily life and the exquisitely detailed biographical record they create pose an even greater threat to securing the daily operations of government.
Perhaps most famously, in 2003 CIA personnel in Italy conducting an extraordinary rendition were unmasked through cellphone records. Italian prosecutors investigating the kidnapping requested from the local cellular companies a list of all cellphones that had been present in the vicinity of the kidnapping around the time it happened. From this list they identified a set of cellphones present at that time and location and that also had repeatedly called each other, CIA headquarters, the CIA station chief in Milan and the nearby US Air Force base the kidnapee was flown out of. From those cellphone billing records they were able to identify the agents’ covert identities and trace their movements in the country. All of this just from the simple cellphones each of the agents carried with them.
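The analytical pattern behind that investigation is straightforward to express in code. The sketch below, with entirely made-up phone numbers, coordinates and call records, shows the two-step filter the prosecutors reportedly applied: first select the phones whose tower records place them near a target location during a time window, then keep only those that also called one another or a set of numbers of interest.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# All data below is fabricated purely to illustrate the analysis pattern.
sightings = [  # (phone, lat, lon, timestamp) from tower billing records
    ("+39-1111", 45.4642, 9.1900, datetime(2003, 2, 17, 12, 5)),
    ("+39-2222", 45.4645, 9.1905, datetime(2003, 2, 17, 12, 10)),
    ("+39-3333", 45.4000, 9.3000, datetime(2003, 2, 17, 12, 7)),   # too far away
]
calls = [("+39-1111", "+39-2222"), ("+39-2222", "+39-9999")]        # (caller, callee)
numbers_of_interest = {"+39-9999"}  # e.g. a known headquarters or base switchboard

target = (45.4643, 9.1902)
window = (datetime(2003, 2, 17, 11, 30), datetime(2003, 2, 17, 13, 0))

def km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Step 1: phones present within 1 km of the target during the window.
present = {phone for phone, lat, lon, t in sightings
           if window[0] <= t <= window[1] and km((lat, lon), target) <= 1.0}

# Step 2: of those, keep phones linked by calls to each other or to numbers of interest.
suspects = set()
for a, b in calls:
    if a in present and b in present:
        suspects |= {a, b}
    if a in present and b in numbers_of_interest:
        suspects.add(a)

print(sorted(suspects))  # ['+39-1111', '+39-2222']
```

With real billing records the same joins run over millions of rows rather than a handful, but no exotic tooling is required; the data the carriers already retain is sufficient.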
Yet, what makes this example so fascinating is that it plays out every day here at home. For those in the DC area, the next time you’re stuck in rush hour traffic outside CIA headquarters, play a game with your carmates and count how many drivers of the vehicles exiting onto Dolley Madison Blvd start talking on their cellphones within a short distance. In fact, watch the traffic emerging from any sensitive government facility at shift change and you will see a steady stream of employees arriving or heading home talking on their cellphones.
Whether personal or government-issued, each of those phones ultimately connects to the cellular towers and networks owned by private cellular companies. Taken together, the databases of the major cellular providers in the US have a complete map of the entire US Government footprint in the US, including its most classified and secretive facilities and their workers.
To map the US defense community, one just has to start with the headquarters of the CIA, NSA, NGA and other major intelligence agencies whose headquarters are well known and publicly disclosed. This can be easily expanded by asking any DC taxi driver the addresses he or she has dropped passengers off at that were protected by heavily armed guards. (Amazingly, this actually works – almost any taxi driver knows the way to CIA headquarters by heart, and many will happily rattle off the myriad strange places and nondescript buildings they’ve dropped passengers at that were heavily guarded by people with large machine guns or which had other elaborate security procedures. Indeed, taxi drivers are often one of the best sources of information when first arriving in a town.)
From this map, one has only to cross-reference it against the list of all cellphones that regularly drive to those locations and switch off each morning, then turn on again each evening or over lunch. With just a few analyses you have created a fairly extensive database of intelligence community employees. Taking this list of cellphones, one has only to map the other locations those phones regularly visit during business hours to quickly build a richly detailed catalog of classified facilities. Segmenting cellphones that regularly visit the White House, EEOB or Congress from those that visit the Pentagon and military facilities and from those that visit other intelligence or unmarked facilities can further characterize the class and type of each person and facility.
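A minimal sketch of that cross-referencing step might look like the following. The ping data and facility coordinates are invented, and real carrier records would be far noisier, but the logic is the same: flag phones that repeatedly appear near a known headquarters on weekday mornings, then tally the other daytime locations those same phones frequent.

```python
from collections import Counter, defaultdict

# Fabricated illustration data: (phone, day, hour, rounded lat, rounded lon) pings.
KNOWN_HQ = (38.952, -77.146)  # an already publicly known headquarters location
pings = [
    ("A", "Mon", 8,  38.952, -77.146), ("A", "Tue", 8,  38.952, -77.146),
    ("A", "Wed", 8,  38.952, -77.146), ("A", "Mon", 13, 38.883, -77.017),
    ("A", "Tue", 13, 38.883, -77.017),
    ("B", "Mon", 8,  38.952, -77.146), ("B", "Tue", 9,  38.952, -77.146),
    ("B", "Wed", 8,  38.952, -77.146), ("B", "Wed", 14, 38.924, -77.217),
    ("C", "Mon", 8,  38.800, -77.050),  # never near the known facility
]

# Step 1: phones seen near the known facility on at least 3 separate weekday mornings.
morning_visits = defaultdict(set)
for phone, day, hour, lat, lon in pings:
    if (lat, lon) == KNOWN_HQ and 6 <= hour <= 10:
        morning_visits[phone].add(day)
commuters = {p for p, days in morning_visits.items() if len(days) >= 3}

# Step 2: tally the *other* daytime locations those phones frequent;
# recurring clusters are candidate related facilities.
candidate_sites = Counter(
    (lat, lon)
    for phone, day, hour, lat, lon in pings
    if phone in commuters and (lat, lon) != KNOWN_HQ and 9 <= hour <= 17
)
print(commuters)                        # {'A', 'B'}
print(candidate_sites.most_common(3))   # [((38.883, -77.017), 2), ((38.924, -77.217), 1)]
```

The only inputs are a handful of publicly known anchor locations and routine location records, which is precisely why bulk carrier data is so sensitive.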
Moreover, with widely publicized vulnerabilities in the SS7 brokerage system and the availability of IMSI catchers like Stingrays, it would theoretically be possible to track this intelligence community phone directory throughout the country as they go about their daily business.
In fact, cellphones are not the only data point connecting these myriad facilities. An observer with good eyes will often recognize cars spotted regularly turning off the road towards CIA or NSA headquarters appearing in concentration in the parking lots of certain nondescript office parks in the area. While at present there are no permanent listed traffic cameras in the vicinity of CIA headquarters, many other government facilities do have entrances located beside intersections with traffic cameras or other privately owned surveillance camera networks. Given that such cameras are increasingly being found to be vulnerable to remote access, one could imagine a foreign adversary cataloging the live movements of intelligence community employees across the capital region and around major military and other government installations across the US simply by tracking license plates.
In fact, automated license plate scanning databases are increasingly privately owned, with one company holding more than 4.2 billion records, each recording that a given vehicle was seen at a particular location at a particular time, and growing by more than 120 million new records per month. Another company offers rich biographical reports that incorporate license plate scans recorded by private companies. Given the vast reams of such data now in private hands, it is likely that such databases will increasingly become espionage targets in the future.
In fact, location data is becoming a hotly desirable commodity in the marketing world. Take for example a family whose mobile service is provided by Verizon and who decides to sign up for Verizon’s Smart Rewards program along with Verizon Select. This authorizes a wealth of subscriber data from device location to browsing activity to home address, demographics and CPNI data to be used to better target advertising. While the data itself is not shared with advertisers, it does provide insight to the wealth of data mobile companies have on their customers. Location data is of particular value, warranting an entire section in the program’s FAQ page. Even cautious users who ensure GPS and Location-based services are turned off on their phone are not immune, as “Verizon Selects uses location information that Verizon collects from our network and is not related to the location settings on your device.”
While Verizon does not share an individual subscriber’s information with its advertising network and users can turn off sharing of sensitive data, the company confirmed by email that advertisers can use this data to create geofenced advertisements that only appear to users in particular areas. In this way it would be possible to create advertisements that appear only to people who visit CIA headquarters, allowing hyper-targeted messaging to the US intelligence community, seen only by those who work at a particular facility.
Putting all of this together, whether using a government-issued cellphone or their own personal device, government employees today carry the equivalent of a 24/7 tracking beacon that broadcasts their locations in realtime to private companies. Cross-referencing that data with a small number of publicly known building locations can be used to rapidly map much of the US defense community’s most sensitive facilities and track its workforce in realtime, even offering insights into emerging crises as employees at a particular facility work late into the night, much as DOD watchers used to watch pizza deliveries to flag when teams were working through the night on a crisis.
In today’s everything-is-hackable world, this suggests that infrastructure providers like cellular companies, traffic camera networks and license plate scanner databases will increasingly become targets of especial espionage interest to foreign adversaries. Once again, this all goes to show that not only is privacy becoming a figment of historical imagination, but that even the dystopia of 1984 didn’t come close to the world we’re rapidly heading towards.
|
76197bd12b39532463a39a71ebff8a2f | https://www.forbes.com/sites/kalevleetaru/2016/10/20/julian-assanges-internet-access-and-how-facebook-could-be-the-end-of-wikileaks/?ss=Security | Julian Assange's Internet Access And How Facebook Could Be The End Of Wikileaks | Julian Assange's Internet Access And How Facebook Could Be The End Of Wikileaks
Wikileaks founder Julian Assange seen through a video camera eyepiece during its 10th anniversary... [+] celebration in Berlin (STEFFI LOOS/AFP/Getty Images)
As Ecuador confirmed to the world that it had temporarily suspended Julian Assange’s embassy-provided internet connection, rumors swirled about possible US involvement and the future of the organization if its leader-from-afar were permanently cut off from the web. Yet, as Wikileaks’ latest releases continued unabated this week, the episode also demonstrates the power of a decentralized and ephemeral organization in the era of the web – merely disconnecting its leader has little impact on its ability to obtain and release materials to the world.
It is not without a fair degree of irony that many of the technologies that make it possible for organizations like Wikileaks to accept and distribute leaked information throughout the world are the same ones the United States baked into the underpinnings of the web and has over the years helped develop and promote as tools for secure communication that even repressive regimes at odds with the US could not stop. It also reflects the challenges of a digital era in which a hacker on the other side of the planet can penetrate the emails and files of some of the most powerful people in the world and redistribute their most confidential records to the world without any way of stopping them.
Take the anonymizing Tor browser, which has received extensive funding from the US Government over the years as a tool to allow dissidents in repressive regimes abroad to access restricted information and communicate with the outside world. Yet, as the Snowden revelations showed, Tor has become an increasing obstacle for US intelligence agencies monitoring targets around the world, while Snowden himself used the tool to secure his communications with journalists. The FBI has spent considerable effort attempting to weaken and crack the very anonymizing protections that the US Government built Tor to provide, while the service was even used to penetrate Hillary Clinton’s email server.
Even while the United Nations has promoted freedom of internet access as a human right, the US has increasingly discussed the use of offensive cyber-attacks to disrupt the communications of enemies in ways that would likely cause widespread collateral damage to civilian internet access. After all, cutting off Islamic State communications has the side effect of cutting off access to the innocent civilians trapped in their territories and preventing them from accessing information from the outside and providing useful intelligence on the state of affairs in their areas.
But perhaps most fascinating about the Assange affair is that in the era of decentralized web-based organizations like Wikileaks and Anonymous, it is difficult to stifle speech you disagree with. While you can launch a cyberattack that takes down a few of their servers or place political pressure on foreign governments to arrest or silence select leadership, the ability to copy digital files to physical servers all across the globe – available for public download or sitting silently, waiting for the flick of a deadman’s switch to burst into visibility – is making it ever more difficult for nations to silence their critics.
At the same time, China’s continued mastery of Internet censorship reminds us that censorship is here to stay. Even while the pundits predicted that China’s great firewall would eventually fall under the crushing weight of a globalized Internet, the government’s censorship efforts have only strengthened over the past two decades, proving that with the right political will the free Internet can be kept at bay.
However, perhaps the greatest threat to the freedom of the Internet and to the future of organizations like Wikileaks lies in the rise of the walled gardens of today’s social media giants. The web of old comprised a myriad of physical servers scattered across the planet, where anyone could simply install the right software onto their desktop or laptop or even a Raspberry Pi and run a web server offering files for download. This decentralized global network made the task of removing content a nearly impossible global game of whack-a-mole. Once a stolen or embarrassing file was published to the Internet, it was there to stay forever. Even the EU's “right to be forgotten” ruling could not truly purge a document from the web, only make it more difficult to find.
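To illustrate just how low that barrier was, here is essentially the entirety of what “running a web server offering files for download” can amount to, using nothing but Python’s standard library; the port number and the choice of serving the current directory are arbitrary illustrations.

# Serves every file in the current directory at http://<this-machine>:8000/
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()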
However, as social media platforms increasingly become our gateways to the web, they are recentralizing the distributed web. As Facebook breaks 1.7 billion users worldwide, every one of those users, in every country of the world, is subject to a single centralized set of rules governing what content is acceptable to post on Facebook’s pages. Last month’s Aftenposten censorship row served as a powerful wakeup call to just how far afield Facebook’s acceptable content guidelines have drifted from the traditional press freedoms that Western news agencies have enjoyed for decades.
One could easily imagine Facebook eventually developing guidelines that prohibit the redistribution of stolen materials, perhaps at the request of US law enforcement or certain foreign governments as a condition of operating in those nations. Much like internet companies today use automated filtering software to remove illegal content, it would not take much modification for social media companies to install similar filtering to remove all links to stolen documents and all discussion of them. In such a world, Wikileaks would essentially cease to exist for the average person, as even news coverage of Wikileaks content would simply be filtered out of their daily news feed, making them entirely oblivious to the organization’s latest data dumps.
Whatever happens, it appears certain that the future of organizations like Wikileaks is firmly tied to the future of the web and to whether social media evolves to become the web itself.
|
1aaeedca953adbb36be5b88079b6a4f7 | https://www.forbes.com/sites/kalevleetaru/2016/12/22/the-daily-mail-snopes-story-and-fact-checking-the-fact-checkers/ | The Daily Mail Snopes Story And Fact Checking The Fact Checkers | The Daily Mail Snopes Story And Fact Checking The Fact Checkers
Letters arranged to spell "facts." (Shutterstock)
Yesterday afternoon a colleague forwarded me an article from the Daily Mail, asking me if it could possibly be true. The article in question is an exposé on Snopes.com, the fact checking site used by journalists and citizens across the world and one of the sites that Facebook recently partnered with to fact check news stories on its platform. The Daily Mail’s article makes a number of claims about the site’s principals and organization, drawing heavily from the proceedings of a contentious divorce between the site’s founders and questioning whether the site could possibly act as a trusted and neutral arbitrator of the “truth.”
When I first read through the Daily Mail article I immediately suspected the story itself must be “fake news,” both because of how devastating the claims were and because, given that Snopes.com was so heavily used by the journalistic community, if any of the claims were true someone would surely have already written about them and companies like Facebook would not be partnering with the site. I also noted that despite the story having been online for several hours, no other major mainstream news outlet had written about it, which is typically a strong sign of a false or misleading story. Yet at the same time, the Daily Mail appeared to be sourcing its claims from a series of emails and other documents from a court case, some of which it reproduced in its article and, perhaps most strangely, neither Snopes nor its principals had issued any kind of statement through its website or social media channels disclaiming the story.
On the surface this looked like a classic case of fake news – a scandalous and highly shareable story, incorporating official-looking materials and sourcing, yet with no other mainstream outlet even mentioning the story. I myself told my colleague I simply did not know what to think. Was this a complete fabrication by a disgruntled target of Snopes or was this really an explosive expose pulling back the curtain on one of the world’s most respected and famous fact checking brands?
In fact, one of my first thoughts upon reading the article is that this is precisely how the “fake news” community would fight back against fact checking – by running a drip-drip of fake or misleading explosive stories to discredit and cast doubt upon the fact checkers.
In the counter-intelligence world, this is what is known as a “wilderness of mirrors” – creating a chaotic information environment that so perfectly blends truth, half-truth and fiction that even the best can no longer tell what’s real and what’s not.
Thus, when I reached out to David Mikkelson, the founder of Snopes, for comment, I fully expected him to respond with a lengthy email in Snopes’ trademark point-by-point format, fully refuting each and every one of the claims in the Daily Mail’s article and writing the entire article off as “fake news.”
It was with incredible surprise therefore that I received David’s one-sentence response which read in its entirety “I'd be happy to speak with you, but I can only address some aspects in general because I'm precluded by the terms of a binding settlement agreement from discussing details of my divorce.”
This absolutely astounded me. Here was one of the world’s most respected fact checking organizations, soon to be an ultimate arbitrator of “truth” on Facebook, saying that it could not respond to a fact checking request because of a secrecy agreement.
In short, when someone attempted to fact check the fact checker, the response was the equivalent of “it's secret.”
It is impossible to overstate how antithetical this is to the fact checking world, in which absolute openness and transparency are necessary prerequisites for trust. How can fact checking organizations like Snopes expect the public to place trust in them if, when they themselves are called into question, their response is that they can’t respond?
When I presented a set of subsequent clarifying questions to David, he provided responses to some and not to others. Of particular interest, when pressed about claims by the Daily Mail that at least one Snopes employee has actually run for political office and that this presents at the very least the appearance of potential bias in Snopes’ fact checks, David responded “It's pretty much a given that anyone who has ever run for (or held) a political office did so under some form of party affiliation and said something critical about their opponent(s) and/or other politicians at some point. Does that mean anyone who has ever run for office is manifestly unsuited to be associated with a fact-checking endeavor, in any capacity?”
That is actually a fascinating response to come from a fact checking organization that prides itself on its claimed neutrality. Think about it this way – what if there was a fact checking organization whose fact checkers were all drawn from the ranks of Breitbart and Infowars? Most liberals would likely dismiss such an organization as partisan and biased. Similarly, an organization whose fact checkers were all drawn from Occupy Democrats and Huffington Post might be dismissed by conservatives as partisan and biased. In fact, when I asked several colleagues for their thoughts on this issue this morning, the unanimous response back was that people with strong self-declared political leanings on either side should not be a part of a fact checking organization and all had incorrectly assumed that Snopes would have felt the same way and had a blanket policy against placing partisan individuals as fact checkers.
In fact, this is one of the reasons that fact checking organizations must be transparent and open. If an organization like Snopes feels it is ok to hire partisan employees who have run for public office on behalf of a particular political party and employ them as fact checkers where they have a high likelihood of being asked to weigh in on material aligned with or contrary to their views, how can they reasonably be expected to act as neutral arbitrators of the truth?
Put another way, some Republicans believe firmly that climate change is a falsehood and that humans are not responsible in any way for climatic change. Those in the scientific community might object to an anti-climate change Republican serving as a fact checker for climate change stories at Snopes and flagging every article about a new scientific study on climate change as fake news. Yet, we have no way of knowing the biases of the fact checkers at Snopes – we simply have to trust that the site’s views on what constitutes neutrality are the same as ours.
When I asked for comment on the specific detailed criteria Snopes uses to screen its applicants and decide who to hire as a fact checker, surprisingly David demurred, saying only that the site looks for applicants across all fields and skills. He specifically did not provide any detail of any kind regarding the screening process and how Snopes evaluates potential hires. David also did not respond to further emails asking whether, as part of the screening process, Snopes has applicants fact check a set of articles to evaluate their reasoning and research skills and to gain insight into their thinking process.
This was highly unexpected, as I had assumed that a fact checking site as reputable as Snopes would have a detailed written formal evaluation process for new fact checkers that would include having them perform a set of fact checks and include a lengthy set of interview questions designed to assess their ability to identify potential or perceived conflicts of interest and work through potential biases.
Even more strangely, despite asking in two separate emails how Snopes assesses its fact checkers and whether it performs intra- and inter-rater reliability assessments, David responded only that fact checkers work together collaboratively and did not respond to further requests for more detail and did not answer whether Snopes uses any sort of assessment scoring or ongoing testing process to assess its fact checkers.
This raises exceptionally grave concerns about the internal workings of Snopes and why it is not more forthcoming about its assessment process. Arguing that because multiple fact checkers might work on an article, reliability is not a concern, is a false argument that shows a concerning lack of understanding about reliability and accuracy. Imagine a team of 50 staunch climate deniers all working collaboratively to debunk a new scientific study showing a clear link between industrial pollution and climate change. The very large team size does not make up for the lack of diversity of opinion. Yet, David provided no comment on how Snopes does or does not explicitly force diversity of opinion in its ad-hoc fact checking teams.
A robust human rating workflow must regularly assess the accuracy and reproducibility of the scores generated by its human raters, even when they work collaboratively. Typically this means that on a regular basis each fact checker or fact checking team is given the same article to fact check and the results are compared across the groups. If one person or group regularly generates different results from the others, this is evaluated to understand why. Similarly, an individual or group is also periodically given the same or a nearly identical story from months prior to see if they give it the same rating as last time – this assesses whether they are consistent in their scoring.
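One conventional way to quantify the kind of consistency described above is an agreement statistic such as Cohen’s kappa, comparing the verdicts two raters (or rating teams) assign to the same set of articles. A minimal sketch, with entirely hypothetical ratings:

# Cohen's kappa: agreement between two raters beyond what chance alone
# would produce. The ratings below are hypothetical examples.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

rater_1 = ["false", "true", "mixture", "false", "true", "false"]
rater_2 = ["false", "true", "false",   "false", "true", "mixture"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # ~0.45: only moderate agreement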
More troubling is that we simply don’t know who contributed to a given fact check. David noted that Snopes’ “process is a highly collaborative one in which several different people may contribute to a single article,” but that “the result is typically credited to whoever wrote the initial draft.” David did not respond to a request for comment on why Snopes only lists a single author for each of its fact checks, rather than provide an acknowledgement section that lists all of the individuals who contributed to a given fact check.
One might argue that newspapers similarly do not acknowledge their fact checkers in the bylines of articles. Yet, in a newspaper workflow, fact checking typically occurs as an editorial function, double checking what a reporter wrote. At Snopes, fact checking is the core function of an article and thus if multiple people contributed to a fact check, it is surprising that absolutely no mention is made of them, given that at a newspaper all reporters contributing to a story are listed. Not only does this rob those individuals of credit, but perhaps most critically, it makes it impossible for outside entities to audit who is contributing to what fact check and to ensure that fact checkers who self-identify as strongly supportive or against particular topics are not assigned to fact check those topics to prevent the appearance of conflicts of interest or bias.
If privacy or safety of fact checkers is a concern, the site could simply use first name and last initials or pseudonyms. Having a master list of all fact checkers contributing in any way to a given fact check would go a long way towards establishing greater transparency to the fact checking process and Snopes’ internal controls on conflict of interest and bias.
David also did not respond to a request for comment on why Snopes fact checks rarely mention that they reached out to the authors of the article being fact checked to get their side of the story. Indeed, Journalism 101 teaches you that when you write an article presenting someone or something in a negative light, you must give them the opportunity to respond and provide their side of the story. Instead, Snopes typically focuses on the events being depicted in the article and contacts individuals and entities named in the story, but Snopes fact checks typically do not mention contacting the authors of the articles about those events to see if those reporters claim to have additional corroborating material, perhaps disclosed to them off the record.
In essence, in these cases Snopes performs “fact checking from afar,” rendering judgement on news stories without giving the original reporters the opportunity for comment. David did not respond to a request for comment on this or why the site does not have a dedicated appeals page for authors of stories which Snopes has labeled false to contest that label and he also did not respond to a request to provide further detail on whether Snopes has a written formal appeals process or how it handles such requests.
Putting this all together, we simply don’t know if the Daily Mail story is completely false, completely true or somewhere in the middle. Snopes itself has not issued a formal response to the article and its founder David Mikkelson responded by email that he was unable to address many of the claims due to a confidentiality clause in his divorce settlement. This creates a deeply unsettling environment in which, when one tries to fact check the fact checker, the answer is the equivalent of “it's secret.” Moreover, David’s responses regarding the hiring of strongly partisan fact checkers and his lack of response on screening and assessment protocols present a deeply troubling picture of a secretive black box that acts as ultimate arbitrator of truth, yet reveals little of its inner workings. This is precisely the same approach used by Facebook for its former Trending Topics team and more recently its hate speech rules (the company did not respond to a request for comment).
From the outside, Silicon Valley looks like a gleaming tower of technological perfection. Yet, once the curtain is pulled back, we see that behind that shimmering façade is a warehouse of good old-fashioned humans, subject to all the same biases and fallibility, but with their results now laundered through the sheen of computerized infallibility. Even my colleagues who work in the journalism community, and are by nature skeptical, had assumed that Snopes must have rigorous screening procedures, constant inter- and intra-rater evaluations and ongoing assessments, and a total transparency mandate. Yet the truth is that we simply have no visibility into the organization’s inner workings, and its founder declined to shed further light on its operations for this article.
Regardless of whether the Daily Mail article is correct in its claims about Snopes, at the least what does emerge from my exchanges with Snopes’ founder is the image of the ultimate black box presenting a gleaming veneer of ultimate arbitration of truth, yet with absolutely no insight into its inner workings. While technology pundits decry the black boxes of the algorithms that increasingly power companies like Facebook, they have forgotten that even the human-powered sites offer us little visibility into how they function.
At the end of the day, it is clear that before we rush to place fact checking organizations like Snopes in charge of arbitrating what is “truth” on Facebook, we need to have a lot more understanding of how they function internally and much greater transparency into their work.
|
440baec91d029d64d7d13af36c1afd15 | https://www.forbes.com/sites/kalevleetaru/2017/01/01/fake-news-and-how-the-washington-post-rewrote-its-story-on-russian-hacking-of-the-power-grid/?sh=6bbae4e17ad5 | 'Fake News' And How The Washington Post Rewrote Its Story On Russian Hacking Of The Power Grid | 'Fake News' And How The Washington Post Rewrote Its Story On Russian Hacking Of The Power Grid
The control center of California's power grid in 2001. (John Decker / Bloomberg News)
On Friday the Washington Post sparked a wave of fear when it ran the breathless headline “Russian hackers penetrated U.S. electricity grid through a utility in Vermont, U.S. officials say.” The lead sentence offered “A code associated with the Russian hacking operation dubbed Grizzly Steppe by the Obama administration has been detected within the system of a Vermont utility, according to U.S. officials” and continued “While the Russians did not actively use the code to disrupt operations of the utility, according to officials who spoke on condition of anonymity in order to discuss a security matter, the penetration of the nation’s electrical grid is significant because it represents a potentially serious vulnerability.”
Yet, it turns out this narrative was false and as the chronology below will show, illustrates how effectively false and misleading news can ricochet through the global news echo chamber through the pages of top tier newspapers that fail to properly verify their facts.
The original article was posted online on the Washington Post's website at 7:55PM EST. Using the Internet Archive's Wayback Machine, we can see that sometime between 9:24PM and 10:06PM the Post updated the article to indicate that multiple computer systems at the utility had been breached ("computers" plural), but that further data was still being collected: “Officials said that it is unclear when the code entered the Vermont utility’s computers, and that an investigation will attempt to determine the timing and nature of the intrusion.” Several paragraphs of additional material were added between 8PM and 10PM, claiming and contextualizing the breach as part of a broader campaign of Russian hacking against the US, including the DNC and Podesta email breaches.
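For readers who want to reproduce this kind of chronology themselves, the Internet Archive exposes a public CDX API that lists every capture it holds of a given URL. A minimal sketch follows; the article URL and date in the example call are illustrative placeholders, not references to specific captures.

# Lists Wayback Machine captures of a URL on a given day via the public CDX API.
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def list_snapshots(page_url, day):
    """Return (timestamp, replay_url) pairs for captures of page_url on that day."""
    query = urlencode({
        "url": page_url,
        "from": day, "to": day,
        "output": "json",
        "fl": "timestamp,original",
    })
    with urlopen(f"http://web.archive.org/cdx/search/cdx?{query}") as resp:
        rows = json.load(resp)
    # The first row is a header; each capture can be replayed at web.archive.org/web/<ts>/<url>
    return [(ts, f"https://web.archive.org/web/{ts}/{orig}") for ts, orig in rows[1:]]

# Illustrative placeholders only:
for ts, snapshot in list_snapshots("washingtonpost.com/example-article-url", day="20161231"):
    print(ts, snapshot)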
Despite the article ballooning from 8 to 18 paragraphs, the publication date of the article remained unchanged and no editorial note was appended, meaning that a reader being forwarded a link to the article would have no way of knowing the article they were seeing was in any way changed from the original version published 2 hours prior.
Yet, as the Post’s story ricocheted through the politically charged environment, other media outlets and technology experts began questioning the Post’s claims and the utility company itself finally issued a formal statement at 9:37PM EST, just an hour and a half after the Post's publication, pushing back on the Post’s claims: “We detected the malware in a single Burlington Electric Department laptop not connected to our organization’s grid systems. We took immediate action to isolate the laptop and alerted federal officials of this finding.”
From Russian hackers burrowed deep within the US electrical grid, ready to plunge the nation into darkness at the flip of a switch, an hour and a half later the story suddenly became that a single non-grid laptop had a piece of malware on it and that the laptop was not connected to the utility grid in any way.
However, it was not until almost a full hour after the utility’s official press release (at around 10:30PM EST) that the Post finally updated its article, changing the headline to the more muted “Russian operation hacked a Vermont utility, showing risk to U.S. electrical grid security, officials say” and changed the body of the article to note “Burlington Electric said in a statement that the company detected a malware code used in the Grizzly Steppe operation in a laptop that was not connected to the organization’s grid systems. The firm said it took immediate action to isolate the laptop and alert federal authorities.” Yet, other parts of the article, including a later sentence claiming that multiple computers at the utility had been breached, remained intact.
The following morning, nearly 11 hours after changing the headline and rewriting the article to indicate that the grid itself was never breached and the “hack” was only an isolated laptop with malware, the Post still had not appended any kind of editorial note to indicate that it had significantly changed the focus of the article.
This is significant, as one driving force of fake news is that as much as 60% of the links shared on social media are shared based on the title alone, with the sharer not actually reading the article itself. Thus, the title assigned to an article becomes the story itself, and the Post’s incorrect title meant that the story that spread virally through the national echo chamber was that the Russians had hacked into the US power grid.
Only after numerous outlets called out the Post’s changes did the newspaper finally append an editorial note at the very bottom of the article more than half a day later saying “An earlier version of this story incorrectly said that Russian hackers had penetrated the U.S. electric grid. Authorities say there is no indication of that so far. The computer at Burlington Electric that was hacked was not attached to the grid.”
Yet, even this correction is not a true reflection of public facts as known. The utility indicated only that a laptop was found to contain malware that has previously been associated with Russian hackers. As many pointed out, the malware in question is actually available for purchase online, meaning anyone could have used it and its mere presence is not a guarantee of Russian government involvement. Moreover, a malware infection can come from many sources, including visiting malicious websites and thus the mere presence of malware on a laptop computer does not necessarily indicate that Russian government hackers launched a coordinated hacking campaign to penetrate that machine - the infection could have come from something as simple as an employee visiting an infected website on a work computer.
Moreover, just as with the Santa Claus and the dying child story, the Post story went viral and was widely reshared, leading to embarrassing situations like CNBC tweeting out the story and then having to go back and retract the story.
Particularly fascinating is that the original Post story mentioned that there were only two major power utilities in Vermont and that Burlington Electric was one of them, meaning it would have been easy to call both companies for comment. However, while the article mentions contacting DHS for comment, there is no mention of any kind that the Post reached out to either of the two utilities. Given that Burlington issued its formal statement denying the Post’s claims just an hour and a half later, this suggests that had the Post reached out to the company, it likely could have corrected its story prior to publication.
When I reached out to Kris Coratti, Vice President of Communications and Events for the Washington Post for comment, she responded that regarding the headline change, “Headlines aren’t written by story authors. When editors realized it overreached, as happens from time to time with headlines, it was corrected.” She also indicated that posting the editor’s note at the bottom of the article instead of the top was a mistake and indeed this was corrected shortly after my email to her inquiring about it.
Ms. Coratti’s response regarding the article headline is a fascinating reminder of just how many different people and processes combine to produce a single article in a newspaper – that contrary to popular belief, a reporter doesn’t sit down and write a story, choose a headline and then hit “Publish” and have the story go live on the newspaper website. Most newspapers, like the Washington Post, either employ dedicated headline writers or have their editors write the headlines for each piece and articles typically go through an elaborate review process designed to catch these sorts of issues prior to publication.
It is also interesting to note that the Post said it was an error for the editorial note to be buried at the very bottom of the page instead of at the top of the article, as was done for the Santa Claus story. This reflects the chaotic nature of newsrooms in which an editorial note is frequently added by an editor simply logging into a CMS portal and updating a live page, rather than a templated system which automatically places all editorial notes in the same place with the same style and formatting to ensure consistency.
Equally fascinating, neither Ms. Coratti nor Post Public Relations responded to any of my remaining queries regarding the article’s fact checking process. In particular, the Post did not respond when I asked how headlines are fact checked and if headline writers conduct any form of fact checking to ensure their summarized version is consistent with known facts. The Post also did not respond to a request for comment on why it took nearly half a day from the time the article was rewritten until an editorial note was finally appended acknowledging that the conclusions of the original article were false and that the article had been substantively rewritten to support a different conclusion, nor did the Post comment on why the editor’s note was originally placed at the bottom of the article and only moved after I inquired about its location.
Yet, perhaps most intriguing is that, as with the Santa Claus story, the Post did not respond to repeated requests for comment regarding how it conducts fact checking for its stories. This marks twice in a row that the Post has chosen not to respond in any fashion to my requests for more detail on its fact checking processes. Given the present atmosphere in which trust in media is in freefall and mainstream outlets like the Post are positioning themselves as the answer to “fake news” it certainly does not advance trust in the media when a newspaper will not even provide the most cursory of insight into how it checks its facts.
As with the Santa Claus story, the Post appears to have run this story without even attempting to perform the most basic of fact checks before publication. The original story noted that there were only two utilities in Vermont and yet the article states that the Post only attempted to contact DHS and does not mention any attempt to contact either of the utilities. Standard journalistic practice would have required that the Post mention that it attempted to reach either utility even if neither responded. The Post did not respond to a request for comment when I asked if it had attempted to reach either utility for comment prior to publication.
Putting this all together, what can we learn from this? The first is that, as with the Santa Claus and PropOrNot stories, the journalism world tends to rely far more on trust than fact checking. When one news outlet runs a story, the rest of the journalism world tends to follow suit, each writing their own version of the story without ever going back to the original sources for verification. In short – once a story enters the journalism world it spreads without further restraint as each outlet assumes that the one before performed the necessary fact checking.
The second is that the news media is overly dependent on government sources. Glenn Greenwald raises the fantastic point that journalists must be more cautious in treating the word of governments as absolute truth. Indeed, a certain fraction of the world’s false and misleading news actually comes from the mouths of government spokespeople. Yet, in the Post’s case, it appears that a government source tipped off the Post about a sensational story of Russians hacking the US power grid, and instead of reaching out to the utilities themselves or gathering further detail, the Post simply published the story as fed to them by the government officials.
The third is that breaking news is a source of a tremendous amount of false and misleading news, as rumors and falsehoods spread like wildfire in the absence of additional information. Top tier newspapers like the Washington Post are supposed to be a bulwark against these falsehoods, by not publishing anything until it has been thoroughly fact checked against multiple sources. Yet, it appears this is not the case – in the rush to be the first to break a story and not be scooped, reporters even at the nation’s most prestigious news outlets will take shortcuts and rush a story out the door. What would have happened if the Post had waited another day or two to collect responses from all involved, including Burlington Electric? It would have avoided publishing false information, but it also likely would have been scooped by another newspaper that wanted to be the first to break the story.
Indeed, “breaking news” is a tremendous problem for mainstream outlets in which they frequently end up propagating “fake news” in their rush to be the first to break a story. In a world beset by false and misleading news, do top tier news outlets have a professional responsibility to step back from breaking stories and only report on them after all details are known and they have had an opportunity to speak with all parties involved and understand more definitively what has happened? Financially this would likely be devastating in a share-first click-first world in which to the victor go the advertising dollars, but it would seem the only way to truly stop “fake news” from spreading.
|
2734807506be03395350f1e053b93617 | https://www.forbes.com/sites/kalevleetaru/2017/01/02/how-the-washington-posts-defense-of-its-russian-hacking-story-unraveled-through-web-archiving/ | How The Washington Post's Defense Of Its Russian Hacking Story Unraveled Through Web Archiving | How The Washington Post's Defense Of Its Russian Hacking Story Unraveled Through Web Archiving
The control room of a power plant in Nebraska. (AP Photo/Josh Funk)
As the Washington Post’s story of Russian hackers burrowed deep within the US electrical grid, ready to plunge the nation into darkness at the flip of a switch, unraveled into the story of a single non-grid-connected laptop with a piece of malware on it, the Post has faced fierce criticism over how it fact checked and verified the details of its story. It turns out that the Post not only did not fact check the story until after it was published live on its website, but in its defense of the story the Post made a number of false statements about what was written when, as the Internet Archive’s Wayback Machine reveals.
When I wrote yesterday about the Washington Post story, Kris Coratti, Vice President of Communications and Events for the Washington Post had offered just a single emailed response and had not responded to any of my remaining questions regarding the Post's fact checking and construction of the article in question. Last night, just over 20 hours later, she finally did respond to two of my questions.
As I noted yesterday, it seemed odd that Burlington Electric issued a formal response refuting the Post’s claims just an hour and a half after the Post’s publication. This would suggest that the Post would have gotten a response from Burlington if only it had just contacted the utility prior to publication, as is required by standard journalistic practice.
In fact, when I asked the Post why it had not contacted the utilities prior to publication, in her emailed response to me, Ms. Coratti asserted that the Post had indeed contacted both utilities for comment prior to publication and had not received a reply from either and so proceeded with publication. In fact, she went as far as to state “we had contacted the state’s two major power suppliers, as these sentences from the first version of the story attest: ‘It is unclear which utility reported the incident. Officials from two major Vermont utilities, Green Mountain Power and Burlington Electric, could not be immediately reached for comment Friday.’"
If this statement was present in the very first version of the story published at 7:55PM, that would mean that the Post had reached out to the companies for comment prior to publication and received no response.
However, as the Internet Archive’s Wayback Machine shows, this is actually false. Archived snapshots of the story at 8:16PM and 8:46PM make no claims about having contacted either utility and state instead only that “While it is unclear which utility reported the incident, there are just two major utilities in Vermont, Green Mountain Power and Burlington Electric.” No claim is made anywhere in the article about the Post having contacted the utilities for comment.
In fact, it was not until an hour after publication, somewhere between 8:47PM and 9:24PM that the Post finally updated its story to include the statement above that it had contacted the two utilities for comment.
I reached out to Mike Kanarick, Director of Customer Care, Community Engagement and Communications for Burlington Electric Department for comment on why his company had not responded to the Post’s prepublication request for comment.
It turns out that the reason Burlington Electric did not respond to the Post’s prepublication request for comment is that the Post did not actually reach out for comment until after it had already run its story. The Post’s article went live on its website at 7:55PM EST, but according to Mr. Kanarick, the first contact from the Post was a phone call from reporter Adam Entous at 8:05PM, 10 minutes after the Post's story had been published.
It is simply astounding that any newspaper, let alone one of the Post’s reputation and stature, would run a story and then ten minutes after publication, turn around and finally ask the central focus of the article for comment. Not only does this violate every professional norm and standard of journalistic practice, but it feeds directly into the public's growing distrust of media. In the era of “fake news” hysteria where publications like the Washington Post tout their extensive fact checking and vetting workflows as reasons that the public should trust their reporting over anyone else, it is surprising to see just how chaotic or non-existent that fact checking really is.
What exactly is “fact checking” when a newspaper runs a story and only calls the party involved after publication for comment on the published and live story that is already circulating widely? That suggests that the Post’s idea of fact checking is to publish first and then correct the story by rewriting it bit by bit in the hours following publication, rather than collecting all facts and developing a definitive hard story before ever allowing it to be published. While both models might be called “fact checking,” the latter is what leads to false and misleading news circulating, especially as other news outlets picked up on the Post’s story and ran it assuming that the Post had conducted all of the necessary fact checking.
It also tells us that the Post ran its story based solely and exclusively on the word of US Government sources that it placed absolute trust in. That the Post would run an entire story based exclusively on the word of its US Government sources and without any other external fact checking (such as contacting the two utilities), offers a fascinating glimpse into just how much blind trust American newspapers place in Government sources, to repeat their claims verbatim without the slightest bit of vetting or confirmation.
Moreover, Ms. Coratti’s response to me also asserted that “as soon as Burlington Electric released its statement … we modified the story to remove assertions that the electric grid had been penetrated and later added the editor's note.” Yet, as I noted in my response back to her (to which she has not responded), more than a full hour elapsed between Burlington’s press release and the Post finally updating its story. While a one hour response time might have been considered lightning fast and nearly instantaneous in a former era, in the world of social media in which stories spread in seconds, a delay of an entire hour in updating a story with critical facts that change the entire focus of the story and essentially amount to a retraction of the original narrative, represents an eternity during which the false original narrative continued to spread. Ms. Coratti also did not respond to a request for comment on why the Post took more than 11 hours to post an editor’s note notifying readers that the article had been substantively rewritten and the original thesis retracted.
It was also fascinating that the Post itself does not appear to closely track the changes it makes to stories, with Ms. Coratti writing with respect to the article title that “we repeatedly modify and refine headlines as we publish a story on multiple platforms; we do not keep track of such changes.”
It is both fascinating and troubling that the Post’s defense of its reporting in this case involved asserting that it had contacted the utility in question prior to publication, that it had included a statement attesting to this in the very first version of the article and that it immediately updated its story as soon as the utility issued a formal statement. Yet, all three of these statements appear to be false.
Ms. Coratti did not respond to a request for comment on the fact that her responses would appear to suggest that the Post itself is confused as to what it wrote, when and in what version of the article, though her earlier response about article headlines suggests the Post does not version its articles to record what they say and when. In an era in which any WordPress blogger has automatic strict versioning recording every edit they make to their posts through time, it is all the more surprising and shocking that the Post would not do the same.
Putting this all together, we see that the “fact checking” of mainstream journalism does not quite match the gleaming pristine aura touted by the journalism community in which top tier outlets are a bulwark against false and misleading news due to their rigorous and extensive fact checking processes that will not allow an article to be published until every detail has been fully confirmed. Instead, even the most celebrated outlets like the Post will run a story without the most basic of fact checking or, in this case, appear to have done their fact checking only after publication, allowing a false narrative to ricochet virally through both social and mainstream mediums for hours before correction.
Moreover, it was only through the incredible Internet Archive Wayback Machine, which saves snapshots of more than 279 billion webpages and stretches back more than 21 years to the very dawn of the modern web, that we were able to reconstruct the chronology of this Washington Post article and show how the story evolved and when certain statements were added or removed. Without the Wayback Machine, we would have only the Post’s word as to what its article said when and, as Ms. Coratti noted, the Post itself does not version or archive the entirety of its stories to be able to go back and definitively examine what was said and when.
In the end, as we peel back the gleaming veneer, we see that the way in which mainstream journalists really work doesn’t always match our expectations or, indeed, the claims that the journalism community itself makes about the rigor of things like fact checking and verification.
Thus, as I’ve said again and again, the answer to “fake news” and the issue of false and misleading information in general is not to place a few elites in the role of ultimate arbitrator of truth, but rather to develop a citizenry that is data and information literate. We also see that in a world in which incredible organizations like the Internet Archive are preserving the world’s online news for posterity and documenting the editing and rewriting and airbrushing that that news undergoes, news outlets must be far more transparent in how they report on the world around us, as ordinary citizens can now go back and fact check the fact checkers.
|
39dd82e2be86046c4856042e431ec886 | https://www.forbes.com/sites/kalevleetaru/2017/01/15/what-if-facebook-and-twitter-made-you-read-an-article-before-you-could-share-it/ | What If Facebook And Twitter Made You Read An Article Before You Could Share It? | What If Facebook And Twitter Made You Read An Article Before You Could Share It?
A Belgian pigeon, similar to the prized Belgian carrier pigeons that can be worth hundreds of... [+] thousands of euros. (SISKA GREMMELPREZ/AFP/Getty Images)
One of the most fascinating statistics about how we consume and share news online revolves around how few of us actually read the news articles we share – we see an interesting headline and click the “share” button to blast it out to all our friends and followers without ever reading further. In fact, upwards of 60% of links shared on social media are posted without the sharer reading the article first. Even when someone goes to the heroic and unusual lengths of actually reading an article before sharing it, they rarely make it beyond the first few paragraphs. These behaviors encourage clickbait headlines and feed into the “fake news” epidemic. This raises the fascinating question – what if Facebook and Twitter forced you to read through an entire article before you were allowed to share it with others?
Many news outlets today use JavaScript-powered beacons embedded in their articles to track how far readers scroll through each article, the time they spend on each section of the article and other micro-level assessments of engagement. Today all of that data is typically just fed back into a statistics portal and used for ad marketing, but the same tools could easily be turned around to assess whether a reader A) never read the article at all, B) skimmed just the lead paragraph quickly, C) skimmed the first half of the article quickly, D) scrolled quickly down the full length of the article, but scrolled too fast to really take in any of the details, E) scrolled quickly, but paused several times to read sections in more detail, F) read the entire article in detail or G) some combination of the above.
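A minimal sketch of how such beacon telemetry might be bucketed into reading categories like those above; the thresholds and the assumed reading speed are invented for illustration, and a real implementation would tune them against observed behavior.

# Classifies a reader's engagement from scroll depth and dwell time.
# All thresholds below are hypothetical illustrations.
def classify_engagement(max_scroll_pct, seconds_on_page, word_count):
    """max_scroll_pct: deepest point reached (0-100); seconds_on_page: dwell time."""
    expected_read_time = word_count / 4.0       # assumes roughly 240 words per minute
    if max_scroll_pct < 10 and seconds_on_page < 5:
        return "A: never really read the article"
    if max_scroll_pct < 30:
        return "B: skimmed the lead paragraph"
    if max_scroll_pct < 60:
        return "C: skimmed the first half"
    if seconds_on_page < 0.25 * expected_read_time:
        return "D: scrolled to the end too quickly to take in detail"
    if seconds_on_page < 0.75 * expected_read_time:
        return "E: scrolled through, pausing on some sections"
    return "F: read the entire article in detail"

print(classify_engagement(max_scroll_pct=100, seconds_on_page=40, word_count=900))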
In fact, Facebook already compiles several of these metrics as a way to fight clickbait, launching a number of algorithm changes over the last few years that take into account how long users spend reading articles they’ve clicked on through Facebook posts or those posts themselves. The company quietly uses these indicators to deprioritize shares and posts that most readers skip over or skim very quickly. In doing so, they allow the posts to be shared without any delays and display no visible indicators to users about the post, but internally tweak the post’s settings so that it will be less and less likely to appear in other users’ news feeds.
What if Facebook displayed these readership metrics as an actual visible “score” in the upper-right of each post or share that advertised to the world whether people viewing that post or clicking that link either A) immediately moved onward after reading just a few sentences (receiving a score of “red”), B) skimmed the top half of the article and returned (orange), C) skimmed all the way to the bottom and paused a few times to read portions in more detail (yellow) or D) read all the way to the bottom at a pace suggestive of the user actually spending the time to fully digest the piece (green). Facebook is already recording this data, so why not display it to end users?
In fact, in its proposal to combat “fake news” Facebook has proposed offering precisely such public indicators on news and other shared links to reflect that one of its fact checking organizations has disputed the contents of the article.
Thus, Facebook is tracking how much time users spend reading each post/link and the technology is or shortly will be in place to assign and display “truth” scores for each post/article, meaning Facebook has all of the pieces in place to assign a public “engagement” score for each piece of content on its platform and to publicly display that score. When one of your friends shares a link to a news article, that share could be made to display an indicator showing how much time everyone across Facebook is spending with that article and a second indicator showing how much time people in your social group are spending engaging with it and how many people in your network have read it already.
Combining these three metrics – global engagement, your social circle’s engagement and penetration into your social circle – offers a powerful set of signals as to an article’s contentiousness and relevance. If an article is trending widely globally, but has not penetrated your social circle despite a number of your contacts having read the article, that suggests that something about that article is making it be rejected or of no interest to your circle. Conversely, something which is going viral within your friend group, but the rest of world appears to have little interest in could indicate that it is false or misleading or that it reflects something of niche interest to your community (for example a new feature in your favorite niche PC game).
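A minimal sketch of how those three signals might be combined into a simple interpretation; the field names and thresholds are hypothetical, not any platform’s actual scoring.

# Combines global engagement, circle engagement and circle penetration into a rough signal.
from dataclasses import dataclass

@dataclass
class ShareSignals:
    global_read_ratio: float    # share of all clicks that led to a full read
    circle_read_ratio: float    # same ratio within the viewer's social circle
    circle_penetration: float   # fraction of the viewer's circle who saw the link

def interpret(s: ShareSignals) -> str:
    if s.global_read_ratio > 0.5 and s.circle_penetration > 0.3 and s.circle_read_ratio < 0.2:
        return "widely read globally but bouncing off your circle"
    if s.circle_read_ratio > 0.5 and s.global_read_ratio < 0.2:
        return "niche interest to your circle, little traction elsewhere"
    return "no strong signal"

print(interpret(ShareSignals(global_read_ratio=0.6, circle_read_ratio=0.1, circle_penetration=0.4)))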
Combining these indicators further, imagine that you spot an attention-grabbing headline in your news feed and you click on the share button to share it with all of your friends. Instead of instantly sharing with the world, a popup appears and tells you that most of the people sharing this article have not read it and that those who do spend a few moments to read through it abandon it before reading beyond the first paragraph or do not share it after reading it. That would likely give you pause and perhaps make you spend a few moments to at least skim the article to make sure it is what you thought it was.
Alternatively, imagine if Facebook required you to answer a short quiz about an article before you were allowed to share it, withholding the link until you answered correctly. That would immensely reduce blind sharing and ensure that users spend the time to read and comprehend, at a basic level, the content they are sharing.
Yet, “frictionless sharing” is what the social web is built upon – making it easier and easier for anyone, anywhere to tell the world whatever is on their mind at the moment with a single click. Anything that impinges on this streamlined process and makes it even the slightest bit more difficult to share would have a substantial impact on the financial bottom line of the ad-driven social media world in which even a share of a “fake news” story generates revenue for them.
Putting this all together, what would we gain if users had to prove they had read an article before they were allowed to share it? In terms of “fake news” (false and misleading coverage) it is unclear whether such a process would really combat the spread of this news, since typically such coverage is written such that even if someone read the entire article they might still share it. Clickbait would likely be substantially impacted, since Facebook’s current approach of merely limiting the spread of such stories doesn’t address the root problem of allowing it to be posted in the first place. Perhaps, then, the biggest benefit would be forcing the world’s online citizenry to become more information literate, to read and think about the information they consume before blindly sharing it with the planet and force us all to spend a bit more time thinking about what we read online and a bit less time acting as illiterate carrier pigeons.
|
09bb22507215e0a91b8a24a6c0593bec | https://www.forbes.com/sites/kalevleetaru/2017/02/04/fighting-social-media-hate-speech-with-ai-powered-bots/ | Fighting Social Media Hate Speech With AI-Powered Bots | Fighting Social Media Hate Speech With AI-Powered Bots
Could AI-powered bots fight hate speech by flooding the internet with love? (GERARD JULIEN/AFP/Getty... [+] Images)
As social media platforms have become ever more intrinsic to how we live our lives and begun to evolve into the primary medium through which we communicate and listen to the rest of the world, their rise has handed a megaphone to the world’s hate and vitriol. In fact, it was Twitter that initially stepped forward to staunchly defend the rights of terrorists and their sympathizers to communicate via its platform before abruptly reversing itself in the face of fierce public criticism. Yet, despite myriad programs and policies designed on paper to fight abuse, in reality the platforms have done very little to curb the spread of hate speech, harassment and violent threats. This raises the question of whether the rise of deep learning-powered “bots” could offer a powerful solution to online hate speech, by deploying them en masse to report, counter and overwhelm hateful posts in realtime.
Over the last few years deep learning algorithms have made enormous advances in their ability to process human text and imagery at levels of sophistication and accuracy that approach human levels at times, while even simple ELIZA bots have managed to carry on fairly convincing chats for more than half a century.
While far from HAL 9000 levels of comprehension, the current state of the art in deep learning- and heuristic-powered chat bots is quite capable of the kind of “linguistic legerdemain” (to use the words of Spock) required to identify the simplistic overt threats of violence and hate speech that are readily found on social platforms like Twitter. Nuanced attacks on women’s rights would likely escape such algorithms, but simple calls promoting the beating of women or violence against public figures, or posts using racial or other charged epithets to denigrate ethnic, religious or other minorities, could be readily identified at quite high levels of accuracy.
This raises the question - what’s a bot to do when it finds hate speech online? One could imagine at the simplest level a small set of bots that simply scour social media platforms for any matching posts and automatically report them via the platforms’ native abuse reporting tools. Often companies like Twitter or Facebook respond to high-profile cases of abusive behavior on their platforms by offering generic statements that any reported abuse is removed, but nearly always stop short of confirming or denying whether anyone actually reported the posts in question as abuse. Auto-reporting bots like these would not only ensure that every overt hate speech post is reported to the platforms, but would also offer a unique electronic evidence trail recording the precise timestamp when each post was reported, so that response times for different kinds of content can be measured. Does Twitter take down threats against some minorities more quickly than others? Does Facebook have a higher takedown rate for harassment of people in the news compared with those not being discussed at the moment?
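As a rough illustration only, a minimal sketch of such a passive auto-reporting bot might look like the Python below. It assumes a stream of public posts is already available, uses deliberately crude keyword patterns in place of a trained classifier, and treats the actual report submission as a hypothetical placeholder, since the platforms do not expose a general-purpose abuse-reporting API:

import json
import re
import time

# Deliberately crude, illustrative patterns; a real system would use a trained classifier.
OVERT_PATTERNS = [
    re.compile(r"\bkill all\b", re.IGNORECASE),
    re.compile(r"\bdeserves? to be beaten\b", re.IGNORECASE),
]

def looks_like_overt_hate_speech(text):
    """Return True if a post matches one of the crude overt-threat patterns."""
    return any(pattern.search(text) for pattern in OVERT_PATTERNS)

def report_to_platform(post_id):
    """Hypothetical placeholder: the platforms do not offer a general abuse-reporting API."""
    pass

def run(post_stream, evidence_log_path="reported_posts.jsonl"):
    """Scan (post_id, text) pairs, report matches and keep a timestamped evidence trail."""
    with open(evidence_log_path, "a") as log:
        for post_id, text in post_stream:
            if looks_like_overt_hate_speech(text):
                report_to_platform(post_id)
                log.write(json.dumps({"post_id": post_id, "reported_at": time.time()}) + "\n")

The timestamped log, rather than the detection logic, is the important piece: it is what would later allow takedown response times to be compared across different categories of content.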
By flooding the platforms with reports of every single hateful post and offering an evidence trail proving they were aware of the posts, such bots would at the very least force social media companies to acknowledge the scale and scope of hate speech on their platforms. By conducting follow up tracking of all reported posts to see which ones the companies take down and which they leave up, the companies’ internal abuse guidelines can also be precisely quantified.
At a minimum, this would force Twitter, Facebook and others to publicly codify their acceptable content guidelines in response to the community audits such data would enable. If Twitter publicly prohibits threats of violence, yet the data shows a single meme with thousands of reported overt threats of violence against women that the company declined to remove, it would face immense pressure to either step up its enforcement activity or clarify its policies on threats against women.
It would also force the companies to confront and codify cultural and religious exceptions to their policies. In some countries members of certain racial, ethnic, religious or other minorities have very limited rights and laws in those countries may permit what Americans might consider to be overt and clear hate speech or threats of violence. In the absence of robust quantitative data on how the companies handle such exceptions it is unclear how they reconcile differences in the concepts of acceptable speech globally. Offering a running post-by-post log of the companies’ responses to reported abuse would make this much clearer.
One could even imagine such bots publishing a daily list of all flagged abusive posts along with a daily leaderboard of the most egregious offenders. Such a list would likely violate the platforms’ provisions on data use and privacy, but if the companies began shutting down accounts posting these lists or turning to the courts to stop such lists, the backlash would likely be intense.
Such bots would be strictly passive in nature, recording suspected abuse and reporting it through the platforms’ existing abuse channels, but doing nothing further to counter it. Imagine if things went a step further and the bots were given authority to actually counter hate speech themselves – what might this look like?
Today it is relatively straightforward to buy access to armies of millions of fake social media accounts that can be used for everything from buying followers or likes to artificially elevating a topic by posting millions of positive or negative comments about someone or something. Instead of a small handful of bots scanning Twitter and flagging abusive posts, what if an army of millions of bots were given control over millions of Twitter accounts and given unlimited authority to counter all hate speech they encountered?
Under one model, the bots would coordinate with each other and when any of them find an account posting a hateful comment, that account would be sent a single response warning the owner that their comment could be deeply offensive and asking that in future they refrain from such posts. The user account would then be added to a list and future posts by the same account would result in referrals to the platform’s abuse line, but no further automated responses from the bots, or perhaps at most one response per day.
Yet, if someone tweets encouragement to beat women, it is unlikely that a single chastising tweet from an anonymous account is going to change their behavior. Countering such hate speech requires more than chastisement; it requires responses that encourage self-censorship. The Chinese government learned long ago that merely deleting posts does little to change behavior, but that if you instead flood the person with posts attacking them and publicly shame them, they will likely think twice about future posts. You won’t change their viewpoints, but you will cause them to self-censor and no longer overtly share those views with others.
Imagine the Chinese censorship model replicated using an army of AI-powered bots. Someone posts a tweet that encourages unprovoked violent attacks against a minority group for their religious beliefs. A few thousand of the automated bots flag the post to Twitter and then begin a relentless campaign of counter posts criticizing the poster, flooding his or her Twitter account in the process and likely causing many of the account’s followers to unfollow it to avoid the flood of incoming posts. The bots might also scan the account’s entire posting history and identify other accounts it frequently corresponds with, retweeting its responses to those accounts. At sufficiently high volume, this bot army would bury hate speech posts in a flood of anti-hate-speech discussion and toxify hate-speech-posting accounts to the point that they may lose many of their followers who are fleeing the barrage of posts.
Twitter would almost certainly begin blocking these bot accounts as quickly as it could, but the ease of registering new accounts means it would be relatively trivial to stay ahead of those bans. The ensuing media coverage and public dialog would place the platforms under intense pressure to finally devote real resources to combating hate speech. Moreover, the tens or even hundreds of millions of daily posts from these bots would render the platforms almost unusable, leaving them no choice but to adopt technological measures both to combat hate speech and to address the issue of robotic accounts. Yet, even if robotic accounts were finally successfully eliminated, the success of such a bot army would likely lead to a human volunteer army to replace it.
In fact, perhaps the most surprising story here is that no-one has actually done this at scale. From counterterrorism to counterfeiting, human trafficking to hate speech, illegal activity to threats of violence, any issue imaginable could be combated through such bot warfare.
Of course, the opposite is also true – once bot warfare is used to fight hate speech online, it is entirely likely that those who promote hate speech would return with their own bot armies to promote it. In many ways the Islamic State has proven the success of this model, using a human-based equivalent that leverages its army of global supporters to post content from myriad accounts and moving from account to account as they are shut down, illustrating how hard it is to fight such networks. But, elevating the world of online censorship from humans to bots would profoundly reshape the landscape of free speech in that the sheer scalability of bots means a bot army could instantly overwhelm organic human discourse, much as automated trading has begun to overwhelm human influence in the financial markets.
One could imagine that governments like China and Russia are already investing heavily in experimenting with such "bot idea armies” and deploying prototypes to augment their vast human propaganda armies.
Indeed, it is an interesting commentary on Silicon Valley’s priorities that Facebook founder Mark Zuckerberg has drawn more public attention to devoting 100 hours of his time to building a robotic butler that can make him toast than to spending those 100 hours personally building a chatbot to fight hate speech online.
When I reached out to Twitter and Facebook for comment, Facebook did not respond by publication time while Twitter responded with a link to its “automation rules and best practices” guide, but said it had no comment on the specific applications outlined here and whether they would be deemed in violation of those practices or might be permitted in limited fashion to help the platform combat hate speech.
In the end, as the simplistic heuristic-based chatbots of the past half century give way to sophisticated deep learning-powered algorithms capable of intricately emulating human conversation, it is only a matter of time until we see those chatbots weaponized and deployed as “free speech” and “counter speech” armies that will forever reshape the online world.
|
888209e6d3061794fcc80817ebdc2661 | https://www.forbes.com/sites/kalevleetaru/2017/02/24/are-web-archives-failing-the-modern-web-video-social-media-dynamic-pages-and-the-mobile-web/ | Are Web Archives Failing The Modern Web: Video, Social Media, Dynamic Pages and The Mobile Web | Are Web Archives Failing The Modern Web: Video, Social Media, Dynamic Pages and The Mobile Web
Basilica Cistern as an example of engineering falling into decay (Shutterstock)
The Internet Archive is perhaps the most famous archive of the World Wide Web's evolution and history, founded 20 years ago by Brewster Kahle when he realized that the web, despite its pivotal role in reshaping how human society accessed and engaged with the world around us, was by its nature ephemeral and being lost with each passing day. He founded the Archive to preserve the early Internet over its formative years, acting as the internet equivalent of a library archive, accepting donations of crawling data, performing its own crawls and amassing all of this into a single digital catalog. The success of the Archive helped spur myriad web archiving initiatives across the globe, focused on everything from national culture to scientific data.
However, in just its 24 brief years, the modern web has evolved with breathtaking speed from a simple (largely textual) platform for sharing scientific research into a rich multimedia and increasingly intelligent network that seeks to connect every person on earth. This state of constant change has resulted in an Internet that has largely run ahead of much of the web archiving community, meaning that our archives are preserving less and less of the Internet even as that Internet powers more and more of the world around us.
For the most part, a web archiving crawler built 20 years ago could still largely function today, downloading a web page, extracting its links, crawling each of those links, extracting their links in turn, crawling those links and so on, while recording each page's HTML and images into its archives. Styling like CSS might be missed by those early crawlers, though well-designed ones built for arbitrary resource identification would still function today, albeit not as efficiently.
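To make that concrete, here is a minimal sketch of such a crawler in Python, assuming the third-party requests and BeautifulSoup libraries and deliberately ignoring the politeness controls (robots.txt, rate limiting) and WARC output that a production archival crawler would need:

from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def archive_site(seed_url, max_pages=100):
    """Breadth-first crawl: fetch a page, record its HTML, extract its links, repeat."""
    seen, queue, archive = set(), deque([seed_url]), {}
    while queue and len(archive) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        archive[url] = response.text  # a real archive would write WARC records instead
        soup = BeautifulSoup(response.text, "html.parser")
        for tag in soup.find_all("a", href=True):
            queue.append(urljoin(url, tag["href"]))
    return archive

The important limitation is that logic like this only ever sees the static HTML the server returns, which is exactly why it breaks down on the dynamic, script-rendered pages discussed below.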
The problem is that the web is no longer built upon the simple premise of a collection of small static HTML and image files served up with a simple tag structure and readily parsed with a few lines of code. Today’s web is richly dynamic, multimedia and increasingly broken into walled gardens and device-specific parallel webs.
In particular, the web has been developing along four key evolutionary paths that have proved particularly problematic for the archiving community to preserve: multimedia, social media, dynamic content and the mobile web.
The web of today is a far cry from the web of 1995 when I launched my first web startup, when web pages were essentially like book pages – for the most part piles of text with an odd image thrown in here and there for illustrative effect. Today the web is all about streaming video and audio. Even 4K videos are beginning to increase in number on YouTube and other streaming sites. Multimedia is difficult to archive not only because of its size (it’s quite easy to accumulate a few petabytes of HD video without much difficulty), but also because most streaming video sites don’t make it easy to download the original source files. While numerous utilities exist that are able to reverse the streaming protocols used by major video hosting sites, the sites themselves rarely offer officially sanctioned APIs for bulk downloading large volumes of their content as raw video source files. In our device-centric world, in which we watch videos on everything from large-format televisions to ultra-high-resolution desktops to low-resolution phones, it is also important to recognize that streaming sites typically offer multiple versions of a video at different resolutions and compression levels that can result in dramatically different viewing experiences. The majority of video archiving solutions today focus on just the default version of a video or the highest resolution version, rather than attempting to archive all editions of a stream. Some platforms also go to great lengths to try and prevent unauthorized downloading of their content via special encodings, encryption and other protections.
Archiving streaming video is not an unsolved problem and widely used tools like the Internet Archive’s Archive-IT system actually include support for platforms like YouTube right out of the box. Yet, support for streaming video is actually fairly rare among the myriad web archiving projects underway today – the vast majority of them are unable to properly download and archive a YouTube video and preserve it for posterity as part of their core crawling activity.
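For projects that do want to preserve streaming video, the common pattern is to lean on an existing downloader rather than reverse the streaming protocols themselves. A minimal sketch using the widely used youtube-dl utility (one possible choice, named here as an assumption; archival systems like Archive-It use their own integrations) might look like this:

import youtube_dl  # third-party package: pip install youtube_dl

def archive_video(url, out_dir="video_archive"):
    """Download one streaming video for preservation."""
    options = {
        "outtmpl": out_dir + "/%(id)s.%(ext)s",  # one file per video id
        "format": "best",  # grabs a single edition; a fuller archive would fetch every format
    }
    with youtube_dl.YoutubeDL(options) as ydl:
        ydl.download([url])

As the comment notes, even this captures only one edition of the stream, echoing the point that most archiving solutions preserve a single version rather than every resolution and encoding on offer.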
Social media offers perhaps the most intractable challenge to web archiving by virtue of the walled gardens being erected by the major social platforms. While Twitter has long offered a firehose of all of its public tweets, which is in fact archived by the Library of Congress, Facebook and many other platforms do not offer commercial data firehoses that archivers can simply plug into. Moreover, outside of Twitter nearly all major social platforms are moving towards extensive privacy settings and default settings that encourage posts to be shared only with friends. The trend today is no longer to broadcast one’s every waking moment to the world, but rather to share intimate thoughts with friends and family. This means that even if companies like Facebook decided to make available a commercial data stream of all public content across the entire platform, the stream would capture only a minuscule fraction of the daily life of the platform’s 2 billion users.
From a web archival standpoint, the major social media platforms are largely inaccessible for archiving. While tools exist to assist in bulk exporting posts from Facebook, the platform continually adapts its technical countermeasures and has utilized legal threats in the past to discourage bulk downloading and distribution of user data. Shifting social norms around privacy mean that regardless of technological or legal countermeasures, users are increasingly walling off their data and making it unavailable for the public access needed to archive it. In short, as social media platforms wall off the Internet, their new private parallel Internets cannot be preserved, even as society is increasingly relying on those new walled gardens to carry out daily life.
The dynamic web poses unique challenges to the simplistic crawlers used for many web archiving projects. For example, the CNN homepage uses JavaScript to render the majority of the page. A simplistic crawler that simply fetches a page and parses its static HTML as-is will fail to download or preserve the majority of the page. Indeed, after CNN introduced the first iteration of its dynamic homepage in April 2015, a number of web archives ceased preserving anything other than the above-the-fold headlines – the rest of the homepage simply ceased to exist. When CNN rolled out another update to its homepage sometime in November 2016, some web archives simply began displaying a blank page for all snapshots over the last four months.
This is because many web archiving projects today use crawlers built for the web of a quarter century ago, rather than the web of today. Many of the archival crawlers I’ve seen are extraordinarily simplistic, lacking any of the efficiency and stability enhancements standard on today’s commercial production crawlers. Many are simply cobbled together Python scripts or Java applications that are less robust than crawlers I wrote 23 years ago. Many archival crawlers expect static HTML pages where the entirety of the page is contained in a single HTML response that can be processed as-is in isolation. Few incorporate refinements like JavaScript execution engines (such as Google’s V8 engine) or full page rendering and DOM crawling and thus have no possibility of rendering modern dynamic pages.
In contrast, Google’s own crawlers appear to have supported basic JavaScript rendering at least as early as 2011 and by 2015 they appear to have been fully rendering dynamically generated content via JavaScript inside of the crawler and compiling the indexed version of each page via DOM traversal. This means Google’s crawlers “see” pages the same way a modern web browser does and therefore have no issues with dynamic content like the CNN homepage.
Building Google-style dynamic crawlers with inbuilt JavaScript support is actually not that difficult, especially with the availability of Google’s V8 engine and JavaScript-first environments like Node.js. Scaling such crawlers to crawl the open web and process billions of pages efficiently is a different matter, but hybrid approaches such as using scout crawlers to identify sites using dynamic rendering or filters designed to identify dynamic pages and recrawling those pages using a V8-powered crawler can act as a useful bridge.
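As a rough sketch of what the fallback rendering step in such a hybrid approach could look like, the snippet below uses Selenium driving headless Chrome (one possible toolchain among several) to load a page, let its JavaScript execute and capture the resulting DOM:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def fetch_rendered_html(url):
    """Load a page in headless Chrome and return the DOM after its scripts have run."""
    options = Options()
    options.add_argument("--headless")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return driver.page_source  # the rendered DOM, not the raw HTML the server sent
    finally:
        driver.quit()

# A scout crawler might fetch pages statically first and call fetch_rendered_html()
# only for pages whose static HTML contains little visible content.

Rendering in a real browser is far more expensive per page than a static fetch, which is why the scout-and-recrawl division of labor described above matters at archive scale.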
No matter what approach they choose, the simple fact of the matter is that the era of using traditional static HTML web crawlers for archival work has ended. A web archive that uses crawlers that cannot render dynamic JavaScript-powered web pages simply cannot robustly access and preserve the increasingly dynamic web.
Finally, if we think of the inability to index dynamic content as preventing web archives from preserving the dynamic web, then the failure of many web archives to consider mobile content is preventing web archives from preserving the mobile web. Over the last few years Internet users have increasingly turned to mobile devices from cellphones to tablets to access the Internet. From early mobile-optimized sites to today’s mobile-first world, the Internet of today is gradually leaving its desktop roots behind. Google has been a powerful force behind this transition, penalizing sites that do not offer mobile versions.
Yet, many of the web archives I’ve perused fail to robustly index this parallel web. Few actively scan pages for tags indicating the availability of AMP or mobile editions and automatically crawl and index those. Even those that do look for AMP pages do not always switch to a mobile user agent and mobile emulation to fetch those editions. An increasing number of servers scan the user agent field and deny access to the mobile edition of a page unless the client is an actual mobile device, meaning an ordinary crawler requesting a mobile page, but using its standard desktop user agent tag will simply be redirected to the desktop version of the page. Some sites go even further, returning versions of the site tailored for tablets versus smartphones and even targeting specific devices for truly customized user experiences, requiring multiple device emulation to fully preserve a page in all its forms.
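A minimal illustration of the user agent issue, assuming the requests library and using example user agent strings that are purely stand-ins:

import requests

DESKTOP_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # stand-in desktop string
MOBILE_UA = "Mozilla/5.0 (iPhone; CPU iPhone OS 10_0 like Mac OS X) AppleWebKit/602.1"  # stand-in mobile string

def fetch_both_editions(url):
    """Fetch the same URL as a desktop client and as a mobile client and return both responses."""
    desktop = requests.get(url, headers={"User-Agent": DESKTOP_UA}, timeout=10)
    mobile = requests.get(url, headers={"User-Agent": MOBILE_UA}, timeout=10)
    # A server that sniffs the user agent may serve the second request a different
    # (mobile or AMP) edition, or redirect it back to the desktop page; an archive
    # that wants the mobile web needs to capture and compare both.
    return desktop.text, mobile.text

Fully emulating specific devices (screen sizes, touch events, device identifiers) requires a real browser engine, but even this header-level comparison reveals how many sites serve materially different content to mobile clients.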
Adding mobile web support to web archives is fairly trivial, but it is remarkable how few archives have implemented complete robust mobile support. Even those that offer basic mobile crawling support rarely crawl all versions of a page to test for how differences in device and screen capabilities affect the returned content and the level of dynamic customization in use.
Putting this all together, the incredible vision of preserving the web for future generations led to myriad projects today tasked with crawling and saving copies of the world’s websites. However, with a few exceptions the web archiving community is still stuck in a quarter-century-old mindset of how the web works and has largely failed to adapt to the rapidly evolving world of video, social media walled gardens, dynamic page generation and the mobile web. Some of these have no easy answers while others are trivial to address, but all suggest that greater collaboration is needed between the archiving community and the broader technology industry, especially companies that build the state-of-the-art crawling infrastructures that power modern web services. In the end, to truly preserve today’s web requires a lot more than the simple crawlers that sufficed 24 years ago.
|
1d3b6c23343ec1a9b497883956ff057f | https://www.forbes.com/sites/kalevleetaru/2017/03/27/what-research-libraries-and-web-archives-could-learn-from-the-commercial-cloud/ | What Research Libraries And Web Archives Could Learn From The Commercial Cloud | What Research Libraries And Web Archives Could Learn From The Commercial Cloud
The main room of a 300-year-old library. (BENOIT DOPPAGNE/AFP/Getty Images)
In 2014 I optimistically wrote for the Knight Foundation blog that libraries could reinvent themselves in the digital era, tracing my own collaborations with the Internet Archive over the prior year and drawing from my opening keynote address to the 2012 IIPC General Assembly at the Library of Congress. Yet, reflecting back three years later, looking at just how adrift and leaderless so many research libraries have become in the digital era, unsure of how to reinvent themselves and often too arrogant and insular to reach out beyond the communities they have worked with for centuries, I am no longer so certain that research libraries and the academic communities that work most closely with them can genuinely reimagine themselves on their own. Community libraries have found great success reinventing themselves to better fit into modern lifestyles, from collaborative spaces to free wifi to ebooks and even 3D printers and virtual reality systems, but research libraries as a whole seem to be struggling to find their footing in the digital era. What might they learn from the world of the commercial cloud and indeed the broader technological future of Silicon Valley?
The commercial cloud has truly transformed how we think about computing in the modern era, from the shift from hardware to managed services and expertise, to the rise of seamless security, to deep learning systems accessible with a single API call. Moreover, the scale that companies like Google operate at dwarfs even the largest collections in the library world. When I talk with many in the library community, they point to numbers like the Internet Archive’s 20 petabytes of holdings as evidence that the library world has some of the largest datasets in existence. Yet, in Google’s world, a couple of engineers could borrow a 50 petabyte cluster to run a quick sorting experiment half a decade ago, and today anyone can table scan an entire petabyte in just 3.7 minutes. With one line of SQL you could analyze the entire holdings of the Internet Archive in roughly an hour and 14 minutes, without having to write a line of programming code or manage a single file.
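To make the “one line of SQL” point concrete, a hedged sketch using the BigQuery Python client is shown below; the project, dataset and table names are purely hypothetical, and the point is simply that a single statement can scan an arbitrarily large collection with all capacity planning handled by the service:

from google.cloud import bigquery  # third-party package: pip install google-cloud-bigquery

client = bigquery.Client()

# Hypothetical table holding a crawled web archive; one statement scans the whole thing.
query = "SELECT COUNT(DISTINCT domain) AS domains FROM `my-project.web_archive.pages`"
for row in client.query(query).result():
    print(row.domains)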
This immense technical capacity is increasingly being delivered as user-friendly API building blocks that can be rapidly plugged together to build global-scale applications of incredible capability and complexity with just a few lines of code. In my own open data GDELT Project, I’ve long looked for a way that my web crawlers could live stream the unusual files they encounter each day (such as images that crash a major image library or which work with one library but fail in another) back to a central analysis server for automated diagnosis. I didn’t want to add a lot of complexity to my crawlers or stand up and manage a large cluster that could absorb a sustained stream of gigabytes per second of data. Building any kind of live stream absorption system requires a lot of architectural design to ensure it is robust and to deal with issues like automatic scaling and capacity management.
That’s when I came across Google’s Cloud Storage Streaming Transfers service, which allows one to live stream effectively infinitely into Google’s storage fabric and access the results as standard files. With the addition of just three lines of code to my crawlers (one to open the stream, one to write to the stream and one to close the stream) suddenly I was live streaming all of the data I needed with all of the scaling and capacity planning being handled by Google itself, rather than being something I needed to manage. As the number of crawlers increased, the system simply scaled linearly and transparently.
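A rough approximation of those three lines, assuming the google-cloud-storage Python client (newer versions of which expose a file-like interface on blobs) and using placeholder bucket and object names:

from google.cloud import storage  # third-party package: pip install google-cloud-storage

def stream_unusual_file(bucket_name, object_name, payload_chunks):
    """Live stream data into a Cloud Storage object: open the stream, write to it, close it."""
    blob = storage.Client().bucket(bucket_name).blob(object_name)
    with blob.open("wb") as stream:      # line 1: open the stream
        for chunk in payload_chunks:
            stream.write(chunk)          # line 2: write as the data arrives
    # line 3: closing the stream (leaving the with-block) finalizes the object

All of the buffering, scaling and capacity planning happens on the service side, which is precisely what makes the pattern attractive for a fleet of lightweight crawlers.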
In my case I’m using this capability as essentially a managed infinitely-sized working buffer, but since Streaming Transfers are handled the same as any other file on GCS, the resulting streamed files can be configured as Multi Regional files and made as robust and durable as Google is capable of, mirroring those files across multiple physical data centers throughout the world.
Why is this relevant to web archiving? In short, because those three lines of code effectively transformed those crawlers into a miniature Internet Archive. Instead of discarding all of the files my crawlers encounter, the addition of those three lines of code allow those crawlers to live stream files of interest back to Google’s global storage infrastructure. To make my own Internet Archive all I’d have to do is tell those crawlers to stream all files back instead of just some files and to configure my GCS storage to retain the files infinitely and copy across regions instead of using it as a scalable cloud scratch disk.
In fact, if all you wanted to do was crawl the open web and archive everything you encountered (literally making your own personal Internet Archive), you could pretty much build a web-scale crawling and archival infrastructure by just connecting Google’s existing cloud building blocks like Cloud Pub/Sub, Cloud Datastore and Stackdriver Monitoring, data mine the ingested content with Cloud Vision and Cloud Natural Language and store all of the results to Cloud Storage, all with the knowledge that your system can transparently scale to the tens or even hundreds of petabytes without having to change a line of code.
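As a sketch of how just two of those building blocks snap together (the project, topic and bucket names below are placeholders), a crawler worker might hand newly discovered links to Cloud Pub/Sub while writing each fetched page durably to Cloud Storage:

from google.cloud import pubsub_v1, storage

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "urls-to-crawl")  # placeholder names
bucket = storage.Client().bucket("my-web-archive")                # placeholder bucket

def handle_fetched_page(url, html, discovered_links):
    """Persist one fetched page and queue its outbound links for other workers."""
    bucket.blob("pages/" + url.replace("/", "_")).upload_from_string(html)
    for link in discovered_links:
        publisher.publish(topic_path, data=link.encode("utf-8"))

The queue, the storage fabric and their scaling behavior are all managed services, so the archiving logic itself stays only a few lines long.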
In short, with very little code, you can build a system capable of open web archiving just by connecting these prebuilt cloud building blocks together. Compare this with hiring a huge development team, building and managing your own massive hardware and software cluster infrastructure and trying to build tools that can scale to even a small fraction of the web. After all, it’s hard to beat a cloud environment that cost more than $30 billion to build.
Even for those web archives that have massive legacy data centers they want to continue to maintain for the time being, there are still so many lessons they can learn from the building block and services mentality of the commercial cloud, especially around the areas of automatic scaling, reliability, robustness, security and developer workflows.
Of course, not every research library conducts web archiving, and the lessons here apply equally well to other kinds of research workflows.
Want to OCR scanned books and other materials in 56 languages for as low as 15 to 30 cents a book, with accuracy on languages like Arabic that approaches human quality in many cases? There’s no hardware to maintain, no massive OCR clusters to build and no worrying about scaling and capacity – just plow the images through a cloud API as fast as you generate them and get back searchable OCR’d text. In short, the modern cloud has moved beyond renting hardware to providing a myriad of prebuilt service building blocks that you can just plug together to instantly build any application imaginable and leave the management to the cloud vendor.
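A hedged sketch of that OCR workflow using the Cloud Vision Python client (the client surface has shifted somewhat across library versions, so treat this as illustrative rather than definitive):

from google.cloud import vision  # third-party package: pip install google-cloud-vision

def ocr_page_image(image_path):
    """Send one scanned page image to the Cloud Vision API and return the recognized text."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text

Pages can be pushed through a call like this as quickly as a scanner produces them, with throughput, scaling and model updates all handled on the service side.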
Of course, once libraries and archives acquire all of this content, what can they do with it?
As I pointed out in my 2015 NFAIS keynote address, it has been just over half a century since Dialog introduced us to keyword searching online text archives and 50 years later we still access the modern world through the keyword. It is astounding to think that with the computing revolution and information explosion over the past half century, from landing on the moon to the birth of the web, the mechanism through which we access all of this information has never moved beyond the humble keyword.
This is where the rise of cloud-based deep learning APIs is poised to have an immense impact on research libraries. You can catalog hundreds of millions of photographs by their contents, transform a multi-petabyte video archive into a fully searchable database, generate transcripts of speeches and even translate them into multiple languages, or interrogate vast text archives in complex ways. All of this is possible through simple remote cloud APIs, requiring no knowledge of deep learning.
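As one small example of how thin these API layers are, here is a hedged sketch of machine-translating a snippet of archival text with the Cloud Translation v2 Python client (credentials and quota setup are assumed to already be in place):

from google.cloud import translate_v2 as translate  # third-party package: pip install google-cloud-translate

def translate_to_english(text):
    """Translate a snippet of text into English using the Cloud Translation API."""
    client = translate.Client()
    result = client.translate(text, target_language="en")
    return result["translatedText"]

Speech transcription, image labeling and entity extraction follow the same pattern: a client object, one method call and a structured response, with the deep learning models themselves entirely hidden behind the API.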
With all of this content, how do libraries secure the vast archives they are assembling? Many of the research libraries I’ve interacted with still use primitive IP whitelist filters and VPNs for remote access. Few have implemented any kind of true behavioral monitoring of their systems. Most have no idea that a faculty member just paid a room full of undergraduates to bulk download tens of millions of documents from a licensed resource in strict violation of the university’s legal agreement and then redistributed that content to collaborators at multiple institutions, until they receive an irate phone call from the vendor notifying of forthcoming legal action. This is another area where the cloud can help, offering capabilities like sophisticated identity management, behavioral monitoring and transparent 2-factor authentication.
To date few research libraries or web archives have been priority targets for nation state attackers, largely because the content they hold tends not to be as economically or politically valuable compared with other targets. However, the dearth of major documented attacks is likely due to the relative obscurity of web archives today, in which much of the general public is still unaware that there are sites that preserve past copies of web pages. If a major browser like Chrome were to add transparent integration with major web archives, bringing the notion of “undeleting” embarrassing content into the mainstream, it is likely that cyberattacks against those archives would ramp up accordingly. Today a repressive government can simply order removed any web content that poses an embarrassment or order that a news outlet make changes to an article to change its stance towards a topic or political figure. Imagine if as you browsed the web your browser alerted you that the page you’re currently reading looked much different a few hours ago and that several paragraphs about a corruption scandal involving the finance minister of your country recently vanished. Or if you received an alert that a number of other pages on the site on the same topic have disappeared in the past week, all while letting you view the deleted content. In such a world nation states are much more likely to view web archives as a significant target. Once again, this is where the commercial cloud can help with its vastly superior security posture and investments.
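The “undelete” scenario is easy enough to prototype today. The hedged sketch below asks the Internet Archive's public availability endpoint for the most recent snapshot of a page and diffs it against the live version; URL handling and HTML cleanup are deliberately simplified:

import difflib
import requests

def compare_with_archive(url):
    """Diff the live version of a page against its most recent Wayback Machine snapshot."""
    meta = requests.get("https://archive.org/wayback/available", params={"url": url}, timeout=10).json()
    snapshot = meta.get("archived_snapshots", {}).get("closest")
    if not snapshot:
        return None  # no archived copy to compare against
    archived_html = requests.get(snapshot["url"], timeout=10).text
    live_html = requests.get(url, timeout=10).text
    # Unified diff of raw HTML; a real tool would extract and compare the visible text instead.
    return "\n".join(difflib.unified_diff(archived_html.splitlines(), live_html.splitlines(), lineterm=""))

A browser extension built on this pattern would be straightforward; making the archives it depends on robust against determined attackers is the hard part, which is the security argument made above.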
Putting this all together, today’s commercial cloud has much to teach the research library and web archiving communities, from how to build and secure robust global-scale infrastructure to how to make all of their content searchable to how to bring innovative technologies like deep learning to bear to move beyond the half-century-old keyword search. In the end, if research libraries and web archives want to maintain their relevance in the coming years, they must reinvent themselves for the digital era, and to do that they should take a closer look at the commercial cloud that has reinvented the business world.
|
d7e6507fffb2267cd4ba658ba60705b3 | https://www.forbes.com/sites/kalevleetaru/2017/05/01/how-facebook-secretly-turned-us-all-into-digital-lab-rats/ | How Facebook Secretly Turned Us All Into Digital Lab Rats | How Facebook Secretly Turned Us All Into Digital Lab Rats
Silhouettes against the Facebook logo. (Chris Ratcliffe/Bloomberg)
In an eerie echo of Facebook's widely criticized 2014 study that manipulated users' emotions and earned a rare Editorial Expression of Concern from PNAS, yesterday The Australian broke the story that Facebook's Australian office had conducted secret emotional research on more than 6.4 million Australian youth, including 1.9 million high schoolers as young as 14 years old to estimate when those children were at their most vulnerable, experiencing feelings of being “worthless” or a “failure” as part of research conducted for marketers.
Three years after the global outcry over the company’s treatment of its users as glorified lab rats to be experimented on at will and after reassurances that the company was changing how it handled sensitive research, the company appears to be back at it again, but this time extending its reach to its most vulnerable populations, data mining the activities of 1.9 million “high schoolers” with an average age of 16 and as young as 14, 1.5 million “tertiary students” aged around 21 years old and 3 million “young [professionals] … in the workforce,” as reported by The Australian.
The company provided a response to the paper that all data used in the research “was aggregated and presented consistent with applicable privacy and legal protections, including the removal of any personally identifiable information.” Yet despite these reassurances that the data was completely deidentified and aggregated to a resolution that protected user privacy, the company noted in its confidential research report that it was “not publicly available” and in particular “shareable under non-disclosure agreement only.” When I asked Facebook for comment on why, if the resulting findings were based on completely deidentified and aggregated data that presented no privacy implications or risks of any kind, were the findings considered so sensitive that they required a non-disclosure agreement to view, the company declined to comment, pointing to its public statement and saying that short post was its “official response to this matter.”
As I wrote last June, the era of big data has brought with it a profound shift in research ethics. Gone are the days when university Institutional Review Boards (IRBs) brought together informed experts to carefully evaluate research proposals for the balance of intellectual rewards versus risk of harm to participants. Today proposals that involve large social sciences “big data” datasets, especially social media datasets, are routinely exempted or simply not submitted to IRBs at such a rate that entire disciplines like computer science have historically had little contact with IRBs and the major journals of their fields explicitly do not require or ask about IRB approval to avoid enforcing “irrelevant” and “unfair” restrictions on research by requiring ethical review of social media data mining.
At the time of Facebook's 2014 study, I noted that the negative publicity surrounding its study was not likely to curb the company’s conduct of future such studies, but rather that those studies would simply happen in secret and potentially become ever more intrusive, as the focus became solely on the commercial goals of monetizing its users without the limiting constraints of considering the optics of how the research and its findings might be received by the public and broader research community.
Indeed, despite the outcry over Facebook’s 2014 study, including the editor of the PNAS journal raising concerns over the ethics of that research, the same editor told me last year that the journal would not rule out publishing the same study if it was submitted today, while the academic institution involved with the study had made no substantive changes that would have prevented the research from being conducted today. For its part, while claiming it had greatly strengthened its ethical review process for research in the study’s aftermath, Facebook has steadfastly refused to provide any real detail into what that process looks like and has refused to make available the protocols, documentation and guidelines that govern its review process.
In its statement to The Australian yesterday, Facebook took pains to note its research review process and emphasized that the Australian study had not been submitted for review, suggesting it was what amounted to unauthorized research that had not received official approval by the company. Yet, instead of profusely apologizing and condemning the actions of its employees for conducting unauthorized research, the company said only that it would “undertake disciplinary and other processes as appropriate,” but issued an official statement to the media that lacked this language and referred to the incident merely as a minor “oversight.”
In short, the company emphasized that it has a strong ethical review process that determines what constitutes allowable research and must issue approval for any internal company research of sensitive nature conducted on its users and that in this case two employees went ahead and used their privileged access to search for the most vulnerable emotional moments of young children without ever submitting their research for review, and the company’s response is that this is not a major breach of research protocol that requires an immediate suspension and review of sensitive research across the company, but rather just a minor “oversight” not to be worried about.
Perhaps most remarkable here is that two Facebook employees could conduct research of such a sensitive nature on an explicitly selected target population of young children (which traditional university IRB boards consider a protected class that requires significant additional ethical considerations) without the company apparently ever being aware of that research. At some companies I’ve spoken with, access to aggregated deidentified user data is tightly controlled, with individuals being granted access to analytic tools only for the specific duration of their approved project, with access immediately terminated on the conclusion of the project and with a team of security personnel continually auditing the access logs to ensure all queries comply with currently approved projects, in recognition that even aggregated deidentified data can, at certain resolutions, offer incredibly sensitive insights.
In Facebook’s case this raises the question of how their ethical review board enforces compliance if two employees could just access the company’s internal data analysis tools and conduct research of this scale and sensitivity without ever submitting their proposal for review. Facebook did not respond to a request for comment on this, nor would it comment on whether it was aware of this happening in the past. It also declined to comment on whether it would finally release any of the documentation on its review process.
Most concerningly, the company also declined to comment on whether it would be conducting a company-wide inventory of access to its analytic tools to determine if there were any other similar research projects that have been performed or are currently being performed that have not received ethical review. Such an inventory would seem to be a natural automatic reaction if the company viewed this breach of ethical review as a major issue and its lack of public commitment to perform such a review reflects the company’s characterization of this as just a small “oversight” of little concern.
While the company emphasized that it has not yet used this emotional data as an advertising selector that would allow companies to target their ads to its users’ emotional states, including selling ads to young children experiencing vulnerable feelings of being “worthless,” when asked if the company would state for the record that it would not sell such ads in the future, the company declined to comment. Indeed, its public statement is careful to use the phrasing “does not offer tools” for emotional targeting instead of “does not and will never.” It is noteworthy in itself that, despite pushing back so strongly on the notion that it has to date ever used emotions as a selector, the company would not simply go on the record that it considers emotional behavior, especially the emotions of children, sensitive enough to commit to never selling ads based on those indicators.
The company’s own advertising policies state that “Ads targeted to minors must not promote products, services, or content that ... exploit ... or exert undue pressure on the age groups targeted.” When asked whether conducting research to determine when a young child is at their most emotionally vulnerable and especially “moments when young people need a confidence boost” might conflict with this policy given that explicitly targeting someone who has been flagged as being in an especially vulnerable and suggestable state would by its very nature place them under undue pressure, the company declined to comment.
Putting this all together, we see first that the backlash against Facebook’s 2014 emotions study had little impact on the company’s conduct of future studies of its users’ intimate emotions and in fact, over the succeeding three years, the company has progressed to studying the most sensitive class of emotions on its most vulnerable users. The only change seems to be that the company conducts this research in secret now, making the results available to commercial partners under non-disclosure agreements, while the public is none the wiser. Second, despite claiming it had addressed the issues raised by its 2014 study by strengthening its ethical review process, it is clear that such research can still continue, simply by not requesting approval and, most importantly, that the company apparently lacks technical authorization and access measures to prevent unauthorized research from being conducted. Third, when alerted to the fact that two employees had conducted unauthorized research of a highly sensitive nature on a highly vulnerable population, the company’s reaction has been strikingly nonchalant, in one statement saying disciplinary measures might be taken, while in another labeling it a simple “oversight.” The company’s lack of a full-throated apology and condemnation of the unauthorized research and the fact that it declined to publicly state that it would conduct a company-wide audit of all recent and ongoing research to ensure that employees were seeking the necessary ethical approval does little to reassure its 2 billion users that it views them as anything other than glorified digital laboratory rats to be experimented on at will. Welcome to the dark reality of social media and the price you pay for your “free” Facebook account.
|
eb9550489bdc102c6ebf82bdb3f8ba37 | https://www.forbes.com/sites/kalevleetaru/2017/06/17/when-facebook-censors-journalists/ | When Facebook Censors Journalists | When Facebook Censors Journalists
Facebook's 'Like' icon. (Chris Ratcliffe/Bloomberg)
In a world in which 62% of American adults use social media as a news source and where nearly half of American adults access news via Facebook in particular, social media is increasingly becoming a critical gatekeeper in our access to the world’s journalism. Yet, the growing use of social media as a primary conduit for journalism is occurring alongside ever more active moderation and censorship by social media companies of the content shared through their platforms. As this moderation increasingly affects journalistic use of social media, what might the future hold for the fourth estate?
Last month The Guardian reported that after Pulitzer prize-winning Maltese journalist Matthew Caruana Galizia, part of the Panama Papers team, wrote a set of four posts documenting what he claimed was evidence of wrongdoing by the Maltese Prime Minister, drawn from the Panama Papers dataset, the posts were deleted and his Facebook account was partially suspended for violations of Facebook’s Community Standards. Moreover, this came just before the country’s election, at a moment when allegations regarding the government were swirling and when the material published by Mr. Galizia would likely have been of great public interest to voters.
In a reflection of just how important social media has become as a news source for younger generations, The Guardian quoted Mr. Galizia as saying “I decided to start posting [on] Facebook because I was realising that very little of the information that was being published by newspapers was reaching people in their late teens and early twenties (university students included).”
In a statement to The Guardian, Facebook acknowledged suspending his account and deleting his posts, and said that this was not a mistake by its overburdened reviewers but was done because his posts contained private details of specific individuals. After media coverage of its actions, Facebook stated that it was looking into the matter and working with the journalist to allow him to restore his posts.
The company did not respond to a request for comment on its actions in this case, nor did it respond to a request for it to state for the record that it did not receive any pressure from the government of Malta to suppress unfavorable information or that if it had, that such pressure did not factor into its actions here. Even if in this case the government played no role in the removal of the posts, it is not hard to imagine a case where government actors might wish to have negative information or allegations of wrongdoing suppressed and might attempt to utilize various legal or political measures to encourage Facebook to do so.
What makes Mr. Galizia’s case so concerning is that it not only occurs against a backdrop of other censorship of journalists and public figures, such as Facebook’s removal of a famous war photograph and its deletion of a post by a sitting head of state reposting the image, but that in this particular case the censorship was of what would otherwise in a different medium constitute investigative journalism directly pertaining to an imminent election and where the source material and journalist in question had already earned a Pulitzer prize for precisely such journalism.
This raises the question of what happens as Facebook increasingly mediates our access to journalism, especially its attempts to convince journalists to publish exclusively through its platform. As private companies based in the United States, social media platforms like Facebook and Twitter are not legally “public squares” nor are they bound by First Amendment protections and thus are entirely free to delete any content and suspend any users they wish, with no recourse or explanation.
In essence, Facebook’s army of moderators are in a position where they must second guess professional journalists operating under internationally accepted professional standards, often by making a decision in just a matter of seconds and without a fuller understanding of the context of the material. Internationally renowned news outlets like the New York Times and Washington Post have won countless awards over the years for unveiling government misconduct through the publication of confidential and classified information. As access to such journalism is increasingly moderated through Facebook, does this mean that the next Pentagon Papers bombshell might be banned by Facebook for violating its standards or removed at government request? If Facebook becomes the primary mechanism through which news is accessed, then it has the power to determine what is truth and what disappears down the memory hole.
Governments have long placed pressure on news outlets to suppress or water down stories that paint them in an unfavorable light, but even when one outlet agrees to bury a story, another often gets wind of it and runs it, ensuring that important stories see the light of day. When there is only one platform and one set of editors that determine the stories that the entire planet is allowed to see, that is a troubling situation.
Indeed, journalists themselves have frequently served as a check on Facebook’s power of censorship. Time and again, Facebook has deleted a post or suspended a user who tries in vain to get their post or account restored for days or weeks to no avail, only to have the post/account instantly restored the moment a major news outlet contacts the company for comment. If journalism itself was subject to the same power of censorship and Facebook could simply delete, prohibit or deemphasize posts about its censorship activities, it could very rapidly eliminate one of the few avenues of redress for its actions.
This raises the question of whether Facebook should adopt special policies regarding accredited journalists acting in their professional capacity, recognized news outlets and politicians, along with other special tiers for recognized IGOs/NGOs, etc. Different user classes would bring with them special moderation policies. An official governmental post by a head of state might be given a special exemption that it will not be deleted under any circumstances, since it reflects a sovereign government communication. Posts by journalists and news outlets might be subject to international journalism standards or similarly exempted except in extreme circumstances or legal action.
Should professional journalism be treated differently by Facebook and afforded greater flexibility than ordinary users of the platform by virtue of its role as the fourth estate acting in the public interest? Doing so would greatly reduce the risk that Facebook’s moderators could be seen as selectively suppressing news stories of great national and international interest. On the other hand, it would transform Facebook from a democracy of equals to a feudal system of elites broadcasting to the plebes.
As always, Facebook’s response to The Guardian about its removal of Mr. Galizia’s posts was that it was looking into the matter in more detail after the news media drew attention to its actions and that if it made errors it would correct them after the fact. Missing from its statement was any mention of reaching out more broadly to the journalism community to rethink once again how it interacts with journalistic use of its platform. In keeping with the Silicon Valley mindset that the Valley knows best, Facebook has steadfastly ignored the lessons and insight of the outside world, from having its content moderators use their own personal accounts that risked exposing their real identities to terrorist sympathizers to building a news feed system upon a dataset that skewed it heavily to the West.
Yet, nearly a decade and a half after its founding, Facebook’s go-it-alone mentality seems to be softening a bit as the company is “starting something new” – actually listening to its users. This past Thursday it announced its new “hard questions” initiative in which it appears to be willing to actually hear the thoughts and perspectives of its global community of two billion users. The real question is whether it actually listens or whether it goes back to business as usual.
|
911f391838d5eafbb059683a17185af7 | https://www.forbes.com/sites/kalevleetaru/2018/03/19/the-problem-isnt-cambridge-analytica-its-facebook/ | The Problem Isn't Cambridge Analytica: It's Facebook | The Problem Isn't Cambridge Analytica: It's Facebook
The Facebook logo. (Jaap Arriens/NurPhoto via Getty Images)
Cambridge Analytica garnered headlines across the world on Friday as Facebook formally suspended the company from its platform over allegations that it had improperly received and retained data on tens of millions of Facebook users from an academic researcher that had originally obtained the data legally and properly in accordance with previous Facebook developer guidelines. However, whether or not the allegations are true (the company has denied them), the singular focus on Cambridge Analytica makes for a simple meme-worthy media narrative, but the reality is that what the company stands accused of by Facebook is in fact what academic researchers, commercial enterprises, governments and even the social media companies themselves do every day with the data entrusted to them by a quarter of the earth’s population.
Perhaps the most remarkable takeaway from coverage of the 2016 election is just how starkly changed the reaction of the public and media has been to political use of data-driven election targeting since the 2008 and 2012 Obama campaigns. When the Obama campaign pushed the boundaries of precision voter targeting, pioneering techniques like peering into the privacy of Americans’ living rooms through their DVRs to see what each individual voter was actually watching on their televisions, the press and public cheered, hailing it as a long overdue modernization of the campaigning process and holding up the campaign’s data scientists as miniature heroes showcasing what could be done with data today. In the lead-up to the 2016 election, the press and public derided the Trump campaign as apparently being data-devoid, while hailing Clinton’s campaign as picking up the data-first mantle from the Obama campaign and pushing it even further.
In short, data was good, and pushing the privacy and ethical boundaries of data to monitor and manipulate voters was seen as a positive step, modernizing campaigning in line with the commercial advertising world. Even 1984-ish techniques like mining private DVR viewing habits were lauded as brilliant innovations to advance democracy, not frightening autocratic steps towards a surveillance state.
A year later in the midst of a stunning election upset and investigations of Russian influence, the tenor towards data-driven politicking has turned upside down, with a public outcry over the perceived power of data-driven campaigning to allegedly overwhelm organic public will. Though, before attributing absolute campaigning power to data, it is important to remember that Cruz’s failed presidential bid relied on the same firm and data that is credited with Trump’s win, calling into question the level of impact it actually had.
A central theme of the rhetoric and coverage of Cambridge Analytica is that it somehow violated accepted societal norms over the use of Facebook data, with politicians, regulators and major news outlets referring to it in the cybersecurity parlance of a data “breach.” In fact, this could not be further from the truth in our modern “surveillance economy.”
Facebook and other social media data has, almost from its inception, been a major data source for academic research, with little to no ethical concern for its utilization. Perhaps most famously, in 2008 Harvard and UCLA researchers released a massive dataset compiled from the Facebook accounts of an entire cohort of college students, with the full ethical and legal approval of Facebook and Harvard, full approval of Harvard’s IRB and funding from the National Science Foundation. When asked in 2016 whether its views had evolved since the 2008 release, the university never responded, despite multiple requests to different offices.
In 2014 academic researchers at Cornell and Facebook published research in which they had manipulated the emotions of three quarters of a million users and which was published in one of the top scientific journals, PNAS. As with the Harvard study, the research had been fully approved by Facebook and Cornell, with ethical review by Cornell’s IRB.
Like the Harvard study 6 years before, the 2014 emotions study generated an immense public, press and academic backlash, with Facebook stating it had redesigned its ethical review process and the journal issuing a statement of editorial concern. Yet, when I followed up in 2016 about how views had changed towards that kind of research at Facebook, Cornell and PNAS, Facebook declined to release any substantive information about its ethics panel beyond a brief overview nor would it release the protocols it uses to review research or the guidelines it requires its researchers to adhere to. As I noted at the time, “In short, Facebook would not share any details beyond its previous 2014 statement on what changes it has actually made internally, while Cornell and PNAS both noted that they have made only minimal changes. Combined with Harvard’s failure to respond to multiple requests for comment, this suggests that even in the highest-profile cases involving the nation’s top universities with strong Common Rule-compliant IRB systems and at the most prestigious journals in academia, there are few actual changes made in the aftermath of an ethical firestorm. Put another way, in spite of the outpouring of public criticism and claims that new ethical standards are needed, little actual change ever happens and after a brief period of conversation and consternation, things go back to the way they were.”
Indeed, three years later it appears little did change at Facebook, with the company's internal researchers becoming embroiled in yet another ethical firestorm over psychological profiling when it emerged that they had been conducting secret research on more than 6.4 million Australian youth, some as young as 14, to determine when they were at their most vulnerable, experiencing feelings of being “worthless” or a “failure,” in order to better target them for advertisers. When details of the secret research leaked to the press, the company's response was that the work had not gone through its ethical review process, but that this failure was merely a minor “oversight.”
As I noted at the time, “the company emphasized that it has a strong ethical review process that determines what constitutes allowable research and must issue approval for any internal company research of sensitive nature conducted on its users and that in this case two employees went ahead and used their privileged access to search for the most vulnerable emotional moments of young children without ever submitting their research for review, and the company’s response is that this is not a major breach of research protocol that requires an immediate suspension and review of sensitive research across the company, but rather just a minor 'oversight' not to be worried about.”
More remarkably, that two Facebook researchers could conduct research on such an extraordinarily sensitive topic on such a vulnerable population and tout that research to advertisers without ever having their work ethically reviewed and, according to the company, without its formal knowledge and approval, suggests Facebook has woefully inadequate central controls over the use of its users' data by its own researchers. Indeed, its reaction to the unauthorized research, labeling it a minor “oversight,” and especially its refusal to comment on whether it had identified other unauthorized research or whether it would be tightening its data access controls, is troubling at best.
In this light, Cambridge Analytica’s alleged use of Facebook data for voter targeting pales in comparison with the ways in which Facebook itself exploits its private user data for its own purposes and those of the researchers that collaborate with it.
It is also not the first time that the company has found itself under scrutiny for commercial targeting based on its data. In 2016 Facebook and Twitter terminated social media monitoring company Geofeedia’s access to their data streams after the ACLU published a report documenting how it was being used to surveil lawful protest activity in the United States. Facebook predictably denied knowledge of how Geofeedia was using its data, yet when the ACLU discovered an email referencing a confidential agreement between Geofeedia and Facebook to provide the company with additional Facebook data, Facebook declined to comment further. As I noted at the time, “it is difficult to imagine that neither Facebook nor Twitter had any idea of any kind that one of their licensees was using their data to provide surveillance capabilities to law enforcement. Geofeedia is far from a James Bond cloak-and-dagger defense contractor operating in the shadows – it is a widely known commercial company widely and openly touting its capabilities through numerous case studies and receiving considerable media coverage that specifically discussed its law enforcement clients, while the FBI openly issued an RFP for its services as part of its larger interest in social monitoring. In short, even the most basic of web searches easily turned up that law enforcement was a client of GeoFeedia’s and this thus raises the question of how Twitter and Facebook never noticed that a high-profile subscriber of their services, especially one that allegedly was signing additional special licensing agreements with Facebook, would never have appeared on their radars.”
Of course, academia has been a massive consumer of social media data as well, employing its wealth of data for all sorts of extraordinarily sensitive and ethically questionable research.
Just six months ago two Stanford University researchers set off an avalanche of ethical concern with a paper that used an AI model to estimate a person's sexual orientation based purely on a photograph of their face, raising concerns of imprisonment and even execution for LGBT individuals living in repressive countries. The paper was published in an APA journal and received full ethical approval from the authors' IRB, reinforcing the common theme of top journals and institutions freely accepting sensitive research on social data. Demonstrating the stark divide between the written ethical policies of academia and what actually happens in practice with social data research, Stanford had previously provided comment to me documenting one of the most rigorous ethical review processes at an American university, including specialized privacy reviews, a typical requirement of a signed legal agreement with data vendors and an explicit note that bulk downloading data from a website in violation of its terms of service was absolutely prohibited. Yet, when asked why those policies did not appear to have been enforced for this study, the university declined to offer comment, saying that it was not permitted to do so.
Much as Cambridge Analytica used data gathered from a Facebook personality quiz, the Stanford research also incorporated data from Facebook, gathered through a personality quiz called myPersonality, described as “a popular Facebook application that allowed users to take real psychometric tests, and allowed us to record (with consent!) their psychological and Facebook profiles. Currently, our database contains more than 6,000,000 test results, together with more than 4,000,000 individual Facebook profiles.” As of October 2015, more than 7.5 million people had taken the questionnaire and the resulting data had been used by more than 200 researchers, including at least one Facebook data scientist. That Facebook's own researchers have used such external personality quiz datasets is intriguing in light of its concerns with Cambridge Analytica's activities.
Despite declining to comment on the questions posed regarding the study's ethical considerations, Stanford did provide a brief statement from one of the authors regarding its use of more than 3 million images from Facebook: the images are “secondary data that we have obtained from myPersonality app. The users gave the app permission to use their data.” In short, the users clicked a button authorizing their data to be used for research, so they knew what they were getting into, and that single personality quiz they took between 2007 and 2012 and have long since forgotten entitles researchers a decade later to keep using their information for a 2017 publication.
One of the authors was also the lead author of a 2013 PNAS study that demonstrated that Facebook data can be used “to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.”
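To make concrete what “predicting sensitive attributes” from platform data involves mechanically, here is a minimal, hypothetical sketch: a user-by-like binary matrix fed to a regularized logistic regression. This illustrates only the general technique, not the 2013 study's actual pipeline; the data is synthetic and every value in it is made up.

```python
# Illustrative sketch only: synthetic data, not the 2013 study's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_likes = 2_000, 500

# Binary matrix: X[i, j] = 1 if user i "liked" page j (randomly generated here).
X = rng.binomial(1, 0.05, size=(n_users, n_likes)).astype(float)

# Synthetic "ground truth" trait loosely driven by a handful of likes,
# standing in for a self-reported attribute such as political affiliation.
signal = X[:, :20].sum(axis=1)
y = (signal + rng.normal(scale=1.0, size=n_users) > signal.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A plain regularized linear classifier is enough to recover the planted signal.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("held-out AUC:", round(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]), 3))
```

The unsettling point is how little machinery is required: given enough labeled examples, an off-the-shelf classifier can turn seemingly innocuous behavioral traces into estimates of traits the users never disclosed.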
Not a day goes by that I don’t see a request on an academic mailing list from a university researcher looking to extract a large volume of material from Facebook or announcing a new tool to make bulk extraction of user information from the platform easier. Any change the platform makes to restrict bulk access to its data is met with cries of foul. On the sidelines of any academic social sciences conference can be found any number of academics offering to ship bulk extracted Facebook data to fellow researchers.
Moreover, even when social media platforms become aware of explicit violations of their terms of service by researchers, it is unclear what consequences, if any, those researchers face. When asked about a study last year that estimated the political leanings of Twitter users in direct violation of the company’s explicit prohibition on political profiling, a company spokesperson confirmed that the research was a clear violation of its policy, that academic research was not in any way exempt from its policy and that the researchers had not received any authorization or permission from Twitter to conduct their research. Yet, despite confirming that the research was a direct violation of its policy, when asked whether it would take action against the researchers or any other researchers that violated its policies, the company did not respond.
For its part, Facebook modified its terms of service in May 2012 to enshrine its right to conduct research on its users, noting that “companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word 'research' or not.” Yet, when asked whether the company would ever consider offering an option for users to opt-out of having their personal data used by Facebook for research, the company never responded.
Putting this all together, casting Cambridge Analytica as a villain gone rogue with “breached” data from Facebook offers a simple cathartic response to a world the press and public are struggling to understand, but it could not be further from the truth. Everything the company stands accused of (which again it strenuously denies) is simply what academics at universities all across the world do every day with the full knowledge and approval of their ethical oversight boards. Granting agencies like NSF routinely fund the research, IRBs approve it without question (typically treating it as “exempt”) and the top journals publish it. Indeed, Facebook itself has its own active research program which, as the Australian press leak demonstrates, is conducted under strict secrecy and, according to the company itself, may occur without its knowledge or approval. Even when violations are brought to their attention, the companies will at best merely confirm the violations, either dismissing them as minor “oversights” or, in the case of external researchers, apparently taking no action at all.

It is little wonder then that social media data has found such a place in modern research and may have played such a pivotal role in the 2016 election, or, as the Cruz campaign discovered, perhaps no role at all. All in all, it is remarkable how in the space of just eight years data has gone from hero to villain in the world of campaigning, raising the question of whether this will be a tipping point against social media or just another brief bump in the road that quickly washes away. In the end, instead of holding Cambridge Analytica up as a villain, if we as a society have concerns about how our Facebook personas can be used to monitor and potentially manipulate us, we should take a closer look at the platform that makes it all possible: Facebook.
|
2995f7ec0e18ad74c8be4aff8a3d485e | https://www.forbes.com/sites/kalevleetaru/2018/04/05/the-data-brokers-so-powerful-even-facebook-bought-their-data-but-they-got-me-wildly-wrong/ | The Data Brokers So Powerful Even Facebook Bought Their Data - But They Got Me Wildly Wrong | The Data Brokers So Powerful Even Facebook Bought Their Data - But They Got Me Wildly Wrong
Mark Zuckerberg speaks to press and advertising partners in 2007. Two weeks after announcing its new... [+] marketing program, the company faced complaints from users surprised to find information about their online purchases added to their personal news feeds. (AP Photo/Craig Ruttle, file)
It all started with an AARP mailer suggesting that as a 65-year-old, I could really benefit from an AARP membership. Then two more mailers arrived, one a week, from different AARP mailing centers. Obviously, I had made my way onto a mailing list somewhere. This got me wondering how someone in his mid-30s had made it onto a 65-year-old's mailing list and just what data the myriad data brokers in the shadows hold on us. More importantly, as society becomes ever more data-dependent, was this 30-year age error just an isolated mistake, or are these datasets (which are so dominant that even Facebook licensed them until recently) just plain wrong? The answer is that, at least for me, the data is so laughably wrong it is on the one hand humorous, but on the other hand frightening for a future in which the data bought and sold about us moves beyond targeting mailers and ads into actually influencing the decisions that companies and governments make about us, from the loans we are offered to court sentencing.
My journey into the world of data brokers began with that series of AARP mailers last April. Curious as to where AARP had gotten my name, I reached out to the association for more information. A spokesperson quickly apologized for the mistaken mailers, saying the association had removed my name and that it had purchased a mailing list of over-50-year-olds from DSA Direct, LLC that contained my name. When I reached out to DSA Direct to ask where it had gotten my name and whether it had comment on the quality of its data, the company refused to provide an on-the-record response.
Somehow DSA Direct, LLC, a small company in Denville, New Jersey, had built a profile on me, and I had no right to inspect or correct the information it held on me nor any ability to force it to tell me more about how it was commercializing my information.
For all the outcry about Facebook the last few weeks, at least it actually allows you to see part of the data it holds on you. In the world of data brokers, you have no idea who has bought, acquired or harvested information about you, what they do with it, who they provide it to, whether it is right or wrong or how much money is being made on your digital identity. Nor do you have the right to demand that they delete their profile on you.
Until I got the AARP’s mailers, I had never heard of DSA Direct, LLC, yet once I knew they were making money selling information about me, I had no legal right to force them to provide me everything they held on me, to delete all of that information or to correct any wrong information.
In the course of researching where DSA Direct may have acquired its information, I came upon Facebook's Partner Categories program, under which, as far back as 2013, it began licensing information from companies like Acxiom, Epsilon and Oracle Data Cloud to allow precision advertising targeting of its users based on the activities they perform offline or online outside of its walled garden. In essence, Facebook recognized that many of the most useful data points on our daily lives come not from the utopian image of perfection we project on Facebook, but from the actual mundane reality of our daily lives, from what we purchase at the grocery store to where we live to our financial status. These brokers know the real us rather than the pretend us, and asking them to stop monitoring us and delete everything they hold about us can be hard.
Facebook historically offered links to the websites of each data vendor it licensed information from, allowing users to request that the vendor send them a list of all data fields it held on them, so I took them up on those offers.
TransUnion refused to comment on what data it provided to Facebook. A Facebook spokesperson said that in addition to its financial products, TransUnion also offers a range of marketing products that Facebook uses.
Epsilon responded back with a letter saying that it did not hold any information on me in its marketing databases. The company clarified that it estimates that less than 2% of reports are blank, but it did not answer further questions regarding why my particular report was blank other than to offer that in general “if a household has limited participation in loyalty programs, for example, we would have limited consumer information.” It is also unclear whether the reports to consumers really reflect all of the information Epsilon holds on them. Gizmodo reported last year about a person whose Epsilon profile was used to send them a targeted medical mailer. Yet, when they similarly requested their Epsilon profile, there was no information suggestive of that medical condition at all. When I asked Epsilon how that situation might have arisen and if it was possible they were not sharing all of the information they held, a spokesperson responded only that “Epsilon’s databases have consumer information related to household purchases, demographics and interests, and self-reported information about consumers and their households” and that it was possible that “an unrelated piece of information from our database suggested the individual may be interested.”
Requesting a copy of the offline records Oracle Data Cloud held on me required my sending them a “copy of a state issued photo ID or passport.” In other words, the data Oracle holds is sensitive enough in its view that users have to prove who they are to request it, while it itself doesn’t even have to let those users know it has their data.
When asked how to send Oracle the required scanned copy of my driver’s license or passport, the company responded that I should just email it to them in the clear with no encryption or other protection. It offered that I could redact the date of birth and ID number, but that it needed the additional fields to “verify your identity.” It did offer that if I was not comfortable emailing my information in the clear, I could alternatively mail the scanned ID via USPS. When I noted that it seemed absurd for any company in 2017 to ask users to email a scanned copy of their driver’s license or passport to an unknown email address in the clear with no encryption or other protections and that this suggested the company did not fully understand modern data security practices or fully appreciate the sensitivity of user data, the company did not respond to repeated requests for comment.
After mailing the required driver’s license scan to Oracle, I eventually received back an envelope containing a list of 108 categories the company believed I belonged to (it can take up to a month to receive the results). A careful line-by-line review showed that 85 of those categories (78%) bore absolutely no resemblance to my life at any point. For example, I have never stepped foot in a Forever 21 or a dressbarn, a Sephora or an Alex and Ani, nor have I ever purchased from them online. I don’t buy children’s lunchbox meals, shop at lululemon athletica or purchase feminine care products. I’ve never bought imported beer in my life nor do I shop at Victoria’s Secret. Yet, according to Oracle, these were all categories for which I would have high relevance to marketers.
When I noted all of this to Oracle and requested comment on why it believed my profile might be so woefully inaccurate, the company did not respond.
Oracle also tracks users online through its BlueKai Registry offering. Wading through the detailed profile it had compiled of my online life yielded accuracy little better than its offline view. According to Oracle, I’m a “cosmopolitan professional single,” a “successful single parent” and a “golden grandparent” all at once, as well as being over the age of 65. I have an obsession with women’s clothing, women’s cosmetics and women’s hair products, I buy a huge amount of baby products, especially diapers and baby food, I eat seafood weekly and I’m simultaneously heavily purchasing retirement services and deeply into the young professional parenting life. In short, I’m a world of contradictions that bear not even the most remote resemblance to reality.
In reality, while they got the single professional living in a city part right, I am not a parent nor a grandparent nor over the age of 65, I've never purchased women's clothing, cosmetics or hair products, I've never bought baby products, I'm allergic to seafood and I'm about three decades from retiring.
Unlike with its offline records, on which it did not comment, when asked about the accuracy of its BlueKai Registry and why its records were so wildly off for me, the company responded that “the BlueKai Registry pulls in data from the BlueKai Marketplace based on browsing activity, interests, demographics and purchase information for that browser and computer” and that “individuals in communications/research fields have additional interests tied to their cookie based on the fact that a portion of the data in the BlueKai Marketplace is based on online-inferred interests and individuals in these roles tend to do heavy online searching on various topics which then would qualify that person for a wide assortment of segments.”
Suddenly it all made sense and yet none of it made sense. My browsing history would seem to support few of these categorical conclusions. At the same time, it is unclear what sites result in what labels. Perhaps a visit to a financial news website for the latest driverless car industry updates flagged me as a retirement-minded senior, while checking on Amazon’s Echo Look tagged me as interested in women’s fashion, cosmetics and hair products. It is simply impossible to know.
Yet, what is perhaps most intriguing about Oracle's response is the notion that it is not unexpected for those in “communications/research fields” to have profiles that are wildly unrepresentative of them due to their propensity to search across a wide range of topics. The implicit inverse of that statement is that the overwhelming majority of Internet users search for and consume such a narrow slice of information that it reflects their interests and behavior well enough to precisely target them for advertising.
Is it really true that as a human society, we are each so trapped in our tiny filter bubbles of interests that we never reach out to see what the rest of the world is talking about and interested in? The entire industry of web tracking is premised on this simple assumption, and geographically speaking there is certainly strong evidence to support it. At the same time, it offers a sad commentary that in a world of information riches, where nearly every imaginable piece of information lies ready at our fingertips, we so rarely venture beyond a narrow set of interests that remote companies can generate precision psychological profiles on us merely from the information we seek out and consume.
Notably, the company offers individuals the ability to opt out of receiving future mailings based on their data and to correct entries they believe to be in error, but it does not offer users the right to be forgotten: to demand that the company delete every single piece of information it holds connected to that person and submit a signed legal document formally certifying the deletion. After all, why should we have any rights to control what is done with our own data?
Acxiom offers a website, aboutthedata.com, that allows individuals to view the information it holds on them. Ironically, the site offers in defense of data-driven profiling that “we have come to expect companies will make their interactions with us personal. We no longer want to receive mass marketing ... it’s intrusive and wastes our time. That’s why companies want to use data about you to personalize and shape your experiences with them.” Perhaps in light of the recent uproar over Facebook’s use of personal data, the company might wish to rethink this marketing motto, especially the assertion that hyper-targeted advertisements that know everything about us are less "intrusive" than generic untargeted ads.
With respect to where it gets its data, the company offers that “offline marketing data comes from publicly available information such as your name, address, birthdate and census data, information from surveys and questionnaires or product registrations/warranties, and information from other data providers.” As an example of the kind of targeting it makes possible, the company offers that its data allows advertisers to see you as “a 40 year old man living in an affluent suburb with two young children, you might be interested in buying a luxury automobile with a high safety rating. Based on your online shopping on their site for camping equipment, they might further deduce that you are more interested in an SUV rather than a sports car and that you may also be in the market for new auto or homeowners insurance.” In a glimpse of just how quickly our data is resold and commercialized, Acxiom notes that many companies resell your data, using the example of “Susie orders a pair of tennis shoes for herself and her toddler daughter over the Internet and general information about her purchase is shared with partners of the company she bought her shoes from. The core data that is shared is: Susie is interested in tennis shoes. She has children present in her household. She purchases via the web. She looks at advertising via the web, and she lives in the northeast.” In other words, any interaction you have with any company in your daily life might result in your data being resold behind the scenes without your informed consent.
A company spokesperson repeated this, offering that “many consumers want to benefit from marketing that is more tuned to their needs and interests” rather than seeing generic advertisements that have not been custom tailored based on their activities. In light of the Facebook / Cambridge Analytica story, one intriguing data point provided by the company is that of individuals who visit its aboutthedata.com website, 37% edit their data to correct mistakes in it and just 6% request that their information not be used in future to send them targeted mailing campaigns.
Those numbers are quite fascinating in light of the broader discussion of privacy over the past month. Users who were informed and motivated enough to register on a data broker website to access their profiles are a very narrow self-selected group of people and it is telling that apparently 94% of those users choose to continue to allow Acxiom to commercialize their information, rather than trying to take back control of their information. Perhaps we really do not care about privacy after all?
With respect to data sourcing, the company offered that its data “is derived from a variety of sources including publicly available data, self-reported data and data from commercial entities (private data sources where the consumer has received notice and has a choice about the sharing of data), each with different degrees of accuracy.” I asked the company, in light of the Cambridge Analytica story, whether it believes that its use of “private data sources where the consumer has received notice and has a choice about the sharing of data” truly constitutes informed consumer consent. In particular, I noted that Facebook’s defense centered on the fact that users had received notice that Facebook could provide their data to others and that they had a choice to not use Facebook. When asked whether its view on consent had changed in light of this, the company did not respond.
Like Oracle, Acxiom offers individuals the option to ask to be excluded from its marketing products, but notably does not offer individuals the right to demand that their data be completely deleted from its servers. I asked the company why it does not allow individuals to request that it delete all data it holds on them and add their name to a blacklist that prevents it from receiving information about them in future. Further, given that the company does not allow individuals to see where each data point it holds on them came from, there is no way for a person to on their own attempt to request that every one of the myriad source companies delete their data and thus I asked why Acxiom does not at the very least offer individuals the ability to submit a deletion request to every provider from which it receives data. Unsurprisingly, the company did not respond.
As with Oracle’s data, Acxiom’s profile on me was laughably inaccurate (83 out of 113 fields, 73%, were wrong) and in many cases the sources of information made no sense. According to the company I am married (false, but it said it got this from public records, retail activity and self-reporting) and I have three adults living in my apartment (again, false, but it claims I self-reported this at some point). I apparently purchased a $750,000 home in 2015 (I wish), have all sorts of credit cards and credit lines that were news to me, subscribe to services I didn’t even know existed, love wine (I don’t drink wine) and even own a dog I had no idea existed. It’s truly amazing how much you can learn about yourself that you never knew!
In a call with Acxiom’s Chief Privacy Officer and Global Executive for Privacy and Public Policy, the company emphasized that transparency is absolutely critical to them, yet when I asked where they acquire their data from, the company refused to provide any detail, arguing that it was a trade secret. They did note that they are strictly a compiler of data offered by other companies and do not produce any original data of their own. They further noted that some of their sources are themselves aggregators and thus data can be multiple degrees away from the original source. The company also took issue with the use of the phrase “profile” and suggested it prefers to call its data “portraits of persons active in the marketplace.”
It also emphasized that it made great efforts to ensure that all of its data was used ethically and that it had a team of professional ethicists help develop its data use guidelines that its employees used on a day to day basis to decide whether to approve any given use of its data. However, when pressed on whether it would be willing to provide a written copy of those guidelines, the company declined to do so.
The company has a considerable number of financial-related data points about consumers (in my case they were woefully inaccurate). When asked whether it applies a higher ethical standard for approving use of these fields, the company responded that it would approve any use it felt was legitimate. When asked whether it would approve a known predatory loan company using its financial indicators to target vulnerable populations, it said it would depend on the loan company, but declined to say it would outright prohibit such use.
In terms of scale, the company noted that it has records on 95% of the US population, attesting to the immense scope and reach of the data broker industry.
Returning to the question of why its records on me were so woefully inaccurate, the company continually emphasized how accurate its data is and that accuracy is front and center to its work. Yet, when I went down the list of its profile for me and pointed out all of the errors, the company acknowledged that in my case it seems to have badly failed. When asked where it might have gotten all of the wrong data points from, the company declined to comment, again citing trade secrets.
As a simple example, I asked why the company listed that I had purchased a $750,000 home in 2015. I noted that the entire 20-story 300+ unit apartment complex I live in had been purchased by a real estate investment firm in the month Acxiom believes I purchased a home, suggesting that it had confused the purchase of an entire 20-story building with me purchasing a home. When I noted that Zillow and other sites all correctly listed my building as an apartment complex and thus it should have been easy for Acxiom to recognize that the sale was of the building, not my unit, the company agreed that it should have done better. I noted that it called into question the quality of their data if they couldn’t recognize that a simultaneous transfer of ownership of 300+ units at the same address listed as an apartment complex likely did not indicate that 300+ people all coincidentally purchased homes on precisely the same day. The company agreed this was an obvious case it should not have made a mistake on, but again repeated that it believed its data was accurate overall and that my profile was an anomaly, though it was unable to offer any suggestions as to why my profile alone would be so mistaken.
The company did, however, emphasize that I was only seeing its marketing dataset and that its separate financial fraud dataset was vastly more refined and curated, though it did not provide any further detail on what made its fraud dataset more accurate or why it did not spend greater resources on refining its marketing data, nor was I able to evaluate whether its claims of greater accuracy were true.
While I could certainly be an anomaly, a Reuters reporter last month confirmed that his Acxiom profile was also highly inaccurate in many of the same ways, suggesting I am not the lone bad record in their massive archive.
Accurate or inaccurate, this raises the question of how companies make use of Acxiom’s vast data holdings. Acxiom said that typically companies request mailing address lists for individuals matching certain criteria and that these were provided fairly readily. On the other hand, email addresses were not typically sent to most customers – Acxiom would contract with a third-party mass email vendor to send emails on behalf of the customer rather than turn the email lists over to them.
This raises the question of just how Facebook was using Acxiom’s data. Acxiom noted that “sophisticated” customers are typically permitted to mass ingest the portions of Acxiom’s database they desire into their own servers. When asked whether Facebook specifically had been permitted to mass ingest its archives, Acxiom responded that “we do not disclose the details of our commercial agreements. Clients and partners may license our full suite of data products, which include models like ‘outdoor enthusiasts’ that are created through a process that includes refining our core data.” When asked whether Facebook had requested Acxiom’s complete holdings on all 95% of the US population it had records for, the company again said it would not comment on its licensing agreement with Facebook.
Given that all of the companies had refused to comment on precisely what data products Facebook was licensing from them and how it was consuming them, this left Facebook as the only source to understand how and why it was using all these data brokers. In particular, a major question raised by Facebook’s extensive use of third party data brokers was what value it saw in them. After all, a key part of Facebook’s value proposition to advertisers is its ability to leverage user behavior to target users with a precision simply not possible through any other platform. If Facebook’s advertising machine was merely based on data brokers, rather than its own data, this would seem to call all of those claims into question.
When asked about the history of its use of third-party data brokers, a Facebook spokesperson said last spring in a phone interview that the company originally began its targeted advertising program by relying strictly on its own data archives of what users did and talked about on its platform, known as Facebook Native Categories. The spokesperson further emphasized that only public activities are mined for targeted advertising – private messages are not examined.
Over time, Facebook’s advertisers made it clear that while they found Facebook’s own proprietary categories interesting, in practice they wanted to be able to use the same advertising selectors on Facebook that they use for all of their other advertising activities offline and on other websites. This meant that Facebook would need to license the marketing datasets of the major data brokers, which it did in 2012-2013.
For each Facebook user, the company computes a hash code of the person's phone number, email address and other major identifiers and transmits those to Acxiom, Oracle, Epsilon and the other data brokers it works with, requesting that they return all available marketing segments they offer for that user. By relying on unique identifiers like phone numbers and email addresses, Facebook doesn't have to worry about getting the wrong information for common names like “John Smith.”
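This is, in essence, hashed-identifier matching: both sides hash the same normalized contact fields and join on the hashes rather than on the raw values. The sketch below illustrates the general idea in Python; the normalization rules, hash choice, field names and example segments are assumptions for illustration, not Facebook's or any broker's actual implementation.

```python
import hashlib

def normalize_email(email: str) -> str:
    # Lowercase and strip whitespace so the same address always hashes identically.
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    # Keep digits only (a real system would canonicalize to E.164).
    return "".join(ch for ch in phone if ch.isdigit())

def hash_identifier(value: str) -> str:
    # SHA-256 of the normalized identifier; the raw value is never exchanged.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Platform side: hash each user's contact identifiers (hypothetical user record).
user = {"email": "Jane.Doe@example.com", "phone": "+1 (555) 010-2030"}
hashed_keys = {
    hash_identifier(normalize_email(user["email"])),
    hash_identifier(normalize_phone(user["phone"])),
}

# Broker side: a hypothetical lookup table keyed by the same hashes.
broker_segments = {
    hash_identifier(normalize_email("jane.doe@example.com")): ["outdoor_enthusiast", "new_parent"],
}

# Join on hashes and collect every segment the broker offers for this user.
matched = [seg for key in hashed_keys for seg in broker_segments.get(key, [])]
print(matched)  # ['outdoor_enthusiast', 'new_parent']
```

The appeal of this design is that neither side has to ship raw contact lists to the other, yet any record that matches on a normalized identifier can be enriched with whatever segments the broker holds.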
All of Facebook’s agreements with its data vendors are one-way in which the companies send Facebook all the information they hold on each user, but Facebook does not share any information back with them. The company emphasized that this meant that your Facebook activity stayed on Facebook and did not become something that the third-party data brokers could resell.
When downloading your data from Facebook, you will often see a list of advertisers that have your contact information. The company clarified that this means those advertisers acquired your contact information on their own, outside of Facebook, and provided that contact list to Facebook to request that you see their advertisements. It emphasized that at no time does it provide user information back to those advertisers.
Facebook noted that advertisers can see city-level information on where an ad is being both viewed and clicked on, allowing high-resolution geographic insight into where ads are resonating. When asked whether the ability to see where an ad is being viewed gives malicious advertisers essentially a backdoor into Facebook's data to create city-level maps of where certain demographic characteristics (including those computed from Facebook activity) are found, the company acknowledged this (indeed, it is a common use case I've heard about from a number of organizations), but noted that city-level aggregation is coarse enough that it does not expose any detail about individuals.
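To make that concern concrete, the sketch below shows how an advertiser could turn per-city view counts for a narrowly targeted ad into a rough geographic map of a sensitive trait. The reporting rows, city names and threshold are hypothetical, used only to illustrate the aggregation argument, not Facebook's actual reporting format.

```python
from collections import Counter

# Hypothetical per-impression reporting rows for an ad targeted at a single
# sensitive segment (city is the finest granularity exposed to the advertiser).
impressions = [
    {"city": "Springfield", "clicked": True},
    {"city": "Springfield", "clicked": False},
    {"city": "Shelbyville", "clicked": True},
    {"city": "Springfield", "clicked": True},
]

views_by_city = Counter(row["city"] for row in impressions)

# Because the ad only reaches users in the targeted segment, the view counts
# themselves sketch a city-level map of where that segment is concentrated.
MIN_REPORTABLE = 2  # aggregation threshold meant to protect individuals
segment_map = {city: n for city, n in views_by_city.items() if n >= MIN_REPORTABLE}
print(segment_map)  # {'Springfield': 3}
```

The aggregation threshold protects any one person, but it does nothing to stop an advertiser from mapping where a targeted characteristic clusters geographically, which is precisely the use case described above.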
When asked why Facebook purchases all of this data from third party data brokers, especially given the unknown provenance of where it all comes from and how inaccurate it was in my case, the company responded that it had no choice – that using third party data brokers is just how advertising is done today. That advertisers today expect to use the same set of selectors provided by the major data brokers across all of the platforms they advertise on and so to be competitive Facebook had to license those datasets.
When pressed on why its own marketing selectors, those that it computes from its users’ activity on Facebook, were not enough for advertisers given the immense accuracy and targeting power commonly attributed to them in the press, the company again emphasized that advertisers want uniformity in their selectors, that they had invested heavily over the years in finding the demographic and behavioral selectors from the major data brokers that worked best for them and they wanted to be able to use those same selectors on Facebook.
When asked why Facebook didn’t simply develop its own selectors based on its own data that offered the same demographic options (in essence reverse engineering them), instead of licensing third party data, the company noted that data brokers offered a wide range of indicators that were based on users’ offline lives that could not be proxied as accurately using strictly its own data that captured only a portion of their activities.
In light of the “miracle”-level power attributed to Facebook data in the Cambridge Analytica discussion (that behavioral data derived from Facebook activity is so powerful that it could sway a presidential election), it is certainly intriguing that so many of Facebook’s advertisers used behavioral profile data held by traditional data brokers, rather than Facebook’s own behavioral and interest indicators computed from actual user activity on their platform. If Facebook's selectors were so incredibly powerful, it certainly seems that most advertisers would have switched over to them, rather than sticking with the traditional data broker selectors.
I pressed the company in the 2017 interview about its views on data privacy and its users’ rights to understand what data Facebook was acquiring about them, how it was using all of that information and how advertisers were using that information to target them.
I asked the company whether it felt that purchasing all of this third-party data about its users crossed some sort of ethical or moral line. Selling ads based on what a user does on your website has become the accepted way of the web, but mass purchasing every available data point about what your users are doing everywhere else on the web and in the daily lives they live away from your platform, especially sensitive and intimate activities they take great care to ensure do not appear on Facebook, might generate unease in a portion of the company's user community.
The company's response? That everyone else does it. Facebook emphasized that all major advertising-supported platforms purchase the same exact data; it was simply the only one being transparent and upfront about it, letting users know explicitly that it was buying their data instead of keeping the purchases in the shadows like the others. The company emphasized that it was being held up in the media as a scapegoat for societal concerns about what is done with all of our data, but that such criticism was unfair because every company does the exact same thing, quietly and in the shadows, while Facebook feels it is important to let users know about all of the data it is purchasing about them.
In truth, Facebook is right that the behavioral and demographic profiles held on us by major data brokers and the myriad smaller vendors and brokers they license from form the backbone of modern targeted advertising. The amount of data that can be commercially purchased about us today is at once breathtaking and absolutely frightening, at volumes and resolutions that would make Orwell proud.
Facebook is also right that few bulk purchasers of data broker products are forthcoming about the information they acquire. Moreover, the kinds of organizations that buy such data would surprise many people. Anyone who has graduated from a major US university and achieved any degree of financial success would be absolutely astonished to see the thick biographies compiled and purchased by their alma mater about them. I’ve personally held in my hands university reports more than 100 pages long, miniature intelligence dossiers that compile every available data point on a high net worth alumnus. Everything from marriage, divorce and birth records to property, boat, car and plane records, auction records, art holdings, any available estimates of the person’s financial status, a breakdown of liquid and illiquid assets and often interviews with faculty and any other contacts they had while at the university regarding interests, possible friends and contacts and anything they might have said or done while on campus that could shed light into how best to motivate them to donate. Such dossiers often include extensive behavioral and psychological estimates of influence points that might increase their likelihood to donate, suggestions of activities to sway them and extensive estimates on wills and asset flows to children and relatives. In short, we are being profiled every day by organizations we don’t even imagine – at least Facebook is being upfront about buying our data.
In light of the Cambridge Analytica story, it is certainly striking that just a year ago the company was emphasizing that it mass purchased third-party data broker records because that’s just what everyone does and to compete in the modern advertising industry you simply have to. When pressed on whether Facebook might see itself as perhaps taking a moral stand and saying that even though everyone else does it, perhaps they should set an example and not do it, the company simply repeated that to be competitive in today’s advertising world, you had to buy all that data. It is amazing to think that in Facebook’s view last year, we had reached a point where if you’re a business selling ads today, you have no choice but to engage with the data brokerage industry and buy all their profiles that commercialize us.
I briefly walked through some of the major errors in the Oracle, BlueKai Registry and Acxiom profiles on me and noted that more than three quarters of their information about me was completely and utterly wrong. I then asked Facebook what it would say to its advertisers that the data they use to buy ads on its platform can be so wildly inaccurate. The company’s response? I would need to talk to the data brokers – Facebook just licenses their data and provides it as-is without any correction or cleanup. To Facebook’s point, its job in this case is merely to provide the exact same targeting capability, good and bad, as advertisers use everywhere else.
It is noteworthy that both Oracle's and Acxiom's data was more than three-quarters wrong, wildly so in many categories, suggesting that this is an industry-wide issue rather than a problem with a single data broker. Of course, the way in which aggregators license from aggregators which license from aggregators means that any given wrong data point can find its way into countless observed and computed metrics held by countless brokers. The fact that many of the common errors in my profile were mirrored in the errors a Reuters reporter found in his own profile suggests this is indeed a broader issue. Though perhaps, as Oracle claims, the issue is that profiles for reporters and researchers are inaccurate due to their wide range of interests and research.
It would be fascinating to see the results if every person in America requested their records from the major brokers and reported the error rate for their specific profile. Perhaps Pew, ProPublica or another major polling/research organization might consider running a statistically representative experiment, asking a selected cross section of people across the US to request their reports and detail the errors, allowing for the first time an independent external audit of the accuracy of the data brokerage industry.
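The arithmetic for such an audit is straightforward: each participant reports how many of their profile fields are wrong, and the overall error rate comes with a margin of error that shrinks as the sample grows. The sketch below computes a simple 95% confidence interval from hypothetical survey results; the numbers are invented for illustration, and a real study would also account for the fact that fields from the same respondent are not independent.

```python
import math

# Hypothetical audit results: (wrong_fields, total_fields) for each participant.
reports = [(85, 108), (83, 113), (20, 96), (54, 101), (71, 110)]

wrong = sum(w for w, _ in reports)
total = sum(t for _, t in reports)
p = wrong / total  # overall observed error rate

# Normal-approximation 95% confidence interval for a proportion.
# (Ignores clustering by respondent, so the true interval would be wider.)
margin = 1.96 * math.sqrt(p * (1 - p) / total)
print(f"error rate = {p:.1%} +/- {margin:.1%}")
```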
This brings us back to the question of transparency. Each of the data brokers refused to comment on the specific sources it acquires its data from, considering them a trade secret. While each offers consumers the option to ask not to be included in future mailing lists, none explicitly advertises the right of consumers to demand that the company delete every data point bearing on that consumer and certify that destruction in writing. After all, why should we have any rights at all to our own information or any say in who can possess and commercialize it?
Facebook itself emphasized that transparency is central to its advertising business. It noted that users can click on the upper right of any ad to see why it was shown to them and the specific marketing segment they were categorized under. It will also show them the vendor (Acxiom, Epsilon, Oracle, etc.) the segment was purchased from and show users how to contact the vendor to ask that the segment be corrected.
I asked Facebook why users could only see the marketing data it held on them when it was used to target a specific advertisement. I might well click on every ad I saw and still only know a small fraction of the marketing segments Facebook believed I belonged to. Why didn't the company offer a page somewhere that any user could go to that would show absolutely every single marketing segment the company had about them from any source, ranging from its own proprietary selectors to every data point it had purchased from any outside source? While the company's “download your data” service displays some advertising selectors, the question was why Facebook didn't allow its users to see absolutely everything advertisers could use to target them.
Each time I asked this question, the company responded that it was the most transparent about buying its users' information from data brokers, but did not actually answer the question of why it didn't want to be even more transparent by showing them the totality of all the data it had bought, rather than merely offering small glimpses, one ad at a time.
The company offered that its focus was on ensuring users saw the most targeted ads, ads that were as reflective of their interests and behavior as the company could make them. I noted that if providing the best possible ads was its focus, then allowing users to see the complete inventory of all marketing segments the company held on them would let them correct any errors and ensure that the ads they saw were even better. In my case, three quarters of the selectors from at least two of its brokers were completely wrong, so my ads will be wildly off, but if Facebook allowed me to see all those selectors, a user like myself could go in and correct all those errors at once, not one ad at a time. I noted that Acxiom claimed that 37% of visitors to its data portal made corrections to their records, suggesting Facebook's users might have similarly substantial interest in this service.
Facebook acknowledged that its licensing agreement with the data brokers permitted it to offer such an interface, but that it would not comment on why it did not allow users to see all of the information it held on them. However, it did allow that the concept of such a page was a fascinating and fantastic idea that it would pass on to its product teams and actively consider. Though, a year later it appears the company decided otherwise.
When asked about all of the medical and mental health targeting categories offered by some of the data brokers it worked with, Facebook noted that its advertising policies currently prohibit running ads that explicitly target medical or mental status. Yet, when asked to commit on the record as a corporate vision statement that the company would not in the future change its rules to allow ads that target medical and mental status, especially in light of discussions in Australia to do precisely that with young children, the company declined to do so.
Putting this all together, we keep talking about Facebook as some sort of frightening Orwellian surveillance machine that follows us everywhere, knowing all, and whose data is so accurate and powerful that it can manipulate us to such a degree as to throw a presidential election. The truth is that even Facebook, with everything it knows about us, has to buy a huge amount of behavioral and demographic profiling data about us from third-party data brokers that track us vastly more intrusively than it ever could, allowing the company to follow us across the web and into our offline lives, recording our most intimate and private behaviors that we may go to great lengths to hide from our Facebook selves. Facebook argues that it must buy this data because that is simply how advertising is done today and companies want to use the same marketing selectors across every platform, and moreover that its own data about its users simply cannot offer the same level of richness and insight as the profiles owned by the data brokers, so it cannot just replicate their offerings using its own information. At the same time, at least in my case, the profiles held by two large data brokers across three of their offerings were comically inaccurate, with more than three quarters of their profile entries wrong, and many of the errors I saw were mirrored in the profile of another reporter.
Facebook announced last week that it is ending its licensing of data from these data brokers, though it provided no detail beyond saying it hoped the move would “improve people's privacy.” Is it that Facebook finally found a way to reverse engineer the categories offered by the data brokers using its own data? Is it that the extraordinary error rates in my data broker profiles are not unusual and the company decided the data simply wasn't of high enough quality for its needs? Or did Facebook decide that the “creepiness” factor of users seeing ads on Facebook based on a business transaction or interest they expressed off Facebook, or entirely offline, was simply too much of a public relations problem in today's post-Cambridge Analytica narrative? We simply don't know.
In the end, our relentless focus on Facebook is misplaced if our concern is about how unknown private companies are commercializing and benefiting from our data without our knowledge or informed consent. If we are concerned that our every action is being monitored and monetized by a labyrinth of businesses that have no obligation to tell us they have our data, no obligation to answer our questions about where they got it from, no obligation to let us correct it (though some do) and no obligation to delete it, then we should really be talking about the entire data brokerage industry, not just Facebook. After all, these are the companies that have such rich archives about us that even Facebook had to buy their data for the past half-decade and was unable to replicate the sheer reach and precision of their insights using only its own observations of us. Facebook knows only the perfect airbrushed us, while the data brokers know everything we do and even those that #deletefacebook can’t hide from the data brokers.
If the Facebook / Cambridge Analytica story is to be the start of a societal conversation about privacy and the right to control our digital selves and the ways in which we are bought and sold every day, we need to start talking about the data economy with the data broker industry at its center. After all, the privacy story isn’t about Facebook, it is about our right to own and control our digital selves in a data-driven world.
|
7b96f3b0500b8cbf703fd213e24f79ef | https://www.forbes.com/sites/kalevleetaru/2018/04/05/why-are-we-just-finding-out-now-that-all-two-billion-facebook-users-may-have-been-harvested/ | Why Are We Just Finding Out Now That All Two Billion Facebook Users May Have Been Harvested? | Why Are We Just Finding Out Now That All Two Billion Facebook Users May Have Been Harvested?
The Facebook logo. (Jaap Arriens/NurPhoto via Getty Images)
Facebook kept the privacy headlines going yesterday when it acknowledged that “malicious actors have also abused [the platform] to scrape public profile information… given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way,” while Zuckerberg himself offered that “I would assume if you had that setting turned on that someone at some point has access to your public information in some way.” In short, the company acknowledged what I’ve said many times before – likely the entirety of Facebook’s two billion public profiles (and quite a few private profiles) are archived in repositories all over the world by academics, companies and criminal actors, not to mention countless governments. The big story was not Facebook’s confirmation of this, but rather why the company took until yesterday to confirm it.
For years many like myself have warned of the sheer magnitude of Facebook scraping performed every day across the world by academic and commercial interests (government surveillance is a whole different world unto itself). Academics in particular have long harvested Facebook data in bulk with the full permission of their ethical oversight boards, frequently with US federal government funding from agencies like NSF and with the results published in top academic journals. These archives are almost never deleted and are frequently shared across the world, with mailing lists and conference sidelines filled with offers of bulk downloaded data.
Bulk harvested datasets frequently find their way from academia into commercial for-profit startup enterprises as universities increasingly encourage their faculty to commercialize their research, with surprisingly few institutions asking many questions about the data flowing freely from their labs into their faculty members' side ventures. After all, the Cambridge Analytica story is at its core that of an academic allegedly making research data available to a for-profit company without ensuring all of the necessary permissions had been received for the transfer – a story that happens every day at universities all over the world.
To those familiar with academic and commercial data practices, Facebook's revelation that potentially its entire user community, all two billion of them, have had their public profile data harvested without their knowledge is old news. The only surprising part is why Facebook waited until now, April 2018, to acknowledge the scope of the unauthorized harvesting, and why it is focusing only on a narrow slice of that harvesting rather than the myriad other forms of bulk harvesting used against its systems every day.
Over the past year I have repeatedly asked Facebook for its stance on bulk harvesting and research use of its users’ data. Last February I asked the company if it had comment on the mass harvesting of data by commercial enterprises for political purposes and whether it had any policies prohibiting the use of personality quizzes or other apps that bulk harvested profiles. In June I asked it, in light of all of the ways Facebook itself was conducting research on its users, whether it might consider offering users the right to opt out of having their personal data exploited by Facebook for research. In September, in the aftermath of the controversial “gaydar” study that claimed to be able to estimate someone’s sexual orientation from their photo and used a large volume of harvested Facebook data, I asked whether the work’s mass harvesting of profile photos was of concern to the company. Just last month I asked whether Facebook was planning to request that large holders of data harvested from the platform delete their archives or whether it planned to request that bulk Facebook datasets available for download be restricted to university researchers and exclude commercial researchers. Not to mention countless other requests for comment about various Facebook research uses of private user data. In every case the company’s response was silence.
If Facebook was so concerned about bulk harvesting and use of its users’ data, it certainly would seem that the company would have taken every opportunity to state that bulk harvesting, archival and commercial exploitation of private user data was something it was concerned about. It could comment that it was working to identify bulk harvesting, to request that companies and universities delete those archives or that it was asking that universities restrict access to the large harvested datasets they make available for download, limiting them to academic and not commercial uses. Instead, radio silence until the company lost control of the privacy narrative and suddenly decided now was the time to say it was shocked by how its data was being harvested and would take steps to rein it in.
Where was all this concern a year ago?
More to the point, in its statements yesterday, Facebook offered that its estimate of two billion profiles being downloaded was based on “the scale and sophistication of the activity we’ve seen.” Why was Facebook not monitoring its system logs from the beginning looking for bulk harvesting activity?
It turns out they were. Indeed, when the Obama campaign bulk harvested data from the platform, the company’s security teams immediately detected the bulk harvesting and approved it.
If Facebook was so easily able to detect the Obama campaign harvesting, why didn’t it see all of these other harvesting efforts? Given that it was able to identify that up to its entire user community of two billion people may have had their public profiles harvested at least once, it is clear the company did not lack the logging or analytic tools to identify such activity. The company did not respond to a request for comment.
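For concreteness, the kind of log monitoring at issue here need not be exotic. The sketch below is a minimal, hypothetical illustration of flagging bulk harvesting from API logs by counting how many distinct profiles a single client requests within a sliding one-hour window; the log fields, endpoint name and threshold are assumptions made purely for illustration and do not describe Facebook’s actual systems.

```python
from collections import Counter, defaultdict
from datetime import timedelta

# Hypothetical log entries: (timestamp, client_id, endpoint, profile_id).
# The endpoint name and the 10,000-distinct-profiles-per-hour threshold are
# illustrative assumptions, not anything Facebook has disclosed.
BULK_THRESHOLD = 10_000
WINDOW = timedelta(hours=1)

def flag_bulk_harvesters(log_entries):
    """Return the set of client_ids that requested more than BULK_THRESHOLD
    distinct profiles within any one-hour window."""
    per_client = defaultdict(list)
    for ts, client_id, endpoint, profile_id in log_entries:
        if endpoint == "/profile/lookup":
            per_client[client_id].append((ts, profile_id))

    flagged = set()
    for client_id, events in per_client.items():
        events.sort()            # order by timestamp
        seen = Counter()         # distinct profiles inside the current window
        start = 0
        for ts, profile_id in events:
            seen[profile_id] += 1
            # Shrink the window from the left until it spans at most one hour.
            while ts - events[start][0] > WINDOW:
                old = events[start][1]
                seen[old] -= 1
                if seen[old] == 0:
                    del seen[old]
                start += 1
            if len(seen) > BULK_THRESHOLD:
                flagged.add(client_id)
                break
    return flagged
```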
In reality, it likely comes down to the fact that Facebook’s early years were defined by the goal of becoming a data hub, the plumbing around which it would remake the web in its image. By being open with its APIs and allowing harvesting of user data, it would become the invaluable must-have nexus of the evolving web. Twitter’s early trajectory was very similar: it made its firehose freely accessible and worked hard to become a central web nexus. Today Facebook has focused instead on building a walled garden over which it wields total control, bringing data into its platform rather than letting it out. Twitter, too, has locked up its firehose to paying customers only and tightened its API policies.
In a call yesterday with reporters, Zuckerberg offered that “life is learning from mistakes” and that “we're an idealistic and optimistic company … we know now we didn't do enough to focus on preventing abuse and thinking through how people use these tools to do harm.” The problem is that when a platform that holds the digital lives of two billion people learns from its mistakes and naively believes there aren’t bad actors out there working hard to harvest its valuable data, then those two billion people lose their irreplaceable privacy in the process and face greater exposure to identity theft, bullying and other ramifications.
Putting this all together, the real story yesterday was not that all two billion of Facebook’s users may have had their public information harvested – that’s old news well known to those that study data use and privacy. The story was why Facebook waited until April 2018 to finally confirm it and why for the past year it has refused to step up and condemn the activities it is now saying are incompatible with its corporate vision. If mass commercial harvesting is wrong, why did it condone it in 2012 and why didn’t it forcefully denounce such behavior and take action to restrict and remediate it when asked repeatedly about it last year? In the end, it is nice to say that Facebook is learning from its mistakes, but in the real world there are real consequences to Facebook’s actions, and as a platform, its reach and influence over society is so great that one must ask whether Silicon Valley's mantra of moving fast and breaking things is really the right mindset in a world in which the things being “broken” are people’s lives.
|
3c6073e3dfd9c1ea394dc18a83c0e110 | https://www.forbes.com/sites/kalevleetaru/2018/05/15/the-problem-with-using-ai-to-fight-terrorism-on-social-media/ | The Problem With Using AI To Fight Terrorism On Social Media | The Problem With Using AI To Fight Terrorism On Social Media
Photo illustration of ISIS imagery. (Jaap Arriens/NurPhoto via Getty Images)
Social media has a terrorism problem. From Twitter’s famous 2015 letter to Congress that it would never restrict the right of terrorists to use its platform, to its rapid about-face in the face of public and governmental outcry, Silicon Valley has had a change of heart in how it sees its role in curbing the use of its tools by those who wish to commit violence across the world. Today Facebook released a new transparency report that emphasizes its efforts to combat terroristic use of its platform and the role AI is playing in what it claims are significant successes. Yet, that narrative of AI success has been increasingly challenged, from academic studies suggesting that not only is content not being deleted, but that other Facebook tools may actually be assisting terrorists, to a Bloomberg piece last week that demonstrates just how readily terrorist content can still be found on Facebook. Can we really rely on AI to curb terroristic use of social media?
The same year Twitter rebuffed Congress’ efforts to rein in terroristic use of social media, I was presenting to a meeting of senior Iraqi and regional military and governmental leadership when I was asked for my perspective on how to solve what was even then a rapid transformation of recruitment and propaganda tradecraft by terrorist organizations. The resulting vision I outlined of bringing the “wilderness of mirrors” approach of counterintelligence to the social media terrorism domain received considerable attention at the time and outlined the futility of the whack-a-mole approach to censoring terrorist content.
Yet, Facebook in particular has latched onto the idea of using AI to delete all terroristic content across its platform in realtime. Despite fighting tooth and nail over the past several years to avoid releasing substantive detail on its content moderation efforts, Facebook today released some basic statistics that include a special focus on its counter-terrorism work. As with each major leap in action or transparency, it seems increasing government and press scrutiny of Facebook has finally paid off.
As with all things Silicon Valley, every word in these reports counts and it is important to parse the actual statements carefully. Facebook, to its credit, was especially clear on just what it is targeting. It headlined its counter-terrorism efforts as “Terrorist Propaganda (ISIS, al-Qaeda and affiliates),” reinforcing that these are the organizations it is primarily focused on. Given that these are the groups that Western governments have put the most pressure on Silicon Valley to remove, the effect of government intervention on forcing social media companies to act is clear.
In its report, Facebook states specifically “in Q1 we took action on 1.9 million pieces of ISIS, al-Qaeda and affiliated terrorism propaganda, 99.5% of which we found and flagged before users reported them to us.” This statement makes clear that the overwhelming majority of terrorism-related content deleted by Facebook was flagged by its AI tools, but says nothing about what percent of all terrorism content on the platform the company believes it has addressed.
In contrast, the general public would be forgiven for believing that Facebook’s algorithms are vastly more effective. The New York Times summarized the statement above as “Facebook’s A.I. found 99.5 percent of terrorist content on the site, leading to the removal of roughly 1.9 million pieces of content in the first quarter,” while the BBC offered “the firm said its tools spotted 99.5% of detected propaganda posted in support of Islamic State, Al-Qaeda and other affiliated groups, leaving only 0.5% to the public.” In fact, this is not at all what the company has claimed.
When asked about similar previous media characterizations of its counter-terrorism efforts, a company spokesperson clarified that such statements are incorrect, that the 99% figure refers exclusively to the percent of terrorist content deleted by the company that had been flagged by AI.
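To make the distinction concrete, here is a small worked example with entirely hypothetical numbers. It contrasts the metric Facebook actually reported (the share of its removals that its own tools flagged first) with the metric the coverage implied (the share of all terrorist content on the platform that was found), which would require a denominator Facebook has never disclosed.

```python
# All figures are hypothetical, chosen only to illustrate the two different metrics.
removed_total = 1_900_000             # pieces of content Facebook acted on
removed_flagged_by_tools = 1_890_500  # of those, the pieces its tools flagged first

# What Facebook reported: of the content removed, how much was machine-flagged.
share_of_removals_flagged = removed_flagged_by_tools / removed_total  # ~99.5%

# What the coverage implied: of ALL terrorist content on the platform, how much
# was found. This needs the (undisclosed) total; assume 4 million purely for show.
assumed_total_on_platform = 4_000_000
share_of_all_content_found = removed_total / assumed_total_on_platform  # ~47.5%

print(f"Machine-flagged share of removals: {share_of_removals_flagged:.1%}")
print(f"Share of all content found (hypothetical): {share_of_all_content_found:.1%}")
```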
Of course, there is little Facebook can do to correct the mainstream media’s misunderstanding and mischaracterization of its efforts, but at the least perhaps the company could in future request that outlets correct such statements to ensure the public understands the actual strengths and limits of Facebook’s efforts.
It is also important to recognize that the overwhelming majority of this deleted content is not the result of powerful new advances in AI that are able to autonomously recognize novel terrorism content. Instead, they largely result from simple exact duplicate matching, in which once a human moderator deletes a piece of content, it is added to a site-wide blacklist that prevents other users from reuploading it. A company spokesperson noted that photo and video removal is currently performed through exact duplicate matching only, while textual posts are also removed through a machine learning approach.
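In its simplest form, that kind of exact duplicate matching is just a hash lookup, roughly along the lines of the hypothetical sketch below. Facebook has not published its actual fingerprinting scheme; a plain SHA-256 over the raw bytes is used here purely for illustration, and it also shows the technique’s core limitation: any re-encoded or slightly edited copy produces a different hash and sails through.

```python
import hashlib

# Fingerprints of content a human moderator has already removed.
blacklist = set()

def fingerprint(data: bytes) -> str:
    # Illustrative only: an exact byte-level hash. Production systems typically
    # use perceptual hashes so that re-encoded copies still match.
    return hashlib.sha256(data).hexdigest()

def moderator_removes(data: bytes) -> None:
    """Record removed content so identical re-uploads can be blocked."""
    blacklist.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject uploads whose bytes exactly match previously removed content."""
    return fingerprint(data) not in blacklist

# Example: the identical file is blocked, but changing a single byte evades it.
moderator_removes(b"propaganda-video-bytes")
assert allow_upload(b"propaganda-video-bytes") is False
assert allow_upload(b"propaganda-video-byteX") is True
```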
It is unclear why the company took so long to adopt such exact duplicate removal technology. When asked previously, before it launched the tools, why it was not using such widely deployed fingerprinting technology to remove terrorist content, the company did not offer a response. Only under significant governmental pressure did it finally agree to adopt fingerprint-based removal, again reinforcing the role policymakers can have on forcing Silicon Valley to act.
Thus, while Facebook tells us that the majority of terroristic content it deletes is now based on algorithms that prevent previously deleted content from being reuploaded and machine flagging of textual posts, we simply have no idea whether Facebook’s algorithms are catching even the slightest fraction of terroristic material on its site. More importantly, we have no idea how much of the “terrorism” content it deletes actually has nothing to do with terrorism and is the result of human or machine error. The company simply refuses to tell us.
It is clear, however, that even the most basic human inspection of the site shows its approach is limited at best. Bloomberg was able to rapidly locate various terroristic content just from simple keyword searches in a handful of languages. One would think that if Facebook was serious about curbing terroristic use of its platform, they would have made it harder to find terrorism material than just typing a major terrorist group’s name into the search bar.
When asked about this, a company spokesperson emphasized that terrorism content is against its rules and that it is investing in doing better, but declined to comment further.
When asked why Facebook doesn’t just type the name of each major terrorist group into its search bar in a few major languages and delete the terrorist content that comes back, the company again said it had no comment beyond emphasizing that such material is against its rules.
Therein lies perhaps the greatest pitfall of Facebook’s AI-centric approach: an overreliance on and trust in machines without humans verifying the results, and a focus on dazzling, technologically pioneering approaches over the basic, simple approaches that get you much of the way there.
All the pioneering AI advances in the world are of limited use if users can just type a terrorist group’s name into the search bar to view all their propaganda and connect with them to join up or offer assistance.
Yet, in its quest to find an AI solution to every problem, Facebook all too often misses the simpler solution, while the resulting massively complex and brittle AI has unknown actual accuracy.
It makes one wonder why no one at Facebook ever thought to just search for terrorist groups by name in their search bar. Does this represent the same kind of stunning failure of imagination the company claims was behind its lack of response to Russian election interference? Or is it simply a byproduct of AI-induced blindness in which the belief that AI will solve all society’s ills blinds the company to the myriad issues its AI is not fixing?
The company has to date declined to break out the numbers of what percent of the terrorism content it deletes is simple exact duplicate matching versus actual AI-powered machine learning tools flagging novel material. We therefore have no idea whether its massive investments in AI and machine learning have actually had any kind of measurable impact on content deletion at all, especially given a spokesperson’s emphasis that video and image content (which constitutes a substantial portion of ISIS-generated content) is deleted only through exact matching of material previously identified by a human, not AI flagging of novel content.
Thus, from the media narrative of pioneering new deep learning advances powering bleeding edge AI algorithms that delete 99% of terrorism content, we have the actual reality of the far more mundane story of exact duplicate matching preventing 99% of reuploads and perhaps some machine learning text matching thrown in for good measure. We have no idea what percent of actual terrorism content this represents or how much non-terrorism content is mistakenly deleted. The fact that the company has repeatedly declined to comment on its false positive rate while touting other statistics offers little confidence that the company believes its algorithms are sufficiently accurate to feel comfortable commenting.
In similar fashion, Twitter has repeatedly declined to provide detail on its own similar content moderation initiatives.
Most frightening of all is that we, the two billion members of Facebook’s walled garden or Twitter’s 336 million monthly active members, have no understanding and no right to understand how our conversations are being steered or shaped. As the companies increasingly transition from passive neutral communications channels into actively moderated and controlled mediums, they are silently controlling what we can say and what we are aware of without us ever knowing.
It is remarkable that the very companies that once championed free speech are now censoring and controlling speech with the same silent heavy-handed censorship and airbrushing that have defined the world’s most brutal dictatorships. Democracy cannot exist in a world in which a handful of elites increasingly control our information, especially as Facebook becomes an ever more central way in which we consume news.
Putting this all together, as Silicon Valley increasingly turns to AI to solve all the world’s problems, especially complex topics like hate speech and terrorism, companies must ensure that their AI-first approach doesn’t blind them to whether those AI approaches are actually addressing the most common ways users might discover or engage with terroristic content and whether their AI tools are actually working at all. Companies should also engage more closely with the media that present their technologies to the public, ensuring accuracy in the portrayal of those tools, and could help by offering greater detail, including false positive rates. Instead of viewing the media as adversaries to be starved of information, companies could embrace them as powerful vehicles to help convey their stories and the very real complexities of moderating the digital lives of a global world. In the end, however, it is important to remember that we are rapidly embarking down a path of consolidation in which a few elites wielding AI algorithms increasingly hold absolute power over our digital lives, from what we say to what we can see, turning the democratic vision of the early web into the dystopian nightmare of the most frightening science fiction.
|
c1343eaf61da867f7281e06727da7451 | https://www.forbes.com/sites/kalevleetaru/2018/06/18/facebooks-device-partners-and-how-nothing-has-changed-since-cambridge-analytica/ | Facebook's Device Partners And How Nothing Has Changed Since Cambridge Analytica | Facebook's Device Partners And How Nothing Has Changed Since Cambridge Analytica
Facebook logo. (Jaap Arriens/NurPhoto via Getty Images)
Earlier this month the New York Times reported that Facebook had provided highly privileged access to the social network’s platform to more than 60 device makers to allow them to build their own “Facebook experiences” in the era before smartphone apps became popular and that this access continued at least in part through earlier this year. Facebook pushed back on the report, arguing that the device makers were acting as extensions of itself, rather than as third parties. Making matters worse, one of those partners has been flagged by the US intelligence community as a national security threat. What can we learn from this latest revelation about Facebook’s approach to user privacy and security?
For a company whose founder was hauled before Congress to testify about a privacy scandal involving the lack of control users have over their content and a lack of transparency about Facebook’s partnerships with external entities that have access to user data, it would seem that the company missed an obvious opportunity to educate lawmakers and the public about the full extent of the access to user data it provides to others. While the company briefly acknowledged the existence of its partnerships program, it had provided little detail until the Times' report.
In its public statements in the aftermath of the Cambridge Analytica scandal, Facebook has repeatedly emphasized to lawmakers and the general public that it has heavily restricted third-party access to user data, cutting off the enhanced access available prior to 2015. Yet, in reality, the company was still providing highly specialized API access to its device partners.
The company’s argument is that since device makers are helping encourage users to engage with Facebook’s platform by offering the “Facebook experience,” they are extensions of itself, rather than third parties, and thus the enhanced access did not violate its consent decree or contradict its other statements.
This recasting of external companies as extensions of Facebook rather than as third parties is a reminder of the myriad contractors and subcontractors that work for major Silicon Valley companies (and indeed all of Corporate America) and that require extensive access to sensitive private user data in the course of their business. Yet, casting device makers as extensions of the company is a concerning interpretation that essentially allows Facebook to make any company anywhere in the world part of itself for the purposes of complying with legal or policy requirements that select user data never leave the company’s hands. In essence, if Facebook wants to share data with a third party it can simply define that company to be an extension of itself and voila, problem solved. It would seem lawmakers should take such interpretations to heart as they craft future legislation, explicitly defining what constitutes a third party and codifying into law that any handoff of user data to an entity not directly owned by Facebook itself constitutes a third party transfer.
The Times found that even after users set their privacy permissions to prohibit sharing of their data with third parties, Facebook permitted device makers to access that data, confirming the behavior on a BlackBerry Z10 device. Again, the company argued that such sharing was not a violation of those security settings, since it considered BlackBerry to be an extension of itself for the purposes of providing a Facebook experience rather than a third party.
Restricting access to users’ devices at least narrows the footprint of where user data is stored, yet the Times reports that Facebook confirmed that some of its partners did store user data, including the data of those users’ friends, on their own servers outside of Facebook’s control, but the company emphasized that it had legal agreements governing what the companies could do with all of that data.
Given that Facebook allegedly had an agreement in place with Cambridge Analytica to delete its user data which was allegedly not fulfilled, it is unclear how Facebook verified that its device partners did not make any unauthorized use of the user data they stored on their own servers and most importantly how they secured that data from unauthorized access and breaches.
Asked specifically about its partnerships with Chinese device manufacturers and how it ensured that no user data was downloaded to their servers, a company spokesperson offered a prepared statement that “given the interest from Congress, we wanted to make clear that all the information from these integrations with Huawei was stored on the device, not on Huawei's servers.”
However, asked how Facebook had confirmed that no data was stored on its Chinese partners’ servers, the company stated that it had reviewed its API logs and found no suspicious accesses from those partners. When asked again how it had confirmed that its partners were not simply copying data from user devices back to their servers, which would not appear in its API logs, the spokesperson reiterated that it simply trusted its partners. Asked whether it had conducted any forensic audits of devices in the wild to confirm they were not transmitting user data back to the manufacturer’s servers in unauthorized ways or whether it had conducted audits of any kind of its device partners, the company confirmed that it had not, that it merely trusted that all partners adhered to its rules. It is worth noting that in the aftermath of the Cambridge Analytica scandal, Facebook’s response was that it had always simply trusted that when companies made promises to it about user data that they would honor those promises and that in the post-Cambridge Analytica world it needed to audit and verify companies that accessed user data in any way. It appears those statements were merely public relations assurances to calm a nervous public and stop action from policymakers, rather than guiding principles.
Facebook declined to comment when asked whether it planned to add any security settings that would permit users to deny access to their data to all third parties, including those third parties that Facebook considered extensions of itself. It also declined to comment on whether it would allow users to see the list of external partner companies that had accessed their data in any way in the process of providing a Facebook experience.
Asked about facial recognition and whether the company’s security setting that allows users to request that Facebook not perform facial recognition on them extended to its third-party partners, the company also declined to comment. Given the company’s contorted definitions of what constitutes itself versus its extension partners versus ordinary third parties, it is concerning that the company would not make a statement for the record that a user’s refusal to allow Facebook to conduct facial recognition on them extends to the myriad contractors and subcontractors that the company works with.
Facebook also declined to comment on whether it would ever consider publishing a complete list of all companies, whether viewed as third parties or as partners or extensions of itself, that have ever held user data of any kind.
Putting this all together, in the end, it seems little has changed in Facebook’s approach to user privacy and transparency in how it uses the data that its two billion members entrust it with, turning to contorted definitions to argue that third parties are actually extensions of itself, rather than third parties under the law, releasing the minimum possible information about how it shares user data with outsiders and failing to conduct the “trust but audit” investigations it has promised policymakers and a concerned public. For its two billion users, there is nothing for them to do but wait for the next Cambridge Analytica and the next round of apologies from Mark Zuckerberg assuring them there is nothing that could have been done.
|
4bf6ca23c590e93bd8d7e54e93b9a046 | https://www.forbes.com/sites/kalevleetaru/2018/06/19/verizon-reselling-its-customers-locations-is-a-reminder-we-are-all-just-data-for-sale/ | Verizon Reselling Its Customers' Locations Is A Reminder We Are All Just Data For Sale | Verizon Reselling Its Customers' Locations Is A Reminder We Are All Just Data For Sale
Shutterstock
Earlier today Verizon and AT&T both pledged to largely stop reselling their customers’ real-time mobile phone locations to third-party commercial data brokers, though both companies noted that they would not entirely stop selling customer location data. That the nation’s top cellular phone networks have been quietly reselling the real-time locations of their customers to the shadowy world of data brokers, making it possible to track anyone in the country like a police-issue GPS tracker, is an unfortunately stark and dangerous reminder that the Facebook - Cambridge Analytica scandal is a relatively small drop in the ocean compared to the myriad ways our most sensitive personal data is bought and sold each day in ways that could seriously endanger our lives. Is there anything we can do about it?
In 2016 I covered how advertisers could use Verizon’s Smart Rewards and Verizon Select programs to precisely geotarget ads to Verizon’s mobile customers based on their physical movements, by tracking their cell phones. This program, however, only allowed advertisers to geotarget ads, not bulk download the real-time movements of anyone in the country. It turns out there was far more to Verizon’s commercialization of its users’ private locations than was apparent.
Real-time high-resolution location data is a particularly dangerous datapoint for companies to sell given that it can be used by bad actors for everything from stalking to committing robbery, rape and murder, or by foreign actors to map out national security facilities and personnel. Whether identifying when a target is waiting at a bus stop alone or simply mapping her daily pattern of life to determine her most vulnerable moments in a typical day, location information is extremely sensitive and one of the classes of private data that most connects the digital world to our physical one.
As I wrote extensively during the Facebook – Cambridge Analytica scandal, the real story that we haven’t been talking about is the myriad ways in which private companies buy and sell our data every day without us knowing. Whether it’s Walgreens leveraging a provision of HIPAA to commercialize our medical data without us realizing or the almost uncountable number of brokers that buy our data from almost every company and government agency we interact with each day and resell that data to others, a large fraction of every datapoint ever created about you by a government or private entity is likely held by the data brokerage industry today and resold every day, making money on you, without you having the legal right to even know it’s happening, let alone have any right to stop it.
For anyone who thought their Facebook data being resold was bad, take a refresher course on the data brokerage industry to be truly frightened.
It turns out one of the datasets that were for sale to these brokers was our real-time location information, courtesy of our cellphone providers.
While Verizon’s legal agreement with its users grants it the right to resell their data without notifying them or permitting them any control over that resale, it raises the question of whether it is truly informed consent to bury a lawyerly clause deep in a lengthy legal contract that few users read and that is filled with complex legal terminology that few users can likely understand. As a Verizon user myself, I was certainly not aware that Verizon was reselling my data to third-party data brokers and I have never granted it informed consent to do so. The legal right to do so may have been buried in a massive contract that I had no choice but to sign to use my cellphone, but I never knowingly granted the right to the company to sell my data and certainly have never seen any prominent displays or notifications from it that clearly spelled out in plain English that it was reselling my data and giving me an opportunity to order it to stop.
I asked the company whether it felt that most of its users were aware of this program and that it was reselling their location data and if so, what metrics it had to support that conclusion and if not, why it felt it was acceptable to resell its customers’ data without them understanding what was happening.
I also asked it whether it planned to notify customers of all of the companies their data was sold to and to allow them to see a list of what companies their data is being sold to in the future. Perhaps they could even opt out entirely?
Given that Verizon told the Associated Press that it intended to continue selling user data to selected parties, I asked whether it would publish a list of companies it planned to continue selling data to. The company argued to the AP that it was important that it continue to sell its user data to help “beneficial services” like preventing fraud and assisting stranded motorists.
To get a sense of just what “beneficial services” the company planned to continue to sell its data to, I asked Verizon if it would provide a list of the applications it deemed acceptable to continue to sell user data for.
Most importantly, however, I asked the company whether it would place a hard legal prohibition in its contracts with all data purchasers that explicitly banned them from using user data for any purpose other than those specific narrow use cases.
Unsurprisingly, a company spokesperson declined to comment on any of these questions, offering instead two quotes from its letter to Senator Wyden stating that customer privacy is of paramount importance to it.
Much as Facebook waited until the Cambridge Analytica scandal to conduct a review of how user data was being used and identify concerns, so too did Verizon note that it had now conducted a review of its resale of customer location data and identified “a number of internal questions about how best to protect our customers’ location data” and that it would no longer enter into resale agreements “unless and until we are comfortable that we can adequately protect our customers’ location data.”
This raises the question of why Verizon waited until now, in the aftermath of a privacy scandal, to conduct such a review, instead of constantly monitoring its program and proactively identifying these issues long ago. More to the point, why is Verizon only now offering that in the future it will no longer enter into agreements that it feels can’t protect its users’ data? Why wasn’t that the policy to begin with?
Given that Verizon's advertising program placed no restrictions on the areas that could be geofenced, allowing advertisers to target visitors to sensitive facilities like the CIA or undisclosed intelligence facilities, it would be interesting to explore whether foreign powers were purchasing the Verizon data to track visitors to known US Government intelligence facilities and then follow them throughout the country to identify all of the unknown sensitive facilities and to generate a catalog of US intelligence personnel that could be used to track them abroad.
Much of the privacy conversation about Facebook in the aftermath of the Cambridge Analytica story has been that if the company would just make a paid version of its website, it could eliminate ads, tracking and data targeting of those customers, allowing its two billion users to “buy” their privacy. Apple went as far as to emphasize that free services make the customer into the product and implicitly, that by buying a product, you're buying your privacy.
The problem with this utopian vision is that, as Verizon has taught us, even if you pay for something, you may still be surveilled and your most intimate private data be resold as an additional profit maker. In our modern surveillance society, just because you’re paying for something doesn’t in any way mean you’re buying your privacy and won’t be surveilled like a free ad-supported service.
Putting this all together, it seems in our surveillance society there is no way to escape being bought and sold. If it’s free, you’re likely paying for it by being repackaged and resold as data, but even if you’re paying for it, you’re still probably being resold, only now you’re paying for the privilege.
|
87ce9616995a919360b051b8c2c891d4 | https://www.forbes.com/sites/kalevleetaru/2018/06/19/verizons-lesson-that-you-cant-buy-your-privacy-and-what-it-means-for-facebook/ | Verizon's Lesson That You Can't Buy Your Privacy And What It Means For Facebook | Verizon's Lesson That You Can't Buy Your Privacy And What It Means For Facebook
Surveillance camera. (Omar Marques/SOPA Images/LightRocket via Getty Images)
In the aftermath of the Facebook–Cambridge Analytica story, Sheryl Sandberg and Mark Zuckerberg both alluded to the idea of a paid version of Facebook where users could purchase an ad-free experience. Much of the public and tech press equated such a paid ad-free experience with a surveillance-free experience. In their eyes, if there are no ads, there is no surveillance, but just because you’ve paid to hide the trackers doesn’t mean you’ve paid for them to go away. Could they simply lurk beneath the surface, tracking your every move to be monetized, hidden out of the way behind the one-way mirror instead of right in front of you? Perhaps Verizon’s user location data resale could help shed some light.
It is perhaps one of the greatest falsehoods about our modern web, repeated so often and by such luminaries that it has become almost accepted fact: if you’re not paying for the product, you are the product, and if you do pay, your privacy is protected. The problem with this mantra is that it implies that by paying for a product, you are somehow purchasing your right to no longer be a product, rather than merely paying for the privilege to be surveilled.
The rise of paid cable television should have been enough to remind us that just because something that used to be free is repackaged as a paid product doesn’t mean that you’ll no longer be monetized. On the contrary, despite paying a hefty premium, all those ads and efforts to track you will still be there. Instead of ads being the Faustian bargain that makes the TV you enjoy available, you are now paying for the privilege of having to sit through them.
Today’s commercial web is built upon the idea that privacy is something of value and that by bartering it away, companies can generate sufficient value from us to warrant granting us free access to services, many of which are designed to help encourage us to give yet more of our privacy away.
The idea of bartering our privacy for free stuff is what has largely led to the false narrative that ad-supported services turn us into a product and that by paying for the service we are no longer the product. In a capitalist society, what company in its right mind would let its customers off the hook just because they’re purchasing its products? All that customer data is of substantial value in the right hands and what’s the point in having customers if you can’t make an extra buck off selling their data? The problem isn't the ad-supported web, it is the entrenched notion in the corporate world that customer data is something to be sold.
Perhaps the problem is that we can see ads, but the majority of us are entirely oblivious to the massive shadowy world of data brokers that buy and sell nearly every data point ever created about us every moment of the day.
Just because you have a commercial relationship with a company in no way means that company won’t resell your data. In fact, in today’s world, it is simply the accepted norm that any company or organization or government agency with which you do business will likely resell or make available your data in some form for someone else to profit off your information.
Whether it is your pharmacy commercializing your medical data, your grocery store selling your grocery list or your mobile phone company selling your location, really any company you do business with today is selling everything they can about you to the almost uncountable number of brokers that orchestrate all of this buying and selling, assembling vast dossiers on you that you have no right to see, let alone control.
To put it another way, the ad-supported web has become a lightning rod for the privacy debate because tracking ads are visible, but even when you pay a company for its products, you are surveilled and converted into data just as much. Either way, you lose; the only question is whether you would rather pay to be turned into data or get something for free in the process.
Verizon’s resale of its customers’ location data shows us just how far companies will go to commercialize their users and that even extremely dangerous and sensitive data is fair game to be resold without users having the faintest idea what’s being done with their extremely personal information.
If users paying Verizon a hundred plus dollars a month are still having their data resold and commercialized, why would we expect that Facebook offering a paid ad-free version of its website would eliminate trackers and surveillance as part of the deal?
Moreover, Facebook’s entire algorithmic existence depends on being able to build rich profiles about all its two billion users to guide the information seen by themselves and their friends. Eliminating surveillance isn’t just a matter of flipping a switch – it would undermine all of the data streams that feed Facebook's algorithms and make its platform possible.
In short, even if Facebook offered a paid ad-free version of itself and even if that became an internet standard model adopted by all ad-supported websites, it is unlikely that “ad-free” would mean “surveillance free” and the new pay-for-access web would likely follow in the footsteps of the data brokers that came before, a mirror of Verizon’s “buy our product and we’ll make you into a product too.”
Putting this all together, in today’s surveillance society, purchasing a product no longer protects you from becoming a product yourself and the concept of “buying” privacy and the right to not be surveilled is now merely a quaint notion from a bygone day.
|
677837abddcfa833ea22d900d57ef4e6 | https://www.forbes.com/sites/kalevleetaru/2018/07/18/facebooks-automated-ad-labels-plus-facial-recognition-the-real-life-minority-report/ | Facebook's Automated Ad Labels Plus Facial Recognition: The Real-Life Minority Report? | Facebook's Automated Ad Labels Plus Facial Recognition: The Real-Life Minority Report?
A scene from the film 'Minority Report'. (20th Century-Fox/Getty Images)
While Microsoft called last week for government regulation of facial recognition technology, Facebook has been plowing ahead full steam in deploying its facial recognition algorithms to recognize every person on earth it legally and technically can. With the passage of the European Union’s new General Data Protection Regulation in May, Facebook has even been able to expand its facial recognition to the EU, which it had been unable to do for years until this gift from the EU legislature, a reminder of the unintended consequences of legislation. Yet, from automatically flagging users as being interested in treason against their country to patent applications involving making its facial recognition available to private companies to hook into their surveillance cameras in public spaces, it’s time to ask whether Facebook is rapidly becoming a real-life Minority Report.
If nothing else, the Cambridge Analytica story helped to educate the general public about the sheer magnitude and sensitivity of the data social media companies hold on them and just how little control those companies maintain over all of that data. It also helped to shed a bit more light, even for an instant, on the vast world of data brokers and the global landscape of companies that buy and sell our data in attempts to manipulate our feelings and actions.
Today Facebook has one of the richest collections of self-reported intelligence data in the world spanning more than a quarter of the earth’s population. Every day those two plus billion people pour into Facebook's servers live documentaries of their lives and the lives of those around them, from their colleagues and friends to perfect strangers, sharing with the company their most intimate and dangerous secrets for it to do as it sees fit.
The massive global intelligence collection machine that governments have dreamed of since the dawn of modern civilization has finally been created by one company that even succeeded in getting those two billion users to grant it the right to do what it wants with their data without restriction.
Based on the content you consume, the people you are connected with, the pages and places you interact with, the company’s shadowy algorithms watch you every moment of every day, assigning a wealth of labels to you estimating everything from your sexual orientation to how much you support your government.
Live in a country where homosexuality warrants the death penalty? Facebook will hoover up every data point it can acquire about you and watch your every move to guess your sexual orientation and provide that as a label on your account, ready for your government to issue a legal request to Facebook to get your name and arrest you. Live in a repressive regime where political opponents are regularly jailed or killed and harbor secret feelings of disagreement with your elected officials? Facebook has you covered there too, secretly assigning a label to your account listing you as interested in treason.
The company’s response when asked about these labels? That it will agree to remove the “treason” category, but will leave homosexuality and other labels, even in countries where those labels can bring the death penalty. Its reasoning? A spokesperson offered that “our goal is to ensure people see ads that are relevant and useful. It's better for the people using our service as well as for advertisers” and that there are legitimate advertising uses of the labels and “so, we’ll be keeping them” despite the immense harm they can bring to users.
When asked whether it would consider not assigning the labels in countries where such information could lead to the death penalty, the company reiterated that there are legitimate advertising uses of such labels even in those countries, such as advocacy groups and so it would be maintaining them despite the risk of death for its users there. In short, earning money from advertisers is more important than the life safety of users.
For all of the Chinese government’s efforts to create a social credit score for all its citizens, Facebook has the data to create such a ranking globally. Historically it portrayed all of this data as being used exclusively to target advertising, but the Cambridge Analytica story helped the general public realize just how widely it made their data available outside of its own servers.
To date, Facebook has not offered commercial services beyond advertising to monetize its vast wealth of user data. However, its recent patent filings paint a very different picture of its aspirations for the vast central archive of global human society that its two billion users have entrusted to it.
One patent application envisions Facebook selling a commercial service to retail companies to hook into their surveillance camera networks to perform facial recognition on every person walking through their stores, connecting them both to their real name and to every data point Facebook has about their interests. If that wasn’t Orwellian enough, the patent further envisions using all of Facebook’s information about that user and their friends to assign a “trust” score to each shopper that would be used to determine how they are treated in the store and what security measures might confront them.
In short, walk into a store of the future and from the moment you pass through the doors, the store knows exactly who you are, what your interests are and has assigned you a rating as to whether you should be trailed by security as a possible shoplifter or waltz through the store like a VIP. Of course, as with other such commercial rating systems, you would likely never have the right to request your score or have a wrongful score corrected.
This raises the even more frightening scenario of governments subscribing to such a service to perform realtime facial recognition of more than two billion people across every surveillance camera nationwide and using those same “trust” scores to identify low-trust individuals or those who have criticized their government and have them placed under police surveillance.
Imagine a country where homosexuality is illegal acquiring either through legal action or hacking from Facebook a master list of all its two billion users that it believes are homosexual and then tying that to its national surveillance camera network to automatically flag any of those individuals who have entered its borders and arresting them in realtime or simply stopping them at the border. Or seeing who their friends and family are and arresting them too.
Taking this a step further, imagine a future generation of targeted assassination smart weapons such as a fleet of armed drones powered by Facebook’s facial recognition software. A government could simply pull up a target’s social media profile and click a button to instantly deploy his or her facial recognition model to their drone fleet for assassination, deploying the drones to hunt for the person.
When asked about its future plans for the technology, a company spokesperson offered that the company often seeks “patents for technology it never put into effect” and that patent filings were not an indication of the company’s plans. Such “defensive patents” are extremely common in the technology world yet are troubling for the simple fact that they show the company is thinking along such Orwellian lines.
Perhaps the company might commit to never using technology for evil?
When asked whether the company would go on the record as a corporate principle to promise never to offer facial recognition of its users as a service to companies or governments for any purpose, including surveillance or other intelligence or military applications, the company declined to do so.
It is particularly notable that while dismissing the patent filings as merely defensive patents that did not necessarily reflect the company’s future plans, it also in the same breath refused to completely rule out such applications as a corporate principle.
Putting this all together, Facebook has all of the pieces to create a real-life version of the world of Minority Report, ranging from identifying citizens who harbor feelings of disagreement with their government to estimating those who might commit a crime in the future to the facial recognition needed to deploy a fleet of drones to locate or target them. The only question is what Facebook chooses to do next and whether it chooses to rein in how it monetizes its users or whether its blind ambitions take us ever faster towards an Orwellian dystopia.
|
b907dad042ea62b472ffd38a528fbd82 | https://www.forbes.com/sites/kalevleetaru/2018/07/20/facebook-as-the-ultimate-government-surveillance-tool/?sh=47675022909c | Facebook As The Ultimate Government Surveillance Tool? | Facebook As The Ultimate Government Surveillance Tool?
Photo illustration of the Facebook logo. (CHRISTOPHE SIMON/AFP/Getty Images)
Earlier this month it came out that among Facebook’s myriad algorithmically induced advertising categories was an entry for users whom the platform’s data mining systems believed might be interested in treason against their government. The label had been applied to more than 65,000 Russian citizens, placing them at grave risk should their government discover the label. Similarly, the platform’s algorithms silently observe its two billion users’ actions and words, estimating which users it believes may be homosexual and quietly placing a label on their account recording that estimate. What happens when governments begin using these labels to surveil, harass, detain and even execute their citizens based on the labels produced by an American company’s black box algorithms?
One of the challenges with the vast automated machine that is Facebook’s advertising engine is that its sheer scale and scope means it could never possibly be completely subject to human oversight. Instead, it hums along in silence, quietly watching the platform’s two billion users as Big Brother, silently assigning labels to them indicating its estimates of everything from their routine commercial interests to the most sensitive and intimate elements of their personality, beliefs and medical conditions that could be used by their governments to manipulate, arrest or execute them.
Such concerns are unfortunately far from hypothetical. I can personally attest that there are many governments across the world that are very much aware of the potential of Facebook’s advertising tools for surveillance and indeed use them actively to track specific demographics and interests, using the company’s built-in reporting tools to identify geographic areas and demographics to target for further surveillance.
Today much of the governmental use of Facebook’s ad targeting tools revolves around using its publicly accessible targeting and reporting tools to understand things like which neighborhoods have the highest density of persons in a particular demographic that also have a particular interest of concern to the government. By running large numbers of parallel campaigns covering all of the permutations of a set of demographics and interests, governments can even learn which demographics are most associated with particular interests and which interests are most strongly correlated with particular demographics. Geographic reporting tools allow neighborhood-level identification of where those demographics and interests coincide, allowing surveillance resources to be increased in those areas.
The public availability of Facebook’s targeting tools means intelligence agencies need no court orders to leverage them, foreign intelligence services can use them to track and surveil on foreign soil and even local law enforcement agencies can use them with few restrictions. The global availability of Facebook’s advertising platform offers a particularly powerful and inviting tool for intelligence agencies attempting to map out adversarial nations, allowing them to better understand demographic and interest correlations and geographic affinities and guide the allocation of their own ground based resources.
In spite of their incredible power and public availability, Facebook’s ad tools are still a relatively blunt instrument compared to traditional individual-level surveillance tools.
As I alluded to earlier this week, what happens when countries in which homosexuality is a criminal offense that can potentially bring the death penalty use Facebook’s tools to target those communities? Using only Facebook’s public advertising tools, they can estimate popular neighborhoods and hangouts, correlated interests in those areas and so on, but they can’t readily compile a list of real names and addresses of everyone in their country that Facebook believes may be homosexual.
Given that homosexuality in some countries is classified as a crime under their formal legal code, could those countries use a court order to force Facebook to provide a list of all names of individuals in their country that its algorithms believe may be homosexual? The laws of many countries would make it difficult for Facebook to attempt to shield its users from a lawful request for a list of individuals suspected of committing what is in that country a serious crime.
Compounding matters, those individuals may have no idea that Facebook has identified them as potentially homosexual. They may take great care in all of their communications, friendship connections, likes, statements, status updates and all other online actions in an attempt to prevent the government from suspecting them. Yet, Facebook’s unyielding all-watching Eye of Sauron is not easily fooled and will likely eventually assign them a marketing label indicating its belief of their sexual preference based on the most nuanced patterns invisible to the human eye.
While Facebook agreed to remove its treason category due to its illegality in all countries (left unspoken was its limited marketing use, which meant it was likely generating little revenue), a company spokesperson stated that the company would not be removing other categories that could place individuals at grave risk of arrest or death. Noting that homosexuality categories can be used by LGBTQ advocacy groups to reach people interested in those topics, the company said that it would not be restricting the use of homosexuality categories or any of its other sensitive topics categories even in countries where they are illegal.
When asked whether the company would at least consider limiting its application of sensitive categories in countries where they are illegal, such as not automatically labeling homosexual users against their will in countries where they could face the death penalty, the company offered that since marketers wanted to target those sensitive categories even when they placed users at grave risk of physical harm or death, “we’ll be keeping [them].”
It is remarkable that the company would not even consider placing the life safety of their users ahead of its marketing interests and that revenue generation is prioritized even when it has a very real possibility of leading to the death of those users. Such are the ethical and moral standards of today’s Silicon Valley.
This raises the question then of what Facebook would do when confronted with a formal legal request, such as public court order or a more secretive National Security Letter or similar, that ordered the company to hand over the names and IP addresses of all users that its algorithms believed were interested in certain topics or belonged to certain demographic groups.
When asked “has Facebook ever received a request from any government agency worldwide that asked it to provide a list of user names of accounts that had specific advertising interest labels associated with them” a company spokesperson replied that the company would provide that information to any government “In response to a legal request (like a search warrant, court order or subpoena) if we have a good faith belief that the law requires us to do so. This may include responding to legal requests from jurisdictions outside of the United States when we have a good-faith belief that the response is required by law in that jurisdiction, affects users in that jurisdiction, and is consistent with internationally recognized standards” and pointed to its data policy.
When asked whether the company had indeed received such requests and actually provided a list of names in response that its algorithms believed were interested in those categories, the company let its answer above stand.
Such a response is truly frightening, as it demonstrates just how central a role Facebook is increasingly playing as a tool for law enforcement, intelligence agencies and repressive regimes to crack down on legitimate dissent or internationally recognized human rights. It also raises important questions about the company’s legal exposure if it knowingly assists a repressive regime in tracking down and executing citizens based on internationally protected statuses.
Putting this all together, instead of bringing the world together, social media is increasingly helping to elevate the voices tearing it apart. Its international reach, massive centralized data warehouse and algorithms that can divine the most sensitive and intimate elements of our lives are likely to become a go-to one-stop shop for the world’s intelligence agencies to spy on and influence the world, while governments increasingly leverage their legal powers to force Facebook to help them hunt down dissent and those different from themselves. Welcome to a world even Orwell could not have imagined.
|
6b5379a6faf00b158acabf43154e5d81 | https://www.forbes.com/sites/kalevleetaru/2018/07/21/twitters-great-bot-purge-and-can-we-really-trust-active-user-numbers/ | Twitter's Great Bot Purge And Can We Really Trust Active User Numbers? | Twitter's Great Bot Purge And Can We Really Trust Active User Numbers?
Photo illustration of the Twitter logo. (Jaap Arriens/NurPhoto via Getty Images)
Twitter made headlines this month with the revelation that in the final three months of 2017 the company had suspended more than 58 million accounts, followed by another 70 million in May and June and continuing at the rate of more than a million a day. Despite strenuously denying past reports of the number of bots on its platform, Twitter appears to finally be taking the matter seriously, launching an active campaign to root out the vast infestation of malicious bots that have polluted its service. What does its failure to do so until now tell us about the platform and our ability to trust the insights we get from it?
Twitter’s problem with automated “bot” accounts has long been a source of contention between the company and outside researchers. Countless reports have cataloged the extent of bot problems on the platform, only for the company to deny that the numbers are accurate while refusing to provide precise numbers of its own or to work collaboratively with those researchers to publish what it believes are more accurate results.
When asked this past October why the company doesn’t do more to address its bot situation, especially why it doesn’t work more closely with the research community to help it identify likely bots in a collaborative rather than confrontational effort, the company never responded to a request for comment. When asked last August about whether it continued to stand by its official statistics on the percentage of its active users that are bots and whether it agreed with calls for it to submit its statistics to external auditing, the company declined to comment beyond pointing to its previous blog post on the matter.
Thus, it was entirely unsurprising that when asked earlier this month why the company was only just now finally acknowledging the full extent of its bot problem and performing a large-scale purge, the company did not respond. In the world of Silicon Valley, silence is often better than acknowledging you have a problem.
One of the greatest challenges of drawing analytic insights from social media is the lack of a normative baseline against which results can be understood. This hasn’t stopped an entire industry of social media analytics firms from cropping up, offering every imaginable analysis of social data, but without the ability to understand even something as simple as how many of those voices we’re hearing from are real human beings, how can we even begin to trust the results we get?
A similar issue has long played out in the climate change space in which a temperature sensor that was formerly located far from human civilization in the midst of a forest clearing is now, after years of deforestation and residential expansion, marooned in the middle of an asphalt parking lot surrounded by tall concrete buildings. That sensor is still diligently reporting data, but the context in which its measurements may be understood is now totally different.
In similar fashion, social media is a chaotic and entirely fluid network of individuals and bots coming and going, in which the underlying demographics and consistency of the signals it provides are ever changing. A sentiment score of “-10” might have indicated excessive negativity a year ago, but today actually reflects the baseline average of conversation on that topic. Without those underlying baselines, however, it is impossible to normalize these indicators and recognize that a value of -10 no longer means what it once did.
The same problem confronts those attempting to understand popularity on the platform. If ten million people all tweet about something, does that mean it is a popular topic? Ten million tweets ten years ago meant a lot more than ten million today. Ten million tweets today might still mean the topic is quite popular if those are all from humans but could also mean the topic is merely popular with spammers and disinformation campaigns if all ten million tweets are from bots.
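To make the baseline problem concrete, here is a minimal, purely illustrative sketch (the function, window size and sample numbers are hypothetical, not any analytics vendor’s actual method) of normalizing a raw daily metric, whether a sentiment score or a tweet count, against a rolling baseline so that a value like -10 is read relative to recent history rather than as an absolute number:

```python
from statistics import mean, stdev

def rolling_z_score(series, window=90):
    """Normalize each daily value against a trailing baseline.

    A raw score of -10 only means "unusually negative" if it sits well
    below the mean of the preceding `window` days; the same -10 can be
    perfectly ordinary once the whole conversation has shifted.
    """
    normalized = []
    for i, value in enumerate(series):
        baseline = series[max(0, i - window):i]
        if len(baseline) < 2:          # not enough history to form a baseline yet
            normalized.append(None)
            continue
        mu, sigma = mean(baseline), stdev(baseline)
        normalized.append((value - mu) / sigma if sigma else 0.0)
    return normalized

# The same raw value of -10 on the final day reads very differently
# depending on what the recent baseline looks like.
calm_year = [0, -1, 1, 0, -2, 1, 0, -1, 0, -10]
angry_year = [-9, -11, -10, -8, -12, -9, -11, -10, -9, -10]
print(rolling_z_score(calm_year, window=9)[-1])   # strongly negative outlier
print(rolling_z_score(angry_year, window=9)[-1])  # roughly the new normal
```

The same raw number registers as an extreme outlier against a calm baseline and as entirely ordinary against one that has already drifted negative, which is precisely the distinction the raw metric alone cannot convey.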
For all the myriad metrics social media analytics companies offer today from these social firehoses, there is concerningly little attention paid to how accurate those metrics are, little interest in creating baselines to normalize them against and little recognition of the impact of bots and other non-organic accounts on those metrics, especially for contested topics.
Twitter itself seems to recognize the sensitivity of its user numbers. Its Developer Agreement and Policy has long included the restriction that users “Do not use, access or analyze the Twitter API to monitor or measure ... usage statistics ... including without limitation, monitoring or measuring … aggregate Twitter user metrics such as total number of active users, accounts, total number of Periscope Broadcast views, user engagements or account engagements.”
Put another way, Twitter makes it against its policy for any researchers to report how many active users or accounts there are on its platform.
Only the company itself is permitted to offer such numbers and they must be blindly trusted – any attempt to verify them yourself is considered a violation of its terms of use.
It would certainly seem that a company that cares about reducing the toxicity of its platform might start with banishing bad bots to ensure that the remaining conversation is organic and unamplified. Removing bots also removes a considerable amount of the cheap scaling factor of misinformation campaigns, dramatically raising their cost as nation states must rely on vast human teams like the Chinese model, rather than a small team amplified by a vast army of automated bots.
In the end, until Twitter speaks out about why it is finally taking the bot issue seriously, we will unfortunately never know what prompted this final acknowledgement and purge. The most likely scenario is that, as in all cases of Silicon Valley taking such issues seriously, the threat of government intervention finally forced the company to act. Will the bot armies simply change tactics and move to more expensive, but far more difficult to trace workflows? Absolutely. The cat and mouse game will continue, but as maintaining large bot armies becomes more expensive, it will also bring about changes in the ways they are used. Of course, this also raises the question of whether this is a temporary knee-jerk reaction to show lawmakers that the company is doing something, rather than a sustained and serious campaign. Only time will tell whether Twitter is serious about combating its bot problem and whether all of these efforts will make any difference. Welcome to our Orwellian online world which is increasingly becoming that “wilderness of mirrors.”
|
ba41fef8f12462a7d855e7158ccf1965 | https://www.forbes.com/sites/kalevleetaru/2018/10/10/no-google-isnt-trying-to-censor-the-web/ | No Google Isn't Trying To Censor The Web | No Google Isn't Trying To Censor The Web
Google logo. (Christophe Morin/IP3/Getty Images)
Earlier today an internal Google presentation summarizing a variety of perspectives, including my own, on the state of internet freedom began circulating on the web. The “leaked” presentation was quickly framed by some as a roadmap to censorship, evidence that the company was examining how to suppress certain viewpoints or crack down on internet freedoms. Yet, a closer read of the presentation suggests precisely the opposite: a company at the center of many of our debates about the future of the online world grappling with the existential question of the modern web: how to absolutely preserve freedom of speech, while at the same time preventing terrorists, criminals, repressive governments and trolls from turning this incredible force for good into a toxic and dangerous place that undermines democracy, advances terrorism, assists fraudsters and empowers hatred? How do we elevate the voices of the disenfranchised and give them a place at the table of global discourse, while not also awakening the trolls that seek to repress them? How do we empower the free expression of ideas and bring an incredibly diverse and divided world together, while embracing the differences that make us who we are? How do we reach across countries and cultures, across languages and landscapes, to have meaningful conversations about the future of our shared planet? Most importantly, how can technology play a positive role in helping facilitate the good, empowering civil discourse, while discouraging the bad, from terrorist recruiting to fraud to toxic speech and trolling?
As someone who writes and speaks extensively on the future of the web and how technology is both shaping society and being shaped by it, I am frequently contacted by organizations throughout the world seeking out my counsel as to where I see trends heading and my own perspectives on what I consider the best and worst approaches to society’s greatest challenges. Thus, it was not at all unusual when I was contacted last year by a research firm to interview me on behalf of an unspecified company on my views on the state of the web today. I had no idea the interview was for Google nor any idea of the broader scope of the research it fit into.
Reading the final report today for the first time alongside the rest of the web, my own take on it is very different than the framing that seems to have emerged in certain quarters. I see not a company charting a future of web censorship, but rather a company in its 20th year reaching out to experts across the world trying to make sense of what the web has become and what its own place should be in that future. To me it is extraordinary to see Silicon Valley actually listening, absorbing and reflecting on what the world is saying about the state of the web. This is the Valley as it should be – listening to its users and understanding the web from their vantage, rather than dictating its own vision for the future of our online world.
Stepping back and looking at the themes of the Google presentation, what one sees is essentially a summary of the state of the web today and the pragmatic reality that in the anarchy of the anything-goes free-for-all of the early web, the darkness began to eclipse the light.
In many ways the early web represented human society without rules, where the darkest corners of society were let loose to run free in a nightmarish dystopia. Terrorists, criminals, trolls, racists, sexists and just about any other form of hatred could run free, wielding their newfound anonymity and audience to unleash all the horror they had historically kept bottled up. Even otherwise ordinary leading figures of society, from politicians to journalists to scholarly elites, transform from Dr. Jekylls to Mr. Hydes when their hands touch a keyboard, suddenly free to vent their rage at any issue of the moment for all the world to see. Who wants to rant privately to a spouse or friend when you can achieve overnight fame with the right viral post that hurts someone so badly they sign off social media for good?
Like any other public forum, the web represents an impossible duality between giving every person a voice and preventing the loudest and most violent voices from drowning out the rest. The web of today is akin to shoving a few billion people into a gigantic football stadium, crammed shoulder to shoulder, handing them each a megaphone and hoping that a peaceful and productive society emerges.
At the heart of Google’s presentation is the question of just what the role of technology firms should be in helping restore the web to a more civil and thoughtful place of meaningful discourse and enlightenment. Reading the presentation, I see not a call for censorship, but rather an existential question of how tech firms can maintain their absolute commitment to freedom of speech, while at the same time stopping hatred and violence. In short, how to give the voiceless a voice without seeing them instantly silenced, how to ensure the light is not blocked out by the dark?
In our hyperpartisan and divided world of today, we tend to see this question as one of politics and beliefs, a fight between censorship and freedom, rather than what it really is: a question of how to place enlightenment before emotion.
Much of this conflict lies in the path Silicon Valley has historically chosen to combat the dark side of the web: targeting ideas rather than the expression of those ideas. I have long argued that the path to a more thoughtful and inclusive web is to fight words rather than ideas. While we may find the views and beliefs of some to be abhorrent and antithetical to all we hold dear or may be confronted with those who wish us harm, the moment we allow emotions to overcome reason, when clinical evidentiary discourse becomes overwhelmed with profanity-laden diatribes and threats of violence, we can never seek common ground. When we demonize others rather than seek to understand their grievances, even when those views do not permit our own existence, we lose our ability to see each other as sums of a great number of diverse parts, some pieces with which we cannot coexist and others with which we may find common ground. It is only by looking past our emotions that we see each other as fellow travelers through life, different from ourselves but part of our grand human society. Moreover, as we force society to base its conversations and conflicts on facts, rather than emotions and false narratives, we are able to defuse many conflicts based on misinformation.
Of course, in a world of infinite information, we can each find “facts” to support our views. But, if the words we speak to each other are clinical and evidentiary, rather than emotional and threatening, we can at the least go our separate ways, rather than be chased by trolls down the information superhighway.
Targeting words rather than ideas allows us to cull the visceral and emotional attacks that dehumanize and destroy our ability to empathize with others. Moreover, by avoiding targeting ideas, companies can avoid the slippery slope of government suppression that comes with the journey to censorship. After all, for every code of conduct that bans “terrorism” speech, there is a government ready to call any criticism of itself “domestic terrorism” that must be purged from the digital world.
In 2013, Google’s General Counsel David Drummond said it best when he called out the rising tide of state surveillance and censorship, noting that “Governments have learned in what might be the steepest learning curve in history that they can shape this global phenomenon called the Internet and in ways that often go beyond what they can do in the physical world and they’re doing so at an alarming pace.”
The threat of governments exploiting the harms of the online world to usher in a censored world built according to their needs comes out clearly in Google’s presentation, reflecting that internet companies are on the front lines of governments’ desire to censor and control the flow of information not only within their own borders but across all countries, exporting their worldviews globally.
In the end, the threat of government intervention is perhaps the greatest reason of all for internet companies to act now, before the repressive regimes of the world take it over under the guise of “fixing” the web.
Putting this all together, as I read Google's presentation for the first time today with the rest of the web, I saw a very different picture from how many portrayed it. I saw not a company presenting a vision for global internet censorship, but rather one warning of the dangers such censorship would bring. I saw a company asking the most important question of all: how to preserve the freedom of the web while protecting it from the darkest corners of society and to do so before it is too late and governments take it over. Most reassuringly, I saw a company that was actually listening to the answers it received.
|
923c310b7dddb9064f8a84972b04ebad | https://www.forbes.com/sites/kalevleetaru/2018/10/23/even-the-data-ethics-initiatives-dont-want-to-talk-about-data-ethics/ | Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics | Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics
Two weeks ago, a new data ethics initiative, the Responsible Computer Science Challenge, caught my eye. Funded by the Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies, the initiative will award up to $3.5M to “promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet.” I was immediately excited about a well-funded initiative focused on seeding data ethics into computer science curricula, getting students talking about ethics from the earliest stages of their careers. At the same time, I was concerned about whether even such a high-profile effort could possibly reverse the tide of anti-data-ethics that has taken root in academia and what impact it could realistically have in a world in which universities, publishers, funding agencies and employers have largely distanced themselves from once-sacrosanct data ethics principles like informed consent and the right to opt out. Surprisingly, for an initiative focused on evangelizing ethics, the Challenge declined to answer any of the questions I posed it regarding how it saw its efforts as changing this. Is there any hope left for data ethics when the very initiatives designed to help teach ethics don’t want to talk about ethics?
On its surface, the Responsible Computer Science Challenge seems a tailor-built response to a public rapidly awakening to the incredible damage unaccountable platforms have wreaked upon society. The Challenge describes its focus as “supporting the conceptualization, development, and piloting of curricula that integrate ethics with undergraduate computer science training, educating a new wave of engineers who bring holistic thinking to the design of technology products.” The opening paragraph of Mozilla’s announcement offers that “Today, computer scientists wield tremendous power. The code they write can be used by billions of people, and influence everything from what news stories we read, to what personal data companies collect, to who gets parole, insurance or housing loans. Software can empower democracy, heighten opportunity, and connect people continents away. But when it isn’t coupled with responsibility, the results can be drastic. In recent years, we’ve watched biased algorithms and broken recommendation engines radicalize users, promote racism, and spread misinformation.”
Kathy Pham, the Mozilla Fellow who is the co-lead of the Challenge, summarized the initiative’s hope for influencing the future of the digital world with “Students of computer science go on to be the next leaders and creators in the world, and must understand how code intersects with human behavior, privacy, safety, vulnerability, equality, and many other factors.”
Ethical design of digital systems encompasses an incredibly broad array of topics and disciplines, drawing deeply from outside the narrow technical realms traditionally associated with computer science curricula. In a world in which software can be built to do almost anything a programmer might imagine, the real question today is not what CAN we build, but rather what SHOULD we build?
Lurking beneath all of the questions posed by the Challenge’s announcement, from biased algorithms to exploited communications platforms to personal data use and misuse, are the basic ethical tenets of what constitutes acceptable behavior as we seek to document and correct these growing challenges to our digital world.
Specifically, when companies talk about a challenge like misinformation online, in public they describe it in abstract terms of “bad actors” and “bad messages” flowing through cyberspace. The problem is that in private, behind closed doors, companies must make hard ethical choices in their pursuit of mitigations and solutions. Fighting misinformation requires understanding it, which in turn requires mining the private personal information of users sending, consuming or engaging with it or actively altering the algorithmic filtering that forms the lens through which they see the world. Similarly, fighting suicide and self-harm is portrayed in public as a critical public good, yet behind closed doors, hard ethical choices have to be made regarding where the training data for those algorithms comes from, just how far to go in terms of actively manipulating a user’s emotions to push them towards or away from a particular emotional state and whether such interventions, if done wrong, could backfire in tragic ways.
Is it acceptable to waive ethical considerations in our pursuit of a greater good? In our desire to understand misinformation or fight self-harm is it acceptable to rescind formerly sacrosanct ethical rules like informed consent and the right to opt out of research, in the name of the greater good?
These are not idle questions. Much of our modern ethical infrastructure exists because the medical profession once chose the same path that our digital disciples are taking today. In the name of the greater good of society, it was once deemed acceptable to experiment on innocent individuals without their knowledge or consent, without their ability to opt out, without them having any control over their personal information or its sharing. Over time as a society we came to agree that performing experimentation and sharing the personal information of the few, while it could bring great benefits to the world, was not worth the harm it did to those few. In essence, the good of the many did not outweigh the harm of the few.
Over time that medical ethical stance came to guide and govern other fields of study including behavioral research as it transitioned to the early days of digital data collection. Believe it or not, there was a time when the ethical review boards (IRBs) of universities used to reject studies for failing to obtain informed consent or for performing research with the potential of real harm on individuals without their knowledge, consent or ability to opt out. Such standards are unfortunately now little more than a quaint memory of times gone by.
At the end of the day, nearly every question of computational ethics, no matter how lofty or abstract, comes down to the reality of the three basic tenets of where data comes from, what can be done with it and whether it can be actively or only passively collected. In our digital world, all ethics questions are data ethics questions.
For example, the Challenge specifically calls out algorithmic bias, a topic that impacts everything from photo tagging to criminal justice decisions. Yet, studying algorithmic bias requires studying the data that was used to train those algorithms, can involve using new data to test the decisions of those algorithms and may require interacting with the algorithms in ways that can harm others, such as using fake data that impacts decisions those algorithms make about innocent third parties, which can result in considerable economic, reputational and even life-threatening emotional and medical harm to individuals that never consented to becoming part of some researcher’s unauthorized experiment. In short, studying bias requires studying data and leads to those three basic tenets of data ethics.
At the same time, studies intended to draw attention to a particular area of data ethics can themselves cause unintended consequences, creating a roadmap for repressive regimes to leverage the work for the very purposes the researchers sought to prevent.
As raw data itself becomes the lifeblood of the modern digital world, more and more companies are built not around providing a neutral service like a word processor, but rather around the collection, exploitation and monetization of data, with services becoming merely portals through which to acquire and act upon such data.
Every single digitally-enabled object we interact with today, from our microchip-powered toaster to our internet-connected and camera-equipped television to our social media networks, all have the ability to collect reams of data about us. In the past, the focus was building a product, not collecting data. Televisions focused on giving us the best picture, toasters the best toast and word processors focused on making it as seamless as possible to write prose. Today, building that television involves a conversation around how many ways its cameras, microphones and internet connections can be used to profile its owner, that toaster increasingly phones home about when we eat meals and our culinary tastes, while that word processor builds a literary profile of how and what we write about. While each of these on the surface is a computer science ethics question, at their core, they are each questions about data ethics.
Speaking with young computer science graduates today about their startup dreams or their latest projects, it is the rare student indeed that spends the entire conversation focused on the problem they are trying to solve, rather than all the ways in which their new tool can collect monetizable data about its user. Indeed, it is not uncommon to hear a startup offer that even if its product fails, the user data it collects will be worth enough to sell to someone. Partially this is because we have as a society taught our future technical leaders that data monetization is the only path to success in today’s world, but also because academia is becoming increasingly detached from the tenets of privacy and right to self in the digital era.
This raises the question of whether it is simply too late for an initiative like the Responsible Computer Science Challenge to turn the tide and restore at least some semblance of a conversation around ethics to academia.
Students that work in faculty research labs or who continue on to graduate school will likely find themselves at some point in their careers asked to assist in the research or authorship of studies involving data collected without informed consent or which involves manipulating users without their knowledge or ability to opt out. While such research was formerly deemed egregious enough to warrant a statement of Editorial Concern and questions about whether it would be accepted for publication in reputable journals, today top academic journals are willing to accept such studies, with only one offering that it was a matter of “concern” but would not preclude publication and one no longer arguing that such work rose to the level of Editorial Concern.
Top funders, including US Government funding agencies, refuse to release information regarding the ethical considerations and justifications of work they support, while transparency-first funders pushing for open unfettered access to research outcomes stop short of demanding that such access should extend to the ethical justifications behind those studies. As new data initiatives launch, promising vast archives of data collected without informed consent and without the right to opt out, major funding agencies line up to support them without so much as a single public word of caution, while the initiatives themselves offer that every single question relating to ethics is “to be determined” down the road after they have figured out all the amazing things that can be done with the data.
Universities are increasingly waiving historical requirements on ethical data use, including permitting the use of stolen data resulting from previous hacks or breaches. Even those institutions that have traditionally enforced stringent secondary review on all research data use have not enforced those rules.
Students working in faculty labs or contemplating graduate school will therefore face pressure from all sides to participate or conduct research that violates traditional ethical norms. They will learn that the funding agencies that make academic research possible and the publishers that make or break academic careers have all embraced this shift in thinking about data ethics. A student today who makes a stand that they will never work with social media data nor any observational dataset that has not collected the individual informed consent of every single person for that specific research project will likely find that they are quickly at a substantial disadvantage in seeking funding or publication venues for their work, while their peers are able to acquire limitless datasets and funding and publish groundbreaking high profile papers that grant them considerable career success.
Given the amount of data-driven research that is increasingly occurring outside of computer science, as nearly every discipline of study grapples with the impact of data availability, there is also the question of whether focusing so narrowly on computer scientists will have the necessary impact.
Moreover, attempts to improve one area of data ethics can lead to problems in other areas. Take the push for replication, itself a reaction to concerns over falsification of research results and inadvertent collection and analytic error. As the push for replication has grown, it in turn has come into conflict with the notion that users should be allowed to have some level of control over their own personal private information.
Meanwhile those students who head for the commercial world find that the companies that are increasingly at the heart of the public discourse around data ethics frequently have even fewer data ethics protections.
This all raises the question, then, of how the Responsible Computer Science Challenge sees its effort as being able to turn back the tide on data ethics. Solving the kinds of issues it outlines, from algorithmic bias to exploitation of communications technologies by bad actors to mitigating misinformation campaigns to personal control of data, all comes back to those three core tenets of where data comes from, what can be done with it and whether it can be actively or only passively collected. An initiative focused on misinformation cannot escape the simple fact that at the end of the day everything it does hinges upon those three basic tenets of data ethics.
Could an effort like the new Challenge change how we talk about these topics and perhaps roll back the tidal wave of data ethics change that has overtaken academia’s hallowed halls? The Omidyar Network did not respond to repeated requests for comment. The Mozilla Fellow co-leading the challenge, Kathy Pham, offered only that these questions were in her view “out of scope” for the Challenge and neither she nor the Mozilla Foundation’s Editorial Lead responded to a request for more detail. Whether there is a failure to see the connection between abstract concepts like “algorithmic bias” and the very real ethical considerations facing those who actually study and combat such bias, or whether there is a failure to see the connection between fighting online misinformation and the ethical challenges of obtaining the training data and the impact of active algorithmic shaping of the information environment is unknown. But, perhaps that is the problem: the failure in computer science to see the connection between abstract ethical concepts and the very real and immediate data ethics challenges involved in studying and confronting them.
Either way, it is noteworthy that in a world in which data ethics has taken on such a role in the public discourse, a promising and well-funded data ethics initiative that could actually have a real impact on the views of our future technical leaders declined to comment when given the opportunity to present its vision for how it hoped to combat some of the largest data ethics challenges facing computer scientists today and the inextricable connection between ethics and data ethics.
At the end of the day, if even the data ethics initiatives won’t talk about data ethics, what hope do we have?
|
793e2685812cbdb047eba4cf2d9b89a3 | https://www.forbes.com/sites/kalevleetaru/2018/11/25/mapping-world-happiness-2015-2018-through-850-million-news-articles/ | Mapping World Happiness 2015-2018 Through 850 Million News Articles | Mapping World Happiness 2015-2018 Through 850 Million News Articles
A map of world happiness on September 12, 2018 as seen through the eyes of the world's news media... [+] through GDELT Kalev Leetaru
What might it look like to literally “map world happiness” over the past four years? To take more than 850 million news articles in 65 languages from every corner of the globe 2015-2018 and create a day-by-day animation that shows the average “tone” of worldwide news coverage about each location on earth, from very negative (bright red) to very positive (bright green). How might the cloud allow us to bring together mass machine translation, sentiment analysis, fulltext geocoding and visual document extraction with tools like Google BigQuery to offer a glimpse of how these emerging massive data analyses can allow us to peer inside the soul of global society?
In 2016 and 2017 I explored what it might look like to try and map global happiness through the eyes of the global media. The resulting annual maps reflected myriad stories of how the world had internalized the events of the year gone past. Yet, often the most interesting stories are told not through static images, but rather through the patterns of change over time. A brief burst of negativity in the aftermath of a natural disaster might be so short in duration that it is washed away by the rest of the year’s coverage when looking at an annual summary. In the day-by-day resolution of an animation, however, its instantaneous burst into existence is visible through a wash of red across the map, replaced by green just as quickly.
Creating a daily map of global happiness over nearly four years begins with an immense global dataset. My open data GDELT Project monitors worldwide news coverage in nearly all countries across 65 languages, updated every few minutes. It machine translates every article from those 65 supported languages into English and applies a vast array of natural language processing and deep learning algorithms to both the articles and their associated images. Each textual article undergoes textual geocoding, sentiment mining of thousands of emotions, entity extraction, topical coding, event extraction and a number of other processes.
Since the launch of its mass machine translation initiative in 2015, GDELT has monitored more than 850 million worldwide news articles in 65 languages spanning those four years. Its sentiment mining system has computed more than 2.3 trillion emotional assessments across thousands of distinct emotions from “abashment” to “wrath.” For the purposes of this analysis we will focus on its general purpose “tone” dimension that is specifically tailored for reflecting global news narratives, but we could regenerate this animation for any emotional dimension or topic imaginable.
GDELT’s geocoding system has identified more than 7.1 billion location mentions in those 850 million articles, resulting in more than 400GB of extracted geographic data. Using the power of the cloud, all of this processing occurs in realtime across the world in Google Cloud data centers in 16 countries.
How does one take 7.1 billion location mentions across 850 million articles totaling 400GB of geographic data and 850 million “tone” measures (out of 3.2 trillion total emotional assessments) and generate a final summarized analysis that captures the average positive/negative “tone” of each distinct location by day over four years as seen through the world’s news media?
All of GDELT’s data is stored in Google’s cloud-based BigQuery mass analytics platform. With BigQuery, it took only a single line of SQL and 2 minutes 14 seconds to process all of this data down into the final summary output totaling 70 million distinct location-day pairs that were rendered into the final maps.
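As a rough sketch of what that kind of per-location, per-day aggregation looks like, the snippet below submits a single GROUP BY query from Python using the BigQuery client library. The project, table and column names are illustrative assumptions for a generic mentions table, not GDELT’s actual schema or the exact query used here:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Hypothetical table layout, assumed for illustration only: one row per
# (article, location mention) with the article's computed tone score.
QUERY = """
SELECT day, location, AVG(tone) AS avg_tone, COUNT(*) AS mentions
FROM `example-project.news_dataset.location_mentions`
GROUP BY day, location
ORDER BY day, location
"""

client = bigquery.Client()                 # uses application default credentials
for row in client.query(QUERY).result():   # the aggregation runs entirely server-side
    print(row.day, row.location, round(row.avg_tone, 2), row.mentions)
```

Because the heavy lifting happens inside BigQuery itself, the client only ever sees the summarized location-day pairs, which is what makes collapsing hundreds of gigabytes of geographic data into map-ready output a matter of minutes.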
The final animation can be seen below.
Within the 2 minutes and 52 seconds of the animation you can see almost four years of world history go by in a mesmerizing and almost artistic display. Look closely and you can spot countless major stories in the bursts of red and green that wash across the map.
Perhaps the most powerful story captured in these images is one of human resilience.
In the United States, look for Hurricanes Matthew, Harvey, Irma and Florence, each of which resulted in a wave of intense red in the expected and actual landfall and affected areas. Yet, despite widespread devastation, notice how quickly the tone of coverage of those areas returns to normal. Even in the worst tragedies, people search for the slivers of good they can hold onto, the lives spared rather than the possessions lost.
Of course, some areas remain consistently red, reflecting a mixture of consistently negative international coverage and negative biases of its own domestic coverage. In the case of India, even Google historically autocompleted the search “why is Indian media so” into “why is Indian media so negative?” Yet, it also reflects the biases of the world’s presses, that when we look to the world outside our own borders, that coverage is often tainted by myriad biases. In doing so, it also reminds us of how important it is to see the world through local eyes.
It is important to remember that everything in the animation above was generated by completely automated computer algorithms and reflects the aggregate tone of global news coverage, rather than the local emotions of those living in those areas. Visual document extraction, machine translation, sentiment mining and fulltext geocoding all involve extraordinarily complex algorithms and the images above necessarily reflect a certain degree of error, but allow us to see the world at scales we could simply never before imagine.
Putting this all together, the animation above is a powerful example of how the modern cloud makes it possible to take a crazy idea to “map world happiness” and turn nearly a billion news articles into data and then with a single line of SQL turn those trillions of datapoints into a beautiful map. Ultimately, it shows how the almost limitless scale of the cloud and brute force power of cloud analytics platforms like BigQuery can harness planetary scale data sources like GDELT to allow us to peer into the very soul of global society and understand what makes us human.
I would like to thank Google for the use of Google Cloud resources including BigQuery.
|
bfa562b6b5ae1d67e3bef356eec49c7c | https://www.forbes.com/sites/kalevleetaru/2018/12/05/facebook-still-doesnt-understand-what-privacy-means/ | Facebook Still Doesn't Understand What Privacy Means | Facebook Still Doesn't Understand What Privacy Means
Facebook logo. (JOEL SAGET/AFP/Getty Images) Getty
In response to the release earlier today of the Six4Three collection of internal Facebook emails, Mark Zuckerberg issued a statement largely downplaying the significance of the emails that the company had fought so heavily to keep secret. His statement largely acknowledges Facebook’s focus on generating revenue and the fact that as a for-profit company it must be “economically sustainable.” One line in his statement, however, is striking for the way it juxtaposes Facebook’s perceived role in the online ecosystem with that of traditional commercial cloud vendors.
Midway through his missive, Zuckerberg offers a defense of his internal emails regarding the idea of quite literally selling access to user data: “we decided on a model where we continued to provide the developer platform for free and developers could choose to buy ads if they wanted. This model has worked well. Other ideas we considered but decided against included charging developers for usage of our platform, similar to how developers pay to use Amazon AWS or Google Cloud. To be clear, that's different from selling people's data. We've never sold anyone's data.”
The actual email in question, dated October 7, 2012 and authored by Zuckerberg himself, offers that “I've been thinking about platform business model a lot this weekend ... if we make it so [developers] can generate revenue for us in different ways, then it makes it more acceptable for us to charge them quite a bit more for using platform. The basic idea is that any other revenue you generate for us earns you a credit towards whatever fees you own us for using plaform [sic]. For most developers this would probably cover cost completely. So instead of every [sic] paying us directly, they’d just use our payments or ads products. A basic model could be: Login with Facebook is always free. Pushing content to Facebook is always free. Reading anything, including friends, costs a lot of money. Perhaps on the order of $0.10/user each year.” He goes on to clarify that “For the money that you owe, you can cover it in any of the following ways: Buys ads from us in neko or another system. Run our ads in your app or website (canvas apps already do this) Use our payments Sell your items in our Karma store. Or if the revenue we get from those doesn’t add up to more that the fees you owe us, then you just pay us the fee directly.”
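To make the mechanics of that proposal concrete, here is a minimal, purely illustrative sketch based only on the figures and offsets described in the quoted email; the function, its parameters and the sample numbers are hypothetical, and Facebook maintains the scheme was never implemented:

```python
def annual_platform_fee(read_users, ad_spend, other_revenue_to_facebook,
                        fee_per_user=0.10):
    """Sketch of the proposed 2012 model: reading user data costs roughly
    $0.10 per user per year, any revenue a developer already generates for
    Facebook (ads purchased, payments, store sales) is credited against that
    fee, and only the remainder, if any, is paid directly in cash."""
    gross_fee = read_users * fee_per_user
    credits = ad_spend + other_revenue_to_facebook
    return max(0.0, gross_fee - credits)

# A developer reading data on 500,000 users owes $50,000 gross; $30,000 in ad
# purchases plus $25,000 in payments revenue covers it, so nothing is paid in cash.
print(annual_platform_fee(500_000, ad_spend=30_000, other_revenue_to_facebook=25_000))  # 0.0
# With only $20,000 in credits, $30,000 would still be owed directly.
print(annual_platform_fee(500_000, ad_spend=20_000, other_revenue_to_facebook=0))       # 30000.0
```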
To Zuckerberg, it seems that “selling” user data is narrowly defined as offering up ZIP files for download priced per user, where the “product” being “sold” is a boxed-up copy of the data of a given set of users. Instead, bartering access to user data by providing data in return for monetizable benefits, such as licensing of a trademark in return for extended access, is not “selling” in Zuckerberg’s mind. Neither is the idea of requiring a developer to purchase a certain amount of advertising in return for getting access to data in a sort of “subscription fee” model where payment is made for membership rather than per file. Nor, apparently, does Zuckerberg see the idea of charging directly for access to each individual user file to be a form of “selling” so long as the access being sold is to be used in providing said user a better experience with that Facebook-connected application.
Zuckerberg is right to a degree in that a court of law might not consider these situations to rise to the level of “selling” data, but rather things like “bartering” or “subscription membership” fees or even “access” or “integration” fees.
To the average Facebook user, however, whether their data is being traded for something else of value, included as part of a bulk subscription or charged individually, such distinctions mean little. Their data is being “sold” no matter how you wish to twist your semantics.
It is also true that Facebook apparently never actually put these plans into effect. Though here again we have nothing more than the company’s words and very carefully constructed semantics. After all, given his statement, would Zuckerberg consider “bartering” to be a form of “selling?” It would be far more reassuring if the company would state for the record that beyond directly transactional advertising fees, it has never received anything, whether of value or not, in return for access of any kind to user data.
However, a more interesting aspect of Zuckerberg’s statement from earlier today was his comparison of Facebook’s developer platform to traditional commercial cloud computing services like Amazon AWS and Google Cloud. In Zuckerberg’s telling, cloud vendors like Amazon, Google, Microsoft and others rent their computing hardware for developers to run their applications on, so Facebook is no different.
In reality, however, developers who use Amazon’s cloud don’t get access to Amazon’s marketplace data, nor do developers on Google’s cloud get access to its web indexes, nor do those on Microsoft’s get access to Office data. Developers choose those clouds based on their hardware and software environments, not access to data. With the commercial cloud you bring your own data; the vendor merely rents you the hardware and software tools to use it.
In stark contrast, developers don’t choose Facebook’s application platform to gain access to specialized hardware or software tooling or AI systems to build systems entirely unrelated to the social media experience. A manufacturer isn’t likely to choose Facebook as the best cloud vendor to maintain their warehouse inventories or store their customer credit card data.
Developers choose Facebook’s platform to gain access to Facebook’s data.
To put it more directly, developers choose Facebook’s platform to gain access to the personal data of its two billion users which it holds in trust for them.
Seen in this light, Zuckerberg appears to be rationalizing the idea of charging access for user data by seeing the transaction as renting hardware that just happens to come preloaded with two billion people’s data. In essence, a commercial cloud company like any other in which developers are paying to rent hardware, but where the benefit is not specialized hardware capability or software systems, but rather unique access to the social media data of its users.
This is a particularly intriguing framing as it would allow Facebook to rationally and legally argue that the product of value being “sold” is indeed “hardware” rather than “data.” To its two billion users, however, such distinctions are meaningless.
More importantly and troublingly, however, this idea of Facebook as a cloud provider that just happens to come preloaded with user data represents an ultimate commodification of its users as nothing more than utility data points. A sales pitch based around each user account representing the intimate and very personal life of a very real physical human being, with pricing set accordingly, would escalate the concept of “selling” data but would at the same time acknowledge the immense value that individual likely places on their personal life.
Instead, by treating user data as merely part of a bulk package deal as part of a hardware rental contract, Facebook shows a frightening disregard for just how personal and intimate that data is that it is simply tossing over the fence to its developers.
In this light it is almost remarkable that Cambridge Analytica became the exception, rather than the norm.
It is also interesting, given Facebook’s emphasis on growth at all costs captured in these emails, that it did not view developers as merely extensions of its own programming staff. After all, developers create applications that drive traffic or content to Facebook and that help keep users engaged while they are there, luring them back and upping the total ad surface and duration to monetize.
Zuckerberg’s framing in both 2012 and today articulating developers as either profit or cost centers, rather than extensions of its own developer community is intriguing.
The company did not respond to requests for comment on the emails, though in public statements it has argued that the emails do not provide the whole context for its conversations, while declining to clarify what it believes is missing or to provide that absent context.
Putting this all together, Zuckerberg’s rationalization earlier today of those 2012 discussions by envisioning Facebook as a cloud computing provider that just so happens to toss user data into the mix is at once a hopefully naïve rendering and simultaneously an Orwellian commodification of the most personal and intimate digital lives of its two billion users. In the end, perhaps most of all it reminds us just how little we mean to Facebook and that after all these privacy scandals, it still doesn’t understand privacy.
|
bc1ca9cef684da92e3a7dc4603d03dbe | https://www.forbes.com/sites/kalevleetaru/2018/12/05/we-really-are-just-data-for-sale-in-facebooks-eyes/ | We Really Are Just Data For Sale In Facebook's Eyes | We Really Are Just Data For Sale In Facebook's Eyes
Facebook logo. (JOEL SAGET/AFP/Getty Images) Getty
The release today by the British Parliament of the trove of internal Facebook emails from the Six4Three litigation offers an incredibly rich and unvarnished look at Facebook during a period when user growth appeared to outrank any consideration over user privacy or safety. While the company has long outwardly projected an image of a benevolent public works project to connect the world and a careful and trustworthy steward of the intimate information entrusted to it, its internal emails and decisions laid bare in the email trove capture a quite ordinary commercial company bent on growth at all cost, discussing hiding its collection practices from users and even the possibility of directly selling access to user data. The only question now is whether policymakers across the world will rein in Facebook before it is too late.
The Six4Three emails offer a glimpse into the inner workings of one of the most powerful and influential technology companies that today controls what nearly a quarter of the earth’s population sees and says online.
While the company has been quick to note that the emails provide only a slice of its internal discussions and specifically revolve around conversations relating to the issues at the core of the Six4Three case, they nonetheless offer a wide-ranging look at how the company viewed the dueling tensions of increasing growth versus protecting its users.
Perhaps most noteworthy in all of the conversations captured in the emails is the lack of concern for users themselves. The word “safety” never makes an appearance. The word “privacy” appears only in the context of retroactively addressing the “privacy risks” of an update that was already preparing to ship to mass harvest user call and SMS logs on Android, as well as a mention in the context of encouraging users to share using more permissive privacy settings.
Even while the conversations at hand dealt primarily with partnerships, revenue and other business-related matters, rather than issues relating specifically to the philosophical vision of a company that aimed to connect the world, it is extraordinary that, for a company that has portrayed itself so heavily as a benevolent public good, so little attention is paid to the issue of what is best for users rather than what is best for the company.
One stunning revelation in the emails is that the company actually entertained the idea of quite literally selling user data. None other than Zuckerberg himself in October 2012 offered the idea that “Pushing content to Facebook is always free. Reading anything, including friends, costs a lot of money. Perhaps on the order of $0.10/user each year.”
Developers could “pay” for the data by running advertisements or selling goods and whatever revenue was left over would be paid directly to Facebook in cash.
While the company denies that it ever implemented this idea with any of its partners, the very idea that the company’s most senior leadership would actually even consider it worthy of writing in a strategic vision email as a serious possibility for the company is nothing short of extraordinary.
Of course, even here we must simply blindly trust the company that it never sold user data in any fashion. There is no external neutral third party that can attest to Facebook’s statements.
Moreover, as the emails show, the company has gone to great and contorted lengths to structure arrangements that its subsequent public statements described truthfully, but incompletely. It is specifically noteworthy that in its denials the company has maintained that it did not “sell” user data, rather than a more encompassing denial that it has never structured an agreement outside of advertising sales that benefited it economically from its user data.
In short, we have no way of knowing whether Facebook has crafted other creative means to benefit from user data that it has not revealed to date and that would not legally constitute directly “selling” data, such as bartering.
Even without selling user data, there are discussions in the emails about cutting off access for companies that did not purchase a large amount of advertising on the platform, showing the number of different ways the company explored monetarily profiting directly from user data beyond purely advertising. In another case it discussed trading extended privileged access to user data in return for a license to use another company’s trademark, in essence bartering the data.
The emails show a company that was not afraid to leverage its privileged access to users’ private lives to identify acquisition targets and crush competition. Zuckerberg himself approved terminating competitor Vine’s access to prevent it from growing into a more formidable threat to Facebook’s business.
Facebook even went as far as to use a VPN application it had acquired to secretly monitor large-scale user habits to identify what companies to purchase, including WhatsApp, while prioritizing competitive threats to be destroyed.
Despite the company’s assurances to the public and policymakers that users are fully aware of the rights they grant Facebook to surveil and exploit their personal data, the emails also make clear its efforts to circumvent informed consent.
In one thread from February 2015, the company discusses ways of mass harvesting users’ private call and SMS logs without them being shown an Android permissions request that would make them more aware of what it was doing. When the company’s mass harvesting became known earlier this year, it argued that users had clearly and knowingly granted it permission to do so. Instead, as these internal emails show, the company went to extraordinary lengths to try and avoid users being aware of what it was doing with their personal information.
The mere fact that Facebook even discussed finding ways to harvest user data without their permission or awareness, regardless of what it ultimately implemented, reminds us of just how frightening the company’s privacy stance truly is.
Moreover, the company went as far as to explicitly acknowledge for the record that it was placing its own corporate growth ahead of user privacy, offering that it was a “pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.”
In his response to the release of the emails, Zuckerberg reiterated the fact that for all its public portrayals of lofty and benevolent ambitions, it is at the end of the day a for-profit company that must monetize its users. He particularly emphasized that the emails reflected that Facebook must be “economically sustainable.”
The company did not respond to requests for comment on the emails.
Putting this all together, despite desperately fighting the release of the emails and refusing to provide any additional context to the activities they document, Zuckerberg offered that “I understand there is a lot of scrutiny on how we run our systems. That's healthy given the vast number of people who use our services around the world, and it is right that we are constantly asked to explain what we do.”
In the end, perhaps Zuckerberg should take his own words to heart and offer greater transparency about the decisions the company makes each day that affect two billion people. If the company believes that it is ethically and morally sound to quite literally sell user data at ten cents an account in return for keeping the service free to users and economically sustainable, that should be discussed publicly, with its two billion users given a chance to vote on it, rather than decided yea or nay in secrecy by its digital dictator. Making its two billion users more aware of how their data is monetized or proposed to be monetized would allow them to make informed choices about whether the benefits they receive from the free service outweigh the frightening and Orwellian ways it is exploiting them. Zuckerberg complains that the media offers an unfair accounting of his company, yet he refuses to offer them a competing narrative beyond “trust us.” Perhaps it’s finally time to stop trusting Facebook before it’s too late.
|
84b2ad385e7296f45240e7dfa80d121e | https://www.forbes.com/sites/kalevleetaru/2018/12/15/what-does-it-mean-for-social-media-platforms-to-sell-our-data/ | What Does It Mean For Social Media Platforms To "Sell" Our Data? | What Does It Mean For Social Media Platforms To "Sell" Our Data?
One of the most existential questions of the modern web is how online companies should generate revenue. The web of today reflects primarily one answer to that question: a web where everything is free, but we pay for it with our privacy. The web has become a dystopian surveillance state in which companies stalk their unsuspecting victims across the web, extracting maximal profit by stripping away any shred of privacy or dignity, socializing the risk of data breaches onto the user while privatizing all of the monetary benefit of exploiting them. Social media platforms often generate the majority of their revenue by selling hyper-targeted advertising based on algorithmically mining every second of their unwilling and unwitting users’ lives. Yet those same companies go to great lengths to argue that they are not “selling” their users’ data. What does it mean for companies to “sell” our data in today’s data-hungry world?
The question of whether social media companies “sell” their users’ data was thrust back into the spotlight last week when a trove of internal emails from Facebook’s senior executives was released by the British Parliament. Among them was a chain featuring none other than Mark Zuckerberg himself proposing the idea of actually charging a monetary fee for developers to access user data, which they could pay either by purchasing advertising, selling items or simply writing Facebook a check. While the company took great pains to emphasize last week that it never ended up following through with the proposal, the mere fact that the company’s founder had openly discussed quite literally charging a per-user fee to access user data drove home how the company views its users as monetizable entities to be exploited by a for-profit business, rather than as people served by a benevolent company trying to connect the world and generating revenue only where it would not conflict with its public-good vision.
Moreover, Zuckerberg’s framing of advertising revenue and writing a check as equivalent demonstrates that the company sees little difference between selling access through advertising and selling access by check.
It is worth noting the stark difference between Facebook’s internal descriptions of its two billion account holders compared with how it describes them in its public materials. In public statements about commercializing account holders, Facebook goes to great lengths to use humanizing language like “people,” “person” and “anyone.” Yet in the 250 pages of internal emails released last week that show the unvarnished genuine language used by Facebook’s executives internally to describe account holders, the word “customer” shows up only once, in the context of the Royal Bank of Canada’s “customers.” The word “people” shows up in only two brief passages, both in the context of account holders providing content (whether posts or engagements) of value to Facebook. The word “human” never makes a single appearance. In contrast, the word “user” appears throughout the 250 pages, most notably when Zuckerberg himself refers to Facebook’s two billion account holders as “users” when proposing that the company could directly charge developers a fee of “$0.10/user per year.”
It seems that Facebook’s account holders are not “customers” since that would afford them a certain level of dignity and a relationship based around the company providing them a valuable service in a mutual transaction. They are only “people” when it comes to public statements and in the context of extracting monetizable behaviors from them. The rest of the time they are dehumanized through the term “user” to remind us that we are merely datapoints and login accounts to Facebook, not real human beings whose lives are being exploited and monetized for its benefit.
Over the last two weeks the company’s executives have sought to draw a distinction between monetizing users through advertising and monetizing them through boxing up their data for download like the data brokers they formerly purchased from.
Zuckerberg offered that “we’ve never sold anyone’s data” while Facebook’s Vice President of Advertising Rob Goldman argued “we don’t sell peoples’ data. Period. That’s not a dodge or semantics, it’s a fact. We don’t sell or share personal information.”
Of course, it is important to caveat that Facebook has previously argued that providing access to outside companies did not constitute “sharing” so long as it considered them to be “partners.”
Are Zuckerberg and Goldman right that Facebook does not “sell” user data? The answer revolves around what it means to “sell” data.
When we think about “selling” user data we typically think of a company boxing up the personal information of its customers and selling them as downloadable ZIP files with per user and flat rate pricing. Indeed, the enormous world of data brokers exists to do precisely this. Many companies we do business with, from the grocery stores and brick and mortar stores we shop at to the newspapers and magazines we subscribe to, box up their subscriber information and sell those lists for a profit.
Verizon reminded us this summer that even paying a subscription fee doesn’t mean a company won’t turn a side profit by further monetizing its customers through selling ads and even outright selling their data. For all the naïve talk about how a fee-based Facebook would end surveillance, Verizon reminds us that even those companies that charge a fee for their services will still monetize their users on the side.
Walgreens offers a useful comparison to Facebook’s definition of “selling” user data. While most Americans likely believe that their drug prescriptions are protected from any form of exploitation under medical privacy laws like HIPAA, it turns out that those laws permit pharmacies like Walgreens to monetize their users through advertising. Specifically, pharmaceutical companies can pay Walgreens to send an advertisement for a drug trial to all its customers that suffer from a particular medical condition. The pharmaceutical company itself is never given a list of patients, it merely hands the ad over to Walgreens and pays a fee and Walgreens sends the mailers itself.
For all intents and purposes, Walgreens has created an offline physical mail advertising model that mimics the hyper targeted digital ads that clog the online world. Like Facebook, the company is careful to argue that it does not “sell” its customer data, it merely sells access to those customers to show them advertisements. To a Walgreens customer that receives a mailer on behalf of a third-party company they’ve never heard of targeting them because of a prescription they filled at Walgreens and thought was confidential, the distinction between “selling data” and “selling access” is likely unimportant. As far as they are concerned, Walgreens sold their data. Notably, when asked why the company does not explicitly inform customers at purchase time that it will use their prescriptions to sell access to them, the company noted that under HIPAA, selling access to customers does not “require patient authorization.”
Facebook is therefore in good company when it comes to businesses drawing a distinction between selling access to their users for advertising versus boxing up their data and offering downloadable ZIP files.
Just what is Facebook selling? In his statement last week, Zuckerberg compared Facebook to a cloud computing company like Amazon or Google. Yet developers turn to cloud vendors to purchase access to unique hardware and software environments, not data. As an Amazon, Google or Microsoft customer, you are renting empty computers to fill with your own data; the cloud companies don’t offer access to their customer data of any kind.
In contrast, Facebook is in reality renting access to data. Its sole value proposition to developers is access to its two billion users. A giant manufacturer building solar power arrays doesn’t turn to Facebook to rent petabytes of storage and tens of thousands of processors and GPUs to run simulations and neural models. It turns to an actual cloud computing vendor.
The developers that turn to Facebook are there for one sole purpose: to reach Facebook’s two billion users.
Does that count as Facebook “selling” the data of two billion users? It certainly constitutes “selling access.”
To put it another way, if Facebook genuinely believes that developers view it as a traditional cloud computing vendor and that it is not “selling” its users’ data, then it could simply shut down all of its user APIs and allow developers to run their applications on Facebook without any ability to publish, consume or otherwise interact with its users. If access to users is genuinely not any part of Facebook’s value proposition to developers, then this would not have the slightest impact on usage of its platforms.
After all, Amazon has a robust cloud computing business without offering its cloud customers any access to the personal private information of its Amazon.com customers.
In arguing that Facebook’s business model does not count as “selling data,” the company offered the defense that “It’s how the internet works, not just how Facebook works.” In short, when asked whether its business model was morally defensible, the company responded not by arguing that it was, but rather by arguing that “everyone else does it” so it is ok for it to do it too.
This is noteworthy because it is exactly the same defense it offered me when I asked about its former practice of purchasing intimate data about its two billion users from commercial data brokers. Asked about the ethics of doing so and especially the opacity around its practices and its failure to provide users with more information about what was happening with their data, the company argued that everyone else does the same thing so it is ok for it to do it too.
Of course, the idea that Facebook does not “sell” its data belies the fact that it is often compelled by governments to “provide” its users’ private intimate data under court order.
In addition to merely “selling access” to advertisers and developers to reach its two billion users, Facebook also makes data available in other ways. Demographers wishing to create maps of specific combinations of traits and interests or understand their temporal changes can use advertising campaigns to create population scale insights.
Similarly, advertisers running ads that link back to their sites know that every person following that link possesses the specific traits the ad targeted. An ad targeting Catholic women aged 25-30 who are interested in football will result in clickthroughs to the advertiser’s site from precisely those individuals.
A New York Times editorial this week argued that such clickthroughs constitute a form of data sale: advertisers can pay Facebook to receive traffic from specific demographics, so the resulting IP addresses that visit their site are known to belong to users with those traits. In the author’s view, this in effect constitutes a form of external data sale.
In other words, if Facebook considers giving a data broker a phone number and getting back demographic selectors about that person to be “buying” data, then a company paying Facebook to get IP addresses and demographic selectors would seem to fall under a similar category of a data transaction.
Facebook pushed back against the editorial, arguing that because advertisers only have an IP address and not the person’s name or contact details, such data is in effect “anonymous.” In essence, in Facebook’s view, as long as a person’s name and contact information are not attached to a record, their IP address alone is not a unique identifier. As Goldman put it, “what makes it anonymous is that you won’t know who those people are,” only their IP address.
In reality, there are countless ways outside companies can reidentify an IP address to a specific user. There are numerous data brokers that sell the most recent IP address used by each person in their database, tying IP addresses to the address information those users enter into sites across the web, such as ordering products or entering surveys. Though, as with all data broker datasets, it is unclear how updated or accurate this information is.
Larger advertisers, including data brokers themselves, already track their customers across the web using cookies and know the most recent IP address each of their customers used to access their website or mobile app. They can run tens of thousands or even millions of ad campaigns on Facebook targeting each demographic of interest and simply cross reference the IP addresses of the clickthroughs from each campaign against their own records of which IP address is associated with each customer. While imperfect, such linking is no more error prone than the processes data brokers and companies use already.
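To make the mechanics concrete, here is a minimal sketch of that cross-referencing step, assuming an advertiser has exported the clickthrough IP addresses for each demographically targeted campaign and already maintains its own customer-to-IP records; all names, fields and addresses below are invented for illustration.

```python
# Hypothetical illustration: join ad-campaign clickthrough IPs against an
# advertiser's own customer records. All data and field names are invented.

# IPs that clicked through from each demographically targeted campaign.
clickthroughs_by_campaign = {
    "segment_dog_owners_nyc": {"203.0.113.14", "198.51.100.7"},
    "segment_luxury_coffee": {"203.0.113.14", "192.0.2.55"},
}

# The advertiser's existing records: the last IP address each known
# customer used on its own website or app (gathered via cookies/logins).
customer_last_ip = {
    "cust_001": "203.0.113.14",
    "cust_002": "192.0.2.99",
}

# Invert the customer table so IP addresses can be looked up directly.
ip_to_customer = {ip: cust for cust, ip in customer_last_ip.items()}

# Append the demographic segment implied by each campaign to any customer
# whose known IP appears among that campaign's clickthroughs.
enriched = {}
for segment, ips in clickthroughs_by_campaign.items():
    for ip in ips:
        customer = ip_to_customer.get(ip)
        if customer:
            enriched.setdefault(customer, set()).add(segment)

print(enriched)  # cust_001 now carries both of the targeted segments
```

As the article notes, such linking is imperfect since addresses are shared and reassigned, but it requires nothing beyond data the advertiser already holds.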
Even if a clickthrough is not an existing customer, the demographic information implied by that clickthrough can be used to vastly enrich the customer’s website experience and purchasing record.
Imagine a user visits the site of a consumer products company out of the blue. The company knows absolutely nothing about that user other than inferring their geographic location from their IP address and estimating their rough demographics and purchasing power from the kind of computer and browser they are using. Now, imagine instead that that user came through a referral from the company’s Facebook ad targeting female millennial Bernie Sanders supporters in New York City who rent, have a dog, work in the financial industry and love luxury coffee. The company now knows quite a lot about that person and can tailor the landing page to present a hand-selected set of extremely relevant products. If the person purchases a product, they can then append all of those demographic selectors from Facebook to the customer’s profile to use for future customization and marketing.
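As a hedged sketch of how the referral itself can carry those selectors, the advertiser can simply encode its own targeting choices into the ad's landing-page URL and copy them onto the buyer's profile at purchase time; the parameter names and segment labels below are hypothetical and are set by the advertiser, not sent by Facebook.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing-page URL attached to one of the advertiser's own
# hyper-targeted campaigns; the query string simply echoes back the
# segments the advertiser chose when it built the campaign.
referral_url = (
    "https://shop.example.com/landing"
    "?utm_source=facebook&utm_campaign=nyc_millennial_renters"
    "&segments=female,millennial,nyc,renter,dog_owner,finance,luxury_coffee"
)

params = parse_qs(urlparse(referral_url).query)
segments = [s for s in params.get("segments", [""])[0].split(",") if s]

# If the visitor later buys something, every segment the campaign targeted
# can be appended to that customer's permanent profile.
customer_profile = {"customer_id": "cust_001", "segments": set()}
customer_profile["segments"].update(segments)

print(sorted(customer_profile["segments"]))
```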
Does the fact that this third-party company received demographic selectors from Facebook that it used to customize its site and enrich its customer record mean that Facebook “sold” it that data? The company would not have received that demographic information from Facebook without paying for it.
At the same time, Facebook’s argument is that since the data they sent to advertisers is identified by IP addresses rather than mailing addresses, phone numbers or person names, it should be considered “anonymous” data and thus doesn’t count as “selling” data.
Under this justification, Facebook could box up the totality of two billion users’ personal data and sell it at $0.10 a user per year as downloadable ZIP files so long as those ZIP files have the person’s name, address and phone number stripped out and use only their IP address as an identifier.
As any data scientist or privacy expert realizes, however, the wealth of online data available means that an IP address is frequently enough to connect an “anonymized” record back to a real person.
Arguing that a customer record is “anonymous” and thus does not constitute “selling” data merely because it uses an IP address instead of a phone number as an identifier is simply an absolute falsehood in today’s data-drenched world. Facebook of all companies knows this.
Even if a record was stripped of all identifiers, including its IP address, unique combinations of characteristics could be used to readily reidentify customer records by comparing them against other holdings like data broker archives. In essence, the unique pattern of our behaviors acts as the equivalent of a digital fingerprint that can be used to reidentify us merely from our behavioral traces.
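A minimal sketch of that fingerprinting logic, assuming an "anonymized" record with identifiers stripped and a separate broker-style archive that still carries names; the data is invented, and real reidentification works the same way at far larger scale.

```python
# Invented data: a stripped "anonymous" record and a data-broker-style
# archive that still carries names. If the trait combination matches
# exactly one archive entry, the "anonymous" record is reidentified.

anonymous_record = {"zip": "10013", "age": 29, "occupation": "financial analyst",
                    "pet": "dog", "interest": "luxury coffee"}

broker_archive = [
    {"name": "A. Smith", "zip": "10013", "age": 29,
     "occupation": "financial analyst", "pet": "dog", "interest": "luxury coffee"},
    {"name": "B. Jones", "zip": "10013", "age": 29,
     "occupation": "teacher", "pet": "cat", "interest": "running"},
]

traits = ("zip", "age", "occupation", "pet", "interest")
matches = [row for row in broker_archive
           if all(row[t] == anonymous_record[t] for t in traits)]

if len(matches) == 1:
    print("Reidentified as:", matches[0]["name"])
else:
    print(f"{len(matches)} candidates; combination not unique enough")
```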
Facebook’s stance that stripping common identifiers is sufficient to render data “anonymous” even with an IP address attached helps explain its view of its academic research initiative Social Science One: that it is acceptable to make its two billion users’ private, intimate information available to academics across the world so long as the records are “anonymous.”
Asked last month about its perspective on data sales, the company did not respond. It also did not immediately respond to a request as to how it views the threshold of anonymity of user data.
Putting this all together, in the end companies like Facebook may attempt to draw legal distinctions between “selling data” and “selling access” and to argue that IP addresses still constitute “anonymity,” but the reality is that the general public sees all of these monetization behaviors as the same exploitation of their personal privacy for monetary gain. Instead of arguing semantics, companies should take genuine steps towards regaining the trust of their users, starting with coming clean about all of the ways they exploit their users’ data and all of the ways they have considered using their data and no longer hiding behind arcane legal definitions. In the end, companies that ask the public to trust them must earn that trust.
|
40a5983455e40af2ce201ad7213abc75 | https://www.forbes.com/sites/kalevleetaru/2018/12/25/cybersecurity-is-providing-information-and-solutions-not-selling-fear/ | Cybersecurity Is Providing Information And Solutions Not Selling Fear | Cybersecurity Is Providing Information And Solutions Not Selling Fear
The vulnerabilities of the digital era have become increasingly mainstream. What was once exclusively the domain of specialized security professionals is now on the mind of the everyday website owner worried about hacking, site defacement, data theft, malware insertion, DDoS attacks and any number of other genuine threats to their business and the safety of their customers. Unfortunately, as cybersecurity has become a hot public-facing field, companies are increasingly harnessing this genuine fear to commercialize and sell fear itself. Instead of treating cybersecurity as a legitimate business threat to be solved through information and knowledge-driven best practices, such as continual patching, vulnerability testing and so on, companies are selling opaque black-box “vulnerability” measures that tell site owners they are vulnerable to hacking based on one of hundreds of secret indicators that they won’t share, but that if the site owner purchases their service they will be magically protected. Hosting providers offer free security scans but charge their customers to learn more when those reports suggest there is a problem like malware. Most recently, an experience with SiteLock and Domain.com this past week offers a lesson about the state of consumer-facing website security.
SiteLock is one of a growing stable of website monitoring companies that sell what amount to daily vulnerability scans (along with other services) that crawl a customer’s website and look for outdated libraries, vulnerable plugins or CMS configurations, the presence of malware and other security risks. Their primary target tends to be the consumer market where users typically have little understanding of basic cybersecurity and may simply configure a basic WordPress installation, install dozens and dozens of plugins, leave default passwords and backdoors open to make it easy to update the site and then leave it as-is and unpatched for years.
For these site owners, information-based website vulnerability scanners can be a useful and effective tool, offering customized guidance and services to increase the safety of their website.
Public cloud companies like Google and Amazon provide a wealth of vulnerability scanning options, from their own in-house offerings through a rich and vibrant third-party ecosystem. The consumer and developer-oriented versions of these scanners operate along the lines of a word processor’s spell or grammar check, walking a user step-by-step through each issue found, explaining what was found, the risk it presents, the severity and immediacy with which it needs to be addressed, and the actual step-by-step instructions for fixing it or even a button to fix it automatically, along with potential ramifications of the fix, such as breaking other portions of the site. Some scanners can even perform full-fledged fuzzing and vulnerability testing of a site’s dynamic elements, searching for common vulnerabilities like SQL injection, improper Unicode sanitization, invalid input assumptions and so on.
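For contrast with the opaque reports discussed below, here is a minimal, hedged sketch of the kind of transparent check such scanners run: it probes a single URL parameter with a classic SQL-injection tell and reports exactly what was found and how to fix it. This is an illustrative toy, not any vendor's actual scanner, and should only ever be pointed at a site you own.

```python
import requests  # third-party HTTP library

# Common database error strings that leak when a parameter is not sanitized.
SQL_ERROR_SIGNS = [
    "You have an error in your SQL syntax",
    "unterminated quoted string",
    "SQLSTATE[",
]

def probe_sql_injection(url: str, param: str) -> None:
    """Send a lone quote in one parameter and report exactly what happened."""
    resp = requests.get(url, params={param: "'"}, timeout=10)
    hits = [s for s in SQL_ERROR_SIGNS if s in resp.text]
    if hits:
        print(f"[HIGH] {url} param '{param}': database error leaked ({hits[0]!r}). "
              f"Fix: use parameterized queries and disable verbose DB errors.")
    else:
        print(f"[OK] {url} param '{param}': no obvious SQL error signature.")

# Example (only against a site you control):
# probe_sql_injection("https://example.com/search", "q")
```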
The emphasis of these scanners is on providing users clear, concise and actionable detail on exactly what was found, its severity and precisely how to fix it.
In contrast, many of the consumer-oriented vulnerability scanners outside the cloud world tend to lump in vague business threats with genuine immediate security risks. For example, last year I received a call out of the blue from an unknown phone number claiming to be Network Solutions, telling me that it had run a vulnerability scan on one of my websites and found it to be at elevated risk of malware infection and that if infected my site would be immediately suspended. In the end, it turned out to be merely a poorly executed sales pitch from Network Solutions to sign its customers up for services from a company called SiteLock from which it receives a commission.
SiteLock provided me a report at the time indicating my site had a Medium risk of being infected with malware because it had triggered one of their 500 proprietary, secret indicators, one of which is that the site receives a high amount of traffic. While it is legitimate to argue that a high-traffic site is of greater interest to hackers than a zero-traffic site, most businesses would likely welcome high volumes of customer traffic to their websites. In other words, warning a website owner that their site might be hacked because it is a popular site is perhaps useful to remind them of the dangers of popularity, but it isn’t immediately actionable.
More to the point, unlike the vulnerability scanners provided by the commercial cloud companies, SiteLock’s report merely indicates that a site has triggered one of its 500 secret indicators, but when asked for more detail on what those indicators are or which were triggered, the company refused to provide any information of any kind, arguing its indicators are proprietary. In short, it operates as an opaque black box that provides neither actionable insights to help site owners take immediate corrective action nor any detail on the precise issues SiteLock believes make their site at risk.
Instead, customers are told that if they want more information about what’s wrong with their site or any detail on what needs to be fixed, they can purchase a subscription to SiteLock’s services or hire its professional services team.
This past Saturday I heard from SiteLock once again, this time through another hosting provider I use, Domain.com. At 3 PM that afternoon I received every website owner’s worst fear: an email alert titled “Your SiteLock Update: Malware Detected on [website]” with the name of my website and a message saying that as part of my hosting package provided by Domain.com “a SiteLock website scanner” is applied to my site and that “during a recent scan of [website], malware was detected on your website.” The email provided no further detail, only a phone number and email address to contact for more information.
Given the email’s utter lack of any detail of any kind about my account that might allow me to verify its authenticity and especially given that I am not a SiteLock customer, I initially assumed the email to be a phishing attempt. However, upon investigating the email headers, I was able to confirm that the email was indeed sent from domains under the control of SiteLock. Additionally, Domain.com lists SiteLock as one of the products it offers, while SiteLock lists Domain.com as a hosting partner.
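For readers wanting to repeat that kind of triage, here is a hedged sketch of the header check using Python's standard email module: it parses a pasted raw message and surfaces the sending domains and authentication results. The header values shown are placeholders; unauthenticated headers can be forged, so the SPF/DKIM results recorded by your own mail provider are the part worth trusting.

```python
from email import message_from_string

# Placeholder raw message; in practice, paste the full source
# ("show original") of the suspicious email here.
raw = """\
Return-Path: <alerts@example-sender.com>
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=example-sender.com;
 dkim=pass header.d=example-sender.com
From: "Security Alerts" <alerts@example-sender.com>
Subject: Malware Detected
To: owner@example.org

Body of the alert...
"""

msg = message_from_string(raw)
auth = " ".join((msg["Authentication-Results"] or "").split())

print("From:        ", msg["From"])
print("Return-Path: ", msg["Return-Path"])
print("Auth results:", auth)
print("SPF pass:    ", "spf=pass" in auth)
print("DKIM pass:   ", "dkim=pass" in auth)
```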
The site in question that SiteLock claimed to have found malware on is a single-page, single-line HTML redirection page that simply redirects visitors to a different website. The redirection page is actually automatically generated by Domain.com with no ability for end users to override it, making it even less likely to be compromised other than through a compromise that affected all of Domain.com’s hosted websites.
Upon verifying that the page appeared to be unchanged and did not contain any malware and logging into my Domain.com console to confirm that the settings remained unchanged, I was stumped. How could SiteLock be detecting malware on a redirection page that consisted of all of a single line of HTML and which is managed by Domain.com itself?
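One way to triage such an alert independently is to fetch the live page and confirm it contains nothing beyond the expected redirect; a hedged sketch follows, where the URL, the expected redirect marker and the red-flag tokens are placeholders to adapt to your own site.

```python
import requests  # third-party HTTP library

SITE_URL = "https://example.org/"            # placeholder for the flagged site
EXPECTED_MARKER = 'http-equiv="refresh"'     # what the auto-generated redirect should contain
SUSPICIOUS = ["<script", "<iframe", "eval(", "base64,"]  # crude red flags, not a real malware scan

resp = requests.get(SITE_URL, timeout=10, allow_redirects=False)
body = resp.text

print("Status:", resp.status_code, "| Length:", len(body), "bytes")
print("Expected redirect marker present:", EXPECTED_MARKER in body)
for token in SUSPICIOUS:
    if token.lower() in body.lower():
        print("Suspicious token found:", token)
```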
After verifying that the phone number in the SiteLock email was the same as listed on SiteLock’s website and that the number remained unchanged from previous snapshots of SiteLock’s site available in web archives (to ensure that the site had not been compromised with its contact information changed), I called the number desperate for more information on the malware they had detected. After waiting on hold for nearly half an hour, I finally gave up. Despite verifying that SiteLock claimed to have 24/7/365 support, I just assumed that perhaps that information was out of date and that they were closed for the evening.
I then spent many hours looking for any possible way the site could have contained malware, walking through every conceivable scenario, to no avail. I could not see any way the site could contain malware, yet here was SiteLock claiming it did.
Working under the assumption that perhaps Domain.com had had an issue with its redirection pages, I emailed Domain.com’s support staff with an urgent security alert noting that SiteLock appeared to have detected malware on their automatically generated redirection page.
Two days later I still have received no response, illustrating that for the major hosting companies, even an urgent cybersecurity notification of possible major system compromise does not warrant any kind of timely acknowledgement or response.
I tried logging into SiteLock’s customer website, only to receive an automated error that I was not a SiteLock customer. Searching through Domain.com’s portal I was presented with a screen saying that SiteLock was an optional extra that could be purchased for an additional fee: $24.99 a year for the most basic “Find” service, $251.07 for three years for its “Fix” service, $49.99 a month for its “Prevent” service, $59.99 a month for its Enterprise package and $39.99 a year for its “WP Essential” service. The $251.07-for-three-years “Fix” service was automatically highlighted for me as the default option, but no information was provided beyond a few marketing bullets as to what was included in each service.
Searching further, I confirmed that there was absolutely no way for me to access either SiteLock’s customer portal or any information from within the Domain.com portal without purchasing a subscription to SiteLock.
In short, I had just received an email telling me there was malware being actively served from my website and that I would have to pay a fee to find out what the malware was and where on my site it was being hosted from.
Giving up for the night, I called the company again Sunday morning, waiting on hold once again before finally giving up, uncertain whether the company was even open for business over the weekend. Finally, calling again that afternoon and waiting on hold for 17 minutes I reached a human support specialist.
The staffer knew immediately why I had called and said that Domain.com runs its own proprietary scanning software in addition to SiteLock’s scanning and that when Domain.com detects malware it sends a list of the site owner emails to SiteLock to send notification emails to the users. According to the staffer, Domain.com introduced a bug in its scanning system on Saturday that caused all of the false notifications to go out.
The staffer noted that the bug had been rapidly identified and fixed, but that SiteLock had been inundated with thousands of calls Saturday evening through Sunday from customers who had received the errant malware alert.
If the company knew about the error almost immediately and its call center was being barraged with thousands of concerned callers, this raises the question of why SiteLock did not immediately send out a correction email to all affected users telling them that the malware email was in error. When asked, the staffer agreed that should have happened, but said he did not know why SiteLock was not sending a correction email.
When I reached out to SiteLock for comment on Sunday and asked why it did not send a correction email out, I did not receive a response, but four hours later the company finally sent an email titled “Sorry for the misunderstanding,” stating that “during a recent scan, a false positive mistakenly occurred generating an email in error. We apologize for the inconvenience and assure you the problem has been fixed.”
A Domain.com spokesperson contested the SiteLock staffer’s explanation that the error was Domain.com’s, stating instead that “SiteLock had a system issue yesterday that caused them to email a very large number of their customers and potential customers and erroneously notify them of malware detection.” Yet, when asked to reconcile this with the SiteLock staffer’s explanation that it was a Domain.com scanning issue, the spokesperson said the company had no further comment. A SiteLock spokesperson later confirmed that both SiteLock and Domain.com run their own separate scans and that the errant email came from SiteLock’s scan. Yet, when asked about the staffer’s claim that the error was from Domain.com’s scan, the company did not respond.
Even if the fault lay entirely with SiteLock, one might have expected Domain.com to immediately send its own email to its customers telling them that if they received a malware alert from SiteLock to ignore it, that it was a false alarm. Instead, Domain.com left its customers to worry they were actively serving malware for an entire day. When asked why the company did not send a notification to its users, the company said it had no comment.
When asked why SiteLock did not send a correction email out until more than 24 hours after its malware alert, a company spokesperson offered that it had waited until “the scope of the notification issue was fully researched and understood so that we could communicate appropriately.” When I noted that its own staff had indicated that the company was fully aware of the situation shortly after it happened on Saturday, the company did not respond.
When asked why SiteLock didn’t immediately send an email out to all those who had received the initial email to say that it may have been in error and that an update with confirmation would follow once it knew more, the company did not respond. I noted that its staff said SiteLock had already rescanned all of the sites in question and that they were able to pull up the records for my site and confirm over the phone Sunday morning that my site was not infected. When asked why SiteLock did not send a correction email to all sites that received the initial email and which its subsequent scan confirmed were not infected, the company again did not respond.
Accidents happen. Even the most sophisticated companies with massive code review bureaucracies and elaborate deployment checklists can inadvertently push a bad update out. The issue here is not that SiteLock sent an errant malware alert to Domain.com’s customers. Rather, the issue is that the email did not contain any actionable information for the user to triage the situation, non-SiteLock customers had no ability to access any information about the reported malware and the company waited more than 24 hours to send a correction email to affected users, while Domain.com did absolutely nothing to assist its customers.
A website that is actively serving malware to visitors is an incredibly serious situation and could indicate that the site has been breached and that customer data may be stolen as well. Waiting more than an entire day before telling users that a malware alert was in error is immensely irresponsible in today’s day and age.
If SiteLock’s initial email had contained at least some detail about the reported malware infection, it would have at least assisted users like myself to triage the notification and recognize that it was almost certainly in error. For example, if the email listed the name of the malware and the URL it was being served from, I could have immediately verified that that URL did not exist on my site and that Domain.com’s logs showed that it has never existed.
Instead, I was left in a total information vacuum.
When asked why SiteLock does not include at least some basic information about the detected malware in its alert email, a spokesperson offered that “We don’t provide details in the emails due to security reasons” and that customers can log into SiteLock’s website, the Domain.com SiteLock interface or call customer support for information and that “we provide details to our customers in a secure environment to ensure they can make educated security decisions.”
If a customer’s website is actively serving malware, it is unclear why SiteLock believes that including the URLs of that malware in an email to the customer is an unacceptable security risk. After all, if a bad actor has compromised the customer’s email systems to the point of being able to read all of their email, the customer has more serious issues to worry about than that bad actor being alerted to the URL of a virus being served from their site. More to the point, the mere presence of a malware alert email would tell the bad actor that their malware has been discovered.
The company did not respond when asked for more detail on why it does not provide even the most rudimentary detail in its emails.
However, there is an obvious possible explanation for this lack of detail. The Domain.com spokesperson confirmed that Domain.com provides a free SiteLock malware scan to all of its customers that do not subscribe to SiteLock. This free plan does not come with access to the scan results like paid plans have, only an alert that malware has been detected. As Domain.com put it, these are "potential customers."
A Domain.com customer that does not pay for SiteLock will receive an email saying a critical problem has been detected with their site that could lead it to be shut down and to see more details they must pay for a SiteLock subscription or call a SiteLock sales representative. The company did not respond when asked whether a user would be provided any detail about critical issues like active malware serving without first paying for a SiteLock plan.
Indeed, SiteLock’s Hosting Provider Partner Program describes its relationship with hosting providers like Domain.com in terms of revenue generation. In the company’s words, "SiteLock offers the best opportunity to capitalize on the high demand for website security" and that it "helps to increase revenue," “increase conversion rates,” “enable business growth” and "earn more money." SiteLock notes that its reseller program has generated more than $20M in revenue for its partners and even offers “dedicated … support on sales and marketing efforts.”
Most notably, SiteLock’s partner program focuses almost all of its verbiage on the revenue opportunities of selling security rather than touting its program as a way for ISPs to secure the sites they host. In fact, the security benefits of SiteLock to hosting providers are mentioned only once on the entire page, compared with the rest of the page’s emphasis on the money to be made selling security.
Put another way, part of SiteLock’s business model is to provide free scans to all Domain.com customers, but when an issue is found, the user must pay for a SiteLock subscription to gain access to the SiteLock portal with the full details about what was found.
It is not hard to imagine a small business owner receiving an email that their site is serving malware, rushing to the Domain.com portal for more details, being told the best option is to pay $251.07 for three years and just clicking ok in the panic to get any kind of information on what has just happened to their website.
In essence, this model would be akin to antivirus companies providing their software for free to customers, but if a virus is found, the panicked user must pay to learn more and fix the problem.
Putting this all together, website vulnerability scanning, application testing and active defense solutions are useful components of a modern cybersecurity posture, but we must be cautious that the way in which those products are marketed to consumers focuses on improving the real and actionable security of websites, not on selling fear or treating security as a profit center. When hosting providers cold-call customers out of the blue and tell them their sites are at risk of being compromised, but that those risk factors are secret and customers should just sign up for a paid service to receive protection, rather than being given real, actionable, point-by-point technical details on what was identified as problematic about their site, the product being sold isn’t actionable security information, it is fear. When one of those “risk factors” is that high-traffic sites are more likely to be hacked than sites that receive zero traffic, it raises the question of just how useful such indicators actually are to businesses that are likely willing to accept the increased risk of attack that comes with high traffic and sales. Similarly, offering free vulnerability scanning but requiring users to pay to learn more when a vulnerability is found raises grave questions about why hosting providers don’t provide that information for free, given that if one of their websites is serving malware or viruses or has been compromised, it brings risks to them as well. The fact that SiteLock’s partner program emphasizes security as a revenue generator, rather than as a way to improve customer and platform security, is illustrative of how the consumer industry sees security: as a moneymaker, not a threat mitigator. In the end, cybersecurity is far too critical to the safety of the web to treat as a money-making business rather than the systematic securing of our digital future.
|
76791738d3f2a6300ca9d47907852601 | https://www.forbes.com/sites/kalevleetaru/2019/03/10/deep-learning-powered-fake-faces-will-transform-catfishing/ | Deep Learning-Powered 'Fake Faces' Will Transform Catfishing | Deep Learning-Powered 'Fake Faces' Will Transform Catfishing
Catfishing and the use of false profiles to scam and extort individuals online has become an unfortunate fact of our modern digital life. Even prison inmates have used such tactics to extort money from US servicemembers. Yet, catfishing has been limited to some degree by its traditional reliance on preexisting imagery and personas misappropriated from innocent user accounts to create the fake catfishing profiles. As deep learning approaches increasingly allow the creation of entirely artificial faces and voices, we are fast approaching an era in which catfishing risks overwhelming the world of online dating and being used as an intelligence tool.
Automated approaches to generating entirely artificial human imagery have improved dramatically over the past few decades, from primitive systems useful only for the creation of background characters in large crowds to state-of-the-art systems capable of generating fake human faces almost indistinguishable from photographs.
Recent work from Nvidia attracted substantial attention last year for the uncanny realism of its entirely artificial faces and the degree of control it provides over their appearance, making it possible to create faces reflecting any demographic background and age.
While the resulting imagery still contains tell-tale imperfections that give it away to the trained eye, the average person is unlikely to be able to differentiate even current generation imagery from genuine photographs, especially when viewed on the small screen of a mobile device while on the go.
Such imagery poses a unique threat in its applicability to catfishing. Rather than repurposing imagery found on the web, which can often be unmasked through a reverse Google Image search, deep learning techniques like generative adversarial networks produce entirely novel imagery that has never existed before.
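To ground the terminology, here is a minimal, hedged sketch of the mechanism: a generative adversarial network's generator maps a random latent vector to pixels, so every sample is a brand-new image rather than a copy of anything on the web. The tiny PyTorch network below is untrained and only illustrates the sampling step; photorealistic faces require a large trained model like the research systems described above.

```python
import torch
import torch.nn as nn

# Toy GAN generator: maps a random latent vector z to a 64x64 RGB image.
# Untrained here, so its output is noise; the point is the mechanism by
# which novel images are sampled rather than retrieved from the web.
class Generator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),            # 64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

generator = Generator()
z = torch.randn(1, 128)      # a new random point in latent space
image = generator(z)         # an image that has never existed before
print(image.shape)           # torch.Size([1, 3, 64, 64])
```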
The ability to rapidly create demographically customized false facial imagery is particularly dangerous in its ability to transform catfishing from a hit-or-miss operation that relies on scale into a highly targeted process. One could imagine an automated customized catfishing operation that mass monitors accounts on social media and online dating platforms, perusing a targeted individual’s photographs and posts to compose a detailed biography of their interests. An artificial love interest could be constructed that is perfectly matched to their interests but exists only in the code of the neural network that created it. The artificial persona could even be depicted in activities enjoyed by the target, from outdoor hiking to nightclub dancing.
Advances in conversational AI and voice synthesis are almost to the point where such an artificial entity could even engage in cursory phone conversation with the targeted individual and send them flirty videos to complete the charade. Text generation could easily produce a stream of chat messages and posts for consumption by the target.
As the relationship progresses, the digital creation could post a steady stream of new content to prove it is a “real” person, such as compositing itself into a photograph of a coffee shop recommended by the target and thanking them for suggesting such a wonderful place. The ability to react fluidly and generate a continual stream of new imagery tailored to the current conversation would make such digital imposters almost impossible to detect.
Once a convincing relationship has been established, all of the traditional monetary or other extortion methodologies of catfishing can be applied.
Unlike traditional catfishing exercises that can require a fair degree of human intervention on the part of the scammers, this new form of catfishing could be entirely automated at a platform scale.
Imagine a foreign government searching LinkedIn for every DC-based individual working for the government, a defense contractor or mentioning any kind of terminology in their resume that suggests access to sensitive information. All of those accounts could be cross-referenced to the individuals’ social media accounts and dating profiles. Deep learning algorithms could construct the perfect love interest for each of those hundreds of thousands of people, complete with years of photographs dating back to their early college days showing a personality and approach to life that could not be a more perfect fit for the targeted individual. Since love doesn’t always follow expectation, hundreds of alternatives could be created for each person, each designed to try another path to the individual’s heart. When a match is finally found and sufficient compromising information collected by the bot, the account could be handed over to a human intelligence handler to convert the target into an intelligence source, a funding stream or publicize the information to sabotage a government agency at a critical moment.
None of this represents a distant science fiction, but rather the very real present.
Putting this all together, as we contemplate the growing risk from AI-generated artificial imagery, catfishing and related online extortion is an area we should be carefully considering.
In the end, like the plot of many a science fiction novel, it may not be that long before we can no longer distinguish between humans and machines online.
|
048d8d8171402cc9ab53f151a4cca523 | https://www.forbes.com/sites/kalevleetaru/2019/03/12/facebooks-deletion-of-warrens-ad-reminds-us-it-can-silence-debate-about-itself/ | Facebook's Deletion Of Warren's Ad Reminds Us It Can Silence Debate About Itself | Facebook's Deletion Of Warren's Ad Reminds Us It Can Silence Debate About Itself
Facebook logo. (JOEL SAGET/AFP/Getty Images)
Facebook made headlines yesterday when it removed an ad by Sen. Elizabeth Warren that criticized its nearly unfettered power over the modern digital world. Proving the senator’s point, Facebook moved swiftly to delete the very ad that called attention to its ability to delete content it didn’t like. Only after the story went viral did the company finally begrudgingly back down and restore the ad "in the interest of allowing robust debate." What does this tell us about the ability of companies like Facebook to censor criticism of themselves?
In a twist of irony that even Hollywood couldn’t have come up with, yesterday Facebook deleted an ad that drew attention to its ability to censor debate it disagreed with. The company’s ability to silently remove an ad that criticized itself and called for action that would impact its bottom line offers an extraordinary commentary on the unrestricted power the major social platforms enjoy today over our online debate. Just a handful of companies control everything we see and say online using opaque and constantly changing rules that we have no right to access and few rights to appeal, operating in total darkness with no independent external review or oversight.
While Facebook noted that it did not delete other advertisements by Sen. Warren that called attention to its practices, the ability of the company to delete what amounted to a public service announcement by a democratically elected official of the U.S. Government is breathtaking and reminds us that our social platforms are no longer “American” companies, but rather global entities serving only their own bottom lines.
Only after a viral backlash did the company finally back down and restore the ad, citing the removal as merely being a routine result of it featuring the company’s logo in a manner with which it disagreed and that its policies on logo use granted it the authority to remove any content that utilized its logo in such ways.
Lost in this response is the implicit underpinning that the company’s policies have supremacy over public debate in a democracy. Facebook cited its logo policies as the reason for the removal of the ad but has the ability to change those policies at any moment.
What would happen if the company modified its logo policy to prohibit the mention of its name or logo in any ad that criticizes the company? With the insertion of a few words into its policy webpage, Facebook could instantly silence all debate about its role in society.
First Amendment rights to free speech do not apply to the private walled gardens of social media platforms, meaning we have no legal right to force the company to permit discussion of any topic it disagrees with. There is absolutely nothing stopping Facebook from simply deciding to ban any criticism of the company and if it did, there is absolutely nothing the public could do to force it to overturn that ban.
In fact, given that an ever-increasing percentage of our information consumption occurs through social media, the general public might not even know anything had happened.
If Facebook extended that ban to all posts and messages, it could effectively end all debate about itself and the world would never know.
It is notable that Facebook’s response to the ad removal was that it was restoring it “in the interest of allowing robust debate.” In other words, the company wasn’t changing its policies or crafting a general exception that would hold for the future. It was making a one-time reversal for a single ad.
Perhaps the most frightening aspect of this entire story is that it reminds us that Facebook is not afraid to censor debate involving itself. This raises the question of whether Warren’s ad was the first ad or post critical of the company to be removed. Has the company historically been deleting posts critical of itself, either deliberately or accidentally?
Unsurprisingly, the company did not respond to a request for comment.
It also did not respond when asked whether it would permit a third party independent external review of its content and ad moderation practices to examine whether it has historically or currently inappropriately stifled speech that has criticized the company’s business practices, calls for it to be broken up or public discussion of privacy and digital rights.
Indeed, every single time I’ve asked the company whether it would permit external independent review of its activities, it has always responded with either silence or by declining to comment.
Putting this all together, Facebook’s removal of Sen. Warren’s ad reminds us of the company’s absolute control over its platform and its ability to remove speech it disagrees with, either because of its content or the way in which it references the company. As debate over Facebook’s outsized power grows, there is nothing stopping it from modifying its policies to outlaw such debate. The company’s continued refusal to commit to external review of its moderation practices reminds us that we simply have no way of knowing whether it has already been actively censoring our debates on digital privacy and its business practices.
In the end, we must confront the most existential question of the digital age: has Facebook grown so powerful that it is now simply too late to do anything about it?
|
d479c6e9377aece5a0af48962328a5c2 | https://www.forbes.com/sites/kalevleetaru/2019/03/17/misinformation-and-fake-news-are-genuine-threats-not-marketing-buzzwords/?fbclid=IwAR2UI6OIjhajqTz3Z8Z9JVc0td0-KK3wPxnQvasyFmAaYkgTU7RxX9uQ6F4&sh=6d9a1ef015dc | Misinformation And Fake News Are Genuine Threats Not Marketing Buzzwords | Misinformation And Fake News Are Genuine Threats Not Marketing Buzzwords
Soviet propaganda displayed in the Cold War museum in Plokstine, Lithuania (Getty Images).
The word “misinformation” has become modern-day fairy dust, sprinkled across grant applications, research papers, job postings and public statements to attract fame and fortune. Every researcher and pundit who has ever sent a solitary tweet now claims to be a social media misinformation expert with a sure-fire way to eliminate falsehoods from our digital world. As “misinformation” joins its brethren “big data,” “social media analytics” and “deep learning” as the latest marketing buzzword instead of a genuine societal threat to be taken seriously, we are increasingly commercializing the field into sham science based on false findings derived from bad data wrapped in a shiny wrapper of technological hype and hyperbole. How do we restore scientific rigor to the study of misinformation and where do we go from here?
When it comes to the state of misinformation research today, the National Academies put it best when they offered that their report for the US intelligence community centered on Twitter not because it was the most important conduit of misinformation, but because “it is the most available platform for scientists to use” and “has been the most studied.” When our most august and prestigious scientific institutions advising our national security apparatus on how to assess and combat misinformation focus on Twitter merely because it is the easiest dataset for academics to get their hands on, it is clear that misinformation is no longer being taken seriously by the research community.
Not a day goes by that my inbox isn’t filled with press releases, publication announcements, funding calls and job ads with the words “misinformation,” “disinformation” or “fake news” somewhere in the title. It seems everyone today is a misinformation expert.
Amazingly, there seems to be precious little awareness of the long rich history of propaganda and misinformation research or much engagement with the fields that traditionally study how societies produce, consume and act upon information.
It is sadly amusing to see the flurry of weekly headlines as yet another Silicon Valley luminary or celebrity academic claims to have coined a new way of thinking about how social media is disrupting society. From social capital to surveillance capitalism to our attentional and information seeking behaviors, all of these “newly discovered” conceptualizations are actually age-old concepts taught in first-year graduate courses in any information science program.
As “misinformation” becomes a marketing term embraced across disciplines, we’re losing touch with our rich literature on how information and falsehoods spread through societies.
Researchers today fixate on the novelty of our latest shiny new technology, rather than stepping back to ask whether it is truly any different from that which has gone before.
We speak of the web’s challenges today as unprecedented dangers that society has never before faced, while failing to remember that almost down to the exact wording, these are precisely the same issues we confronted with the rise of television half a century ago. It seems all of our “new” challenges are merely the same ones repackaged in a shiny new wrapper.
Researchers grapple with how to understand information behaviors in the social media era, “inventing” new theories and publishing best-selling books “pioneering” ideas that were already considered standard practice when I myself was a fresh doctoral student a decade and a half ago. We have forgotten to look at what has gone before.
As governments and societies across the world grapple with how to address what is popularly described as a frightening new post-truth dystopia, it is worth looking back a century to World War I, when the first modern propaganda research began to examine in a systematic and scientific way how information was used in influence campaigns. The leadup to World War II entrenched this way of looking at information as a tool of warfare and foreign influence, seeding what would grow to become our modern fields of opinion research and propaganda analysis.
It was 80 years ago this November that the Princeton Listening Center launched what would grow to become FBMS, FBIS and eventually the Open Source Center, seeding the systematized collection and analysis of open source information across the world as the home of OSINT for the US intelligence community. In fact, FBMS’ very first analytic report, dated December 6, 1941, famously foreshadowed the events of the following day and was a critical source for understanding the wartime propaganda and falsehoods spreading across the world.
For all our hand wringing and proclamations that we have entered an unprecedented era in human history of misinformation and “fake news,” the reality is that today’s informational challenges are little different than those of the eras that have gone before. Every one of the information challenges confronting us today, from profit-driven “fake news,” to state-sponsored misinformation campaigns designed to disrupt and sow discord, to well-meaning members of the public promoting conspiracies and falsehoods, were all the very same issues facing our societies a century ago. The technologies used to distribute those falsehoods may be different, from newspapers, radio and television to websites, social media and livestreaming, but the societal impacts are precisely the same.
There is much we might learn from this earlier era of misinformation research.
This wartime era of misinformation research was marked by a seeding effect in the US in which the nation’s academics became actively engaged in the service of the government’s wartime propaganda and misinformation research, developing new approaches to assess and combat such efforts in both realtime and the real world. Rather than abstract theories and unsupported opinions, the research community was testing its theories in the live laboratory of global warfare.
Most importantly of all, at the conclusion of hostilities, these academics returned to their posts across the country, seeding battle-tested new understandings of how populations and governments across the world produce and consume information, giving birth to entirely new applied approaches to assessing the informational landscape.
In contrast, our modern era of misinformation research has been defined by a marked lack of any practical understanding or experience in informational flows in the real world and outside the West. Today’s academics trade once again in abstract theories and hype without the background of their predecessors in seeing how those hypotheses play out in the real world.
Globally, across all of the researchers and organizations tackling misinformation issues today, precious few have any applied background at all in propaganda research and society-scale informational behavior. For its part, the intelligence community has steadily deemphasized the kind of inhouse applied expertise and real-world experience that would provide the insights from which researchers of a century ago benefited. Across our allies, OSINT centers are being rapidly defunded and dismantled, their functions scattered across the intelligence community rather than reinforced into a centralized core of experience and expertise. Historical collaborations and embedded fellowships are similarly drying up, replaced by traditional contracting and firewalls, removing the traditional free flow of information between practitioners and theorists.
In the place of experienced practitioners, today’s efforts to combat misinformation are being led by academics chasing the latest fad and the commercial sector seeking to protect its economic interests. The academic world’s lack of real-world experience means it has spent much of its time rediscovering the basics of other fields and promoting quick fixes that fail to comprehend the realities and complexities of how societies actually engage with information. The commercial world similarly promotes quick technological solutions that cost little to implement and minimize the impact to their bottom lines.
The result is an almost sadly comical world in which our most esteemed institutions draw heavily from Twitter in their guidance to the US intelligence community on combating misinformation, not because Twitter is the most important conduit of misinformation, but because it is the one easiest for academics to obtain and publish with.
Most remarkably, researchers across the world are studying misinformation phenomena and making claims published in top journals without having any idea whether the trends they’ve identified are actually related in any way to misinformation or whether they are merely the background patterns of the datasets they are studying.
The spread of misinformation through a platform is entirely dependent on how people use that platform, yet we understand almost nothing about our social platforms or how they are changing in ways that directly impact the flow of information across their digital borders.
Misinformation “experts” tout findings based on absolute volume counts and changes in retweeting and linking behaviors they wrongly claim are distinct to misinformation flows, but in reality merely reflect the background behavioral changes of Twitter itself.
How can we seek to understand the flow of information on social platforms when we don’t even know how many posts per day there actually are on those platforms or how they are changing?
How can we hope to address the flow of misinformation across social media when almost all of our research is focused on a single platform that is not even the dominant source of news, but rather is simply the easiest for researchers to get their hands on?
Putting this all together, for society to have any hope of genuinely addressing the growing threat of misinformation to the functioning of democracy and society itself across the world, we need to stop treating it as a shiny buzzword to sprinkle into our research like fairy dust and instead treat it as the genuine threat it is, deserving of nothing less than rigorous scientific research, not hype and hyperbole.
In the end, perhaps if we look back to how today’s challenges mirror those of the past, we might have much to learn from how our predecessors addressed everything from misinformation to our evolving gatekeepers and do so before it is all too late.
|
2384c68dd2e81478fdf085fa57688655 | https://www.forbes.com/sites/kalevleetaru/2019/03/19/the-problem-with-ai-powered-content-moderation-is-incentives-not-technology/ | The Problem With AI-Powered Content Moderation Is Incentives Not Technology | The Problem With AI-Powered Content Moderation Is Incentives Not Technology
Facebook logo. (Jaap Arriens/NurPhoto via Getty Images)
As we discuss once again the role of content moderation in removing terrorism, hate speech and other violent and horrific content from our digital platforms, there has been considerable discussion over what our modern AI and signature-based content removal systems are capable of. Given the lack of public awareness of how these systems function and their deployed production capabilities, it is worth looking more closely at how automated content moderation works today and especially the cost-capability tradeoff and the lack of incentives for platforms to remove horrific content given that they profit monetarily from such material.
It is important to step back and look at what technology is and is not capable of today when it comes to image and video content moderation.
Outside of the major social media companies and their academic collaborators, there are few who have real-world experience applying image recognition algorithms to global content spanning countries and cultures, and thus there is considerable misunderstanding about how these algorithms work and about their strengths and limitations when deployed in the real world.
This is partially due to the companies’ own marketing machines, which spend most of the year heavily touting the extraordinary accuracy of their AI tools, bringing reporters in to showcase how their image recognition models can now differentiate between broccoli and marijuana. When things go wrong, there is a jarring juxtaposition as those same companies suddenly emphasize how primitive and limited their algorithms are, before going right back a few weeks later to touting them as infallible enough to scan their platforms in production.
Our AI-powered visual recognition algorithms are far from infallible. Yet, they are vastly more capable of flagging depictions of violence than most people realize.
Over the past three years my own open data GDELT Project has scanned more than half a billion news images from all across the world, totaling more than a quarter trillion pixels, running all of them through some of the most advanced commercially available visual recognition algorithms, generating more than 300 billion datapoints describing their contents to understand how visual narratives spread globally. The lessons learned from this initiative tell us much about what these tools are truly capable of when deployed in production at global scale across the world’s cultures and geographies.
Perhaps most importantly, these lessons remind us of the critical influence of the cost-capability tradeoff.
The visual recognition tools available on the market today are exceptionally good at picking out not just the presence of weapons in an image or video frame but identifying their precise make and model and where in the image they appear with respect to other objects. They are able to recognize the presence of fluids, even trace amounts barely visible to a human observer, as well as facial expressions and body positions suggestive of violent circumstances or morbidity. They are able to recognize not just structural damage but estimate whether it was due to natural causes like wind damage, mixed causes like fire or human causes like a military airstrike. They can even put all of these pieces together and estimate the overall “violent” intensity of an image.
Coupled with visual assessments, for videos and audio recordings there are highly accurate tools for recognizing gunfire and human non-speech utterances indicative of violence. There are even AI models that have been adapted to recognize video game violence, rather than the real-world imagery most are trained on, and others adapted to the infrared nighttime imagery of bodycams.
The algorithms can trigger in unexpected ways, such as flagging an image of a fish market or the backroom of a grocer’s meat department, though for good reason, given that these filters are often trained on violence in all its forms, rather than just against humans. Here as well, the algorithms can be tuned to flag only on specific kinds of violence.
It is important to acknowledge that our current algorithms are most certainly not perfect. They can both miss content (false negatives) and incorrectly flag unrelated content (false positives). Yet, all deep learning algorithms can be tuned, either through the selection of the training data they are built upon or through the confidence scores they output. Those scores can be used to flag only imagery the algorithm is highly confident in, risking missing considerable material but allowing autonomous removal, while simultaneously routing lower-confidence content to human review, ensuring little is missed.
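Concretely, this kind of confidence-based dispatch can be expressed in just a few lines. The following is a minimal sketch, assuming a hypothetical classifier that returns per-label probabilities; the "violence" label and the threshold values are illustrative placeholders, not any platform's actual settings.

```python
# Minimal sketch of confidence-threshold dispatch in a moderation pipeline.
# The classifier interface, the "violence" label and the thresholds are
# hypothetical; real systems tune these against measured error rates.

AUTO_REMOVE_THRESHOLD = 0.98   # act autonomously only on near-certain matches
HUMAN_REVIEW_THRESHOLD = 0.40  # route lower-confidence matches to reviewers

def dispatch(image, classifier):
    """Route an image based on the classifier's confidence in a 'violence' label."""
    scores = classifier.predict(image)       # e.g. {"violence": 0.87, "weapon": 0.12, ...}
    confidence = scores.get("violence", 0.0)

    if confidence >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # high confidence: remove without waiting for a human
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # uncertain: queue for a reviewer so little is missed
    return "allow"             # low confidence: leave the content untouched
```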
After three years of scanning global news imagery from nearly every country worldwide, a key takeaway from the GDELT Project’s own scanning using such tools is that they are vastly more accurate than the public realizes.
So, why do they keep missing things?
The reason goes back to how they are tuned and the context in which they are used.
Most of the algorithms used for production scanning by social media platforms are tuned for speed and minimal classification categories to reduce their computational burden. Rather than generating millions of categorical and characteristic labels and their probabilities for every image and every frame of every video, allowing realtime adjustments and fine-grained dispatch to human reviewers for confirmation, most of the production filters used by major social platforms tend to be closer to binary filters that either flag an image for removal/review or not, making it harder to blend the strengths of machine and human reviewers.
Perhaps the biggest reason is context. Not all violence is seen through the same lens. A video of government security services firing on unarmed protesters might be something society believes is important to publicize. Governments, on the other hand, would love to suppress such footage, allowing them to deny their repressive tactics as “fake news.”
After all, within the US, it was the imagery that emerged from Kent State on May 4, 1970 that galvanized public opinion in a way that no mere written description of the day could have done.
Imagine if the US Government had been able to prevent the publication or distribution of all imagery from that day, leaving only textual descriptions that it could easily dismiss as false.
It is not hard to imagine the Russian or Thai governments using such rationales to ban footage of their security services using force against peaceful protesters and then using the lack of available footage to claim the events in question never took place.
Therein lies one of the great struggles of social platforms. Their algorithms and human reviewers may flag imagery as depicting violence, but they must weigh those determinations against the question of whether that depiction is gratuitous or documentary.
Citizen videos capturing the use of force by police officers are perhaps the most common use case that tests these boundaries. Is a video released by a victim’s family through a prominent civil rights organization a documentary video that should be visible for others to judge themselves and use to hold the police accountable, or should all police interactions with the public be withheld from public view if they depict any form of violence, regardless of the wishes of the victim’s family?
Historically, reputable media organizations and governmental agencies were left to make these decisions, but increasingly today they fall to social platforms that may lack any detail of the event in question.
AI algorithms are exceptionally good at flagging violence and its components, but human reviewers are required to determine the all-important context of an image or video.
This context extends beyond the contents of the image or video and to the context of its distribution. A video documenting government forces firing on civilians might be deemed permissible for news outlets to distribute, but a pro-government citizen posting the video alongside commentary lauding the killings or simply promoting the violent elements of the video could be deemed a violation and removed.
To put it more simply, we have the tools today to automatically scan all of the livestreams across all of our social platforms as they are broadcast in realtime. Those tools are exceptionally capable of flagging a video the moment a weapon or gunfire or violence appears, pausing it from public view and referring it for immediate human review. A reviewer could choose to embargo the video until more is learned and then release it if deemed appropriate or delete or forward to law enforcement in realtime as the events are ongoing.
These tools exist and are sufficiently robust when coupled with human review to deploy today.
Yet, each time I’ve asked social media companies like Facebook about why they don’t use them, the answer has always been “no comment.”
Given the combination of GPS tagging and visual geocoding, many such videos could be geocoded in realtime and any depiction of weapons or violence near a crowd, in a public place or in the vicinity of a sensitive location could be immediately referred to law enforcement within seconds, likely saving many lives.
Combining automated realtime filtering of livestreams with a pause for human review the instant violence is depicted would balance the false positive rate of AI systems and the importance of context with human review.
Separate from content detection, once an image or video has been definitively identified, the process of deleting it from across a social platform and preventing its reupload is far simpler and less computationally intensive.
Content hashing refers to the creation of a digital signature that uniquely represents a given piece of content. Unlike strict hashes like CRC32 or MD5 that are designed to flag even a one-bit difference in a piece of content, the kinds of hashes used for content matching are fuzzier, designed to allow for a certain level of difference, such as an image being resized or skewed slightly.
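To make the distinction concrete, below is a minimal sketch of a perceptual "average hash" and a fuzzy comparison. It illustrates how a content signature can tolerate resizing or recompression where a strict hash like MD5 would not; production signature systems are far more sophisticated, so this is purely illustrative.

```python
# Minimal sketch of a perceptual "average hash" and a fuzzy comparison,
# illustrating how content signatures tolerate small modifications where
# strict hashes like MD5 flag even a one-bit difference. Illustrative only.
from PIL import Image

def average_hash(path, size=8):
    """Reduce an image to a 64-bit signature: shrink, grayscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = ["1" if p > mean else "0" for p in pixels]
    return int("".join(bits), 2)

def hamming_distance(h1, h2):
    """Number of differing bits between two signatures."""
    return bin(h1 ^ h2).count("1")

def is_match(h1, h2, max_distance=10):
    """Fuzzy match: a small bit distance tolerates resizing, recompression, etc.
    Raising max_distance catches more modified copies but increases false positives,
    which is exactly the cost/accuracy tradeoff described in the surrounding text."""
    return hamming_distance(h1, h2) <= max_distance
```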
Content matching has been widely deployed for years as the basis of how sites combat copyright infringement and child pornography. In both cases, large databases of signatures are regularly updated and any content uploaded to the platforms is scanned against these databases, with matches rejected and potentially referred to law enforcement.
While touting its counter-terrorism efforts, Facebook has long refused to release any real detail behind them, especially their false positive rates.
The company has acknowledged that its efforts to restrict imagery and video of terrorism are almost exclusively limited to matching against a database of just under 100,000 pieces of largely ISIS and Al Qaeda content.
Despite myriad other terror groups wreaking death and destruction across the world, Facebook has very narrowly focused on ISIS and Al Qaeda, likely due to pressure from Western governments, though it has declined to comment on why it does not emphasize other groups.
The kind of content signature matching used by the social media companies is itself extraordinarily robust, able to flag even trace amounts of the original content buried under an avalanche of other material. The problem is that social media companies typically tune the algorithms to minimize false positive matches, rather than tune them towards maximal matching, which can require greater human review.
Such algorithms can be used to identify clips of a fraction of a second long or even isolated frames from a video hidden in another video. They can be tuned to be fairly robust against substantial modification such as watermarking, skewing, screen capturing and counter-signature algorithms.
The problem is that once again, social platforms default to the lowest-computational-cost option. Signature matching algorithms can be tuned to provide wide latitude in matches, providing substantial robustness against modification. However, such matching moves from the realm of simple low-cost database lookups towards image similarity scoring, which is vastly more expensive and yields an increased false positive rate that requires additional human review.
Content signature tools coupled with AI are extraordinarily powerful. Tools today are able to recognize in a fraction of a second that a portion of a single frame of a given video was cropped and heavily modified and then digitally inserted into a corner of a photograph elsewhere on the web.
These tools are no longer science fiction; they are commercial reality.
It is remarkable, however, that even the combined audiovisual fingerprinting typically used for such robust content matching was not deployed until much later. Facebook has noted it only expanded to audio fingerprinting after witnessing the inevitable rise of screen-captured videos, a standard approach to defeating content matching algorithms that Facebook should have been well aware of. The company declined to comment on why its matching algorithms failed to catch reposts.
We have the tools today to pause a livestream the instant a weapon or violence is depicted and send it for human review, even prioritizing the video based on whether it appears to be filmed in a public place or near a sensitive location, preventing it from being viewed and alerting law enforcement in realtime.
For blacklisted content, we have algorithms that are extremely robust to modification and can flag even a small fragment of a piece of content that has been cropped and heavily modified to avoid detection.
The problem is that these approaches have substantial computational cost associated with them and when used in conjunction with human review to ensure maximal coverage, would eat into the companies’ profits.
The real issue is that there is a lack of incentive to remove such content.
Today Facebook faces a crisis of terrorism content across its platform. Yet, under the laws of most countries in which it does business, it faces few criminal or civil liabilities for the hosting and promotion of that content.
In contrast, if Facebook was fast on its way to becoming the new Napster, with every new Hollywood blockbuster uploaded for free viewing across its servers, it would face very real legal consequences and financial liabilities.
This is the reason that social platforms have been extremely aggressive at policing the use of their platforms for copyright violations or illegal content like child pornography.
Facebook realizes that it can’t tell Hollywood that there are millions of copies of every new release available for free on its servers but that it would simply cost too much for it to do anything about it. The company recognizes that this argument would simply not fly and so it devotes the necessary resources to combating copyright infringement, regardless of the cost.
In contrast, terrorism content is itself not actually illegal in most Western countries and so the company faces no similar legal restrictions on hosting the content.
In fact, perversely, the companies actually profit from terrorism, hate speech and genocide. When asked on different occasions whether Facebook would consider refunding the advertising revenue it earns on all of the posts, content views and engagements surrounding terrorism, hate and violent speech that it identifies and removes as a violation of its policies, the company has refused to comment each time.
To put it more bluntly, Facebook profits from horror, earning money from every atrocity that occurs as people from across the world come to its platform in the aftermath or even use it as a tool to commit those atrocities, so it has little incentive to remove it.
There are three major reasons social platforms don’t do more to combat horrific uses of their platforms.
The first is that any kind of content filtering costs money. We speak of computing power as being infinite in the web era, but AI algorithms require large amounts of expensive hardware to run on, while human reviewers cost even more. Companies are loath to devote anything more than the absolute minimum to a task that does not contribute to their profit.
The second is that they have no incentive to remove atrocities from their platforms. Unlike copyrighted content and a few classes of illegal material, depictions of terrorism are not themselves illegal in most countries. Until US or EU law treats the publication of terrorism content in the same way it does copyright infringement, there is unlikely to be real change.
Indeed, it was just four years ago that Twitter famously rebuked Congress’ request for it to do more to combat terroristic use of its platform. Only after intense government pressure did it reverse its stance and now regularly touts its efforts to remove terrorism content. Similarly, after years of arguing it could simply do no more to combat hate speech, Facebook moved swiftly to build new technical tools and massively expand its moderation staff after Germany passed new laws governing social platforms’ responsibilities for hate speech.
In short, government intervention works.
Yet, it is the third reason that is perhaps the most important for why social platforms don’t do more to remove horrific content from their platforms: they profit from it. There is a reason companies like Facebook refuse to comment each time they are asked whether they would be willing to refund the money they earn from terrorism, hate speech or other violent or illegal use of their platforms. If Facebook was forced to refund the ad revenue it earned for every post it removes and perhaps even pay a meaningful fine to the government for each violating post, the content would no longer be a profit center, but rather something with very real liabilities, much as they remove the copyrighted content that would otherwise draw large amounts of visitors.
Predictably, both Facebook and Twitter declined to answer all of the questions posed to them.
The ability of the companies to simply refuse to answer any questions each time their platforms are used for harm is perhaps the most important reason of all that they don’t do anything more: they don’t have to. Each time an atrocity occurs, the platforms can simply issue a blanket apology and move on, without any fear of government intervention, legal or financial liability or user backlash.
Putting this all together, we have the technology today to do vastly better at removing horrific content from our social platforms. The tools exist to pause livestreams the instant weapons or violent content appears and to prevent even heavily modified or remixed versions of violent content from being shared. All of these tools exist, but we lack the will to deploy them, given their cost, the lack of legal incentives to combat such content and the fact that they generate real monetary revenue for social platforms.
In the end, until we address these three issues, we simply are not going to see any meaningful action from the social platforms in combating horrific misuse of their digital megaphones.
|
efaadcc9fc892fcb088648c1d58a114b | https://www.forbes.com/sites/kalevleetaru/2019/04/17/could-ai-create-a-super-wikipedia-from-all-of-recorded-history/ | Could AI Create A Super Wikipedia From All Of Recorded History? | Could AI Create A Super Wikipedia From All Of Recorded History?
What might it look like to use machines to fill in the gaps in our history? Visualizing 200 years of human history through the eyes of the English language edition of Wikipedia reminds us how much of the interdependence and connectivity among global events is missing the further back through history we look. In Wikipedia’s telling of history, events become more and more detached and disconnected the further back one goes, transitioning from a collection of largely isolated occurrences 200 years ago to a globalized world today in which almost everything is connected to everything else. This reflects the changing nature of how we record our history, but could AI help?
Perhaps the single most powerful element underlying Wikipedia’s phenomenal success is the way it has democratized the ability to contribute encyclopedic knowledge about the world in a centralized fashion. People all over the world can edit the platform’s pages to record new information about events all across the planet.
Yet, this ad hoc crowdsourced model also comes at a great cost: the uneven nature of how information is recorded. Setting aside geographic, topical, demographic and other biases, the key concepts of reciprocity and connectivity are extremely difficult to maintain when users can contribute information in scattershot fashion across such a vast archive.
This results in what in 2012 I noted was the “one-way nature” of Wikipedia’s knowledge: if article A links to article B, it is not necessarily the case that article B will also include a link back to article A. Thus, the degree to which readers are exposed to the interdependent connectivity of events will depend heavily on the specific path they take to navigate the platform, rather than the reality of the information itself.
For example, take the English Wikipedia’s article about the village of Tajarhi in Libya. The article’s primary historical description of the village relies upon an 1819 description by British naval officer George Francis Lyon. Yet, Wikipedia’s article about Lyon makes no mention of the village, mentioning only that he visited the Murzuk District and provides no link to the Tajarhi article. A reader knowledgeable in Libyan geography might recognize that Tajarhi is based in the Murzuq District (and that the Murzuk District mentioned in the Lyon article is the same as the Murzuq District listed in the Tajarhi article), but to the casual reader there will be nothing to connect Lyon to Tajarhi.
Thus, a reader starting with the Tajarhi article will see the reference to Lyon and can choose to learn more about him. Yet, a reader starting with the Lyon article will never see that he had a connection to Tajarhi.
Even worse, the Tajarhi article’s mention of Lyon is merely a textual reference, rather than a clickable hyperlink. Readers have no indication that there is an entire biographical entry on him and are forced to keyword search for his name. Yet, keyword searching for the “George Lyon” mentioned in the article yields a list of nine people sharing that name, forcing the reader to guess that it is the second from the last name that is the correct reference.
In fact, this is an extremely common occurrence across Wikipedia, affecting a great deal of the platform’s connectivity between people, places, times and events.
This has especial importance for the data mining practice of using cross-article hyperlinks as a means of connecting articles and events. Algorithms that connect people and places using article hyperlinks miss much of the connectivity captured in the textual contents of Wikipedia’s articles.
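As a concrete illustration, the sketch below shows how one might surface such one-way links given a mapping from each article to its outgoing links. The data structure and toy entries are assumptions for illustration, loosely mirroring the Tajarhi/Lyon case described above, and are not drawn from any actual Wikipedia dump or API.

```python
# Minimal sketch: given each article's outgoing links, find "one-way" pairs
# where article A links to B but B does not link back. The link data is
# assumed to have been extracted already; this does not call any Wikipedia API.

def one_way_links(outlinks):
    """outlinks maps article title -> set of titles it links to."""
    missing = []
    for source, targets in outlinks.items():
        for target in targets:
            backlinks = outlinks.get(target, set())
            if source not in backlinks:
                missing.append((target, source))  # target could link back to source
    return missing

# Toy data loosely mirroring the Tajarhi/Lyon example discussed above:
links = {
    "Tajarhi": {"George Francis Lyon", "Libya"},
    "George Francis Lyon": {"Murzuq District"},
    "Libya": {"Tajarhi"},
    "Murzuq District": set(),
}
print(one_way_links(links))
# [('George Francis Lyon', 'Tajarhi'), ('Murzuq District', 'George Francis Lyon')]
```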
This lack of codification extends to the “Infoboxes” that many services today rely upon to codify Wikipedia’s vast factual contents.
The Infobox for the American Civil War contains a vast wealth of data expressing key details about the conflict in machine-friendly format. Yet, the Barasa-Ubaidat War in Libya contains no such codified version of its contents. The degree to which an article's details have been extracted and codified depends entirely on the interest and experience of its authors in doing so.
Visualizing Wikipedia geographically can help with these connections, making the spatio-temporal connections among events readily apparent.
Yet, perhaps the greatest hope for fixing all of these issues is the use of machine processing to add contextual links, extract details and enrich Wikipedia’s archives through external sources.
For example, my own 2004 senior thesis on the history of the University of Illinois involved personally acquiring and digitizing more than 70,000 pages of historical documents dating back more than 150 years to chronicle the institution’s evolution, which were made available in a vast digital library.
Using automated date extraction algorithms, more than 500,000 mentions of time were extracted from those 70,000 pages, creating a vast searchable database of events throughout the university’s history. This was then used to create a “Today in the History of the University of Illinois” feature that displays all mentions of the current day throughout the university’s history, allowing visitors to understand the significance of a given date through time.
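A minimal sketch of this kind of automated date extraction appears below, using a simple regular expression over digitized text. Real pipelines handle far more date formats, OCR errors and ambiguous references; the pattern, sample page and identifiers here are purely illustrative assumptions.

```python
# Minimal sketch of automated date extraction over digitized text, building
# a (month, day) index that could power a "Today in History" feature.
# The regex, sample text and page identifiers are illustrative only.
import re
from collections import defaultdict

DATE_PATTERN = re.compile(
    r"\b(January|February|March|April|May|June|July|August|September|October|November|December)"
    r"\s+(\d{1,2}),\s+(\d{4})\b"
)

def index_dates(pages):
    """pages maps a page identifier -> its text; returns (month, day) -> list of (year, page)."""
    index = defaultdict(list)
    for page_id, text in pages.items():
        for month, day, year in DATE_PATTERN.findall(text):
            index[(month, int(day))].append((int(year), page_id))
    return index

pages = {"p1": "The cornerstone was laid on March 2, 1868 before a large crowd."}
today_in_history = index_dates(pages)[("March", 2)]
print(today_in_history)  # [(1868, 'p1')]
```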
Imagine using a similar approach to process the hundreds of millions of pages of history stretching back centuries that have been digitized by the major book scanning initiatives. Their collective knowledge of global events and narratives remain largely untapped, but machines could easily transform this vast chronicle of Planet Earth into an ultimate Wikipedia. Such an effort would require filtering out fictional works, judging the reliability of works and adjudicating conflicting information (or simply listing all of the reported options for a given detail), but would go a long way towards lending the kind of hyperconnected context Wikipedia enjoys for modern events to its historical archives.
This could also involve enriching Wikipedia’s holdings with realtime processing of the scholarly literature, governmental archives and other official sources to fill in its holes.
Putting this all together, one can only imagine what Wikipedia might look like if we used machines to read through all of recorded history across all its forms spanning all the languages and places of the earth and summarize all of that into one single master encyclopedia spanning all of recorded knowledge.
In the end, perhaps a good first start would be to use algorithmic processing to make Wikipedia’s knowledge more bidirectional, helping readers navigate its vast wealth of knowledge, perhaps through an automated context box of some form that provides links to all of the other Wikipedia articles relevant to its contents and which link to it. Automated construction of Infoboxes and summarization of core details across language editions would also go a long way towards weaving Wikipedia’s archives together into a more holistic and approachable format.
Perhaps someday machines will help us tell the human story better than we ourselves could ever imagine.
|
ca89f6a49e2d88ba9139acb19351253d | https://www.forbes.com/sites/kalevleetaru/2019/04/21/facebook-demands-email-passwords-then-quietly-uploads-contact-lists-but-once-again-we-dont-care/ | Facebook Demands Email Passwords Then Quietly Uploads Contact Lists But Once Again We Don't Care | Facebook Demands Email Passwords Then Quietly Uploads Contact Lists But Once Again We Don't Care
Facebook logo. (JOEL SAGET/AFP/Getty Images)
Just weeks after Facebook acknowledged it had been secretly storing its users’ passwords in cleartext on its servers where they had been accessed more than 9 million times by its employees, the Daily Beast reported earlier this month that the company had quietly begun requiring some users to verify their accounts by handing over the password to the email account they had used to create their Facebook profile. At the time, the outlet noted that “the company has recently been criticized for repurposing information it originally acquired for ‘security’ reasons.” It turns out this was exactly what happened, as Facebook logged into users’ outside email accounts and “unintentionally” silently uploaded a copy of their address book to its servers without their knowledge or consent, making off with more than 1.5 million people’s contact lists. The company has promised to delete the data but did not respond when asked to commit to a specific date by which it would agree to delete the illicitly obtained lists. How did we reach a point where a major company felt it acceptable to demand users hand over to it the passwords to their email accounts and then quietly harvested their contact lists?
Perhaps the most remarkable aspect of Facebook’s latest privacy scandal is not that it happened at all, but rather that it generated so little outrage and that none of us will actually leave the platform.
Looking globally, the story was barely a blip on the tech media radar. Certainly, it garnered a few headlines, but it was far from the kind of all-encompassing week-long deluge of outrage that was once associated with such massive privacy and security breaches.
It is almost unheard of in today’s cyber-conscious digital world for an online service to demand that its users hand over their passwords to outside services as sensitive as their email just for the right and privilege to use their product.
There are plenty of products that offer value-add to email and social media services and thus require permission to access those accounts on behalf of a user, but in those cases there is an explicit need for the access and the user understands why access is needed and explicitly grants that access.
More to the point, there is no major service today that requires users to hand over their passwords to facilitate its access. In fact, with two-factor hardware authentication, password-based logins from one service into another are increasingly impossible.
Instead, services use streamlined authentication processes that never require an outside service to access a user’s password and that explicitly specify what privileges that service is granted to the user’s data.
Facebook never actually needed access to a user’s email account for the purposes of authenticating them. Since the dawn of the modern web, services have authenticated users’ email accounts simply by sending them an email with a link to click or a code to copy-paste back into an authentication form to verify that they have access to the account in question.
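That standard flow is simple to implement and never touches the user's email password. The sketch below illustrates the idea; the token store and the email-sending function are hypothetical placeholders, not any particular service's implementation.

```python
# Minimal sketch of link/code-based email verification, in which the service
# never needs the user's email password. The token store and send_email
# function are illustrative placeholders.
import secrets

pending_tokens = {}  # token -> email address awaiting verification

def start_verification(email, send_email):
    """Generate a one-time code and send it to the address being verified."""
    token = secrets.token_urlsafe(16)
    pending_tokens[token] = email
    send_email(to=email, body=f"Your verification code is: {token}")

def confirm_verification(token):
    """The user proves control of the inbox by returning the code."""
    email = pending_tokens.pop(token, None)
    return email is not None  # verified without ever seeing the email password
```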
Requiring users to fork over their password to verify that they have access to an email account is simply unheard of and an unprecedented breach of modern security practices.
Beyond confirming that the password request was indeed an officially sanctioned product and not the work of a rogue coder or an internal error, Facebook has remained silent on why it believed it was acceptable security practice to request users hand over their email passwords. The company unsurprisingly did not respond to a request for comment.
It turns out one of the reasons for needing this password is that Facebook was quietly logging into users’ email accounts and silently harvesting their contact lists, which it was uploading to its own servers.
When confronted with its activities, the company has remained largely silent about its actions other than to claim that the harvesting was “unintentional” and that it would be deleting the harvested lists at some point.
Yet, the company has declined to commit to a specific date by which it would delete the illicitly acquired data and it did not respond to a request for comment on why.
In fact, since users legally, if not willingly, consented to the harvesting, it is unclear whether the company actually faces any legal obligation to delete the data it acquired. It did not respond when asked whether there were any GDPR implications of the harvesting.
It is especially notable that for such a serious and consequential breach, the company did not offer a more forceful apology coupled with a top-down external review of its security practices, and did not promise to delete all of the acquired data within 24 hours.
To date, the company has either declined to comment or simply not responded at all to every request for comment as to whether it would commit to permitting an external independent third-party review of its entire security infrastructure and stance.
With breach after breach, it seems we know why: the company is too afraid of what such an audit would reveal.
Putting this all together, it is truly extraordinary that in 2019 Facebook would consider it acceptable security practice to demand that users hand over their email passwords, that it permitted more than 1.5 million users’ address books to be harvested and that it has resisted requests to offer a timeline by which the illicitly acquired data will be deleted.
Facebook’s actions remind us just how cavalier an approach it takes to safety and security.
In the end, however, it really doesn’t matter because Facebook has successfully convinced the public and policymakers not to care about their safety, security or privacy online.
Once again, we will just sit back and wait for the next revelation of another massive Facebook security breach but none of us will ever leave the platform no matter what it does to us.
|
1d3933851ff8583a7dd69051b9fa2c37 | https://www.forbes.com/sites/kalevleetaru/2019/04/24/what-happens-when-social-media-weighs-the-monetary-value-of-our-posts/ | What Happens When Social Media Weighs The Monetary 'Value' Of Our Posts? | What Happens When Social Media Weighs The Monetary 'Value' Of Our Posts?
The lifeblood of social media platforms is the flood of content provided free of charge by their users. In many respects, modern social media companies like Facebook and Twitter are merely internet hosting providers, offering free storage space for people to upload messages, music, podcasts, photographs, movies and all other imaginable forms of content. In return, users grant those free hosting providers the right to commercialize and monetize their creations and surrender their right to share in those profits. This free hosting business has become extraordinarily lucrative, but as growth slows, a key question will be whether companies begin to curtail the kinds of content they permit.
Looking back, it is truly remarkable that the internet has developed in the way it has. Social media platforms, with their seemingly infinite free hosting, are particularly interesting in that, unlike paid hosting providers, they have to date offered almost unlimited hosting at no monetary cost and have made no differentiation between the accounts that generate the most revenue and those that generate almost no revenue.
A viral celebrity posting daily livestreams watched by millions of people and archived for perpetuity consumes an immense amount of computing resources and network bandwidth. That cost is more than recovered through the sale of advertising, the resulting flood of responses that can all be monetized and the underlying behavioral and engagement data captured from all those users.
Celebrities have long been afforded free airtime and publicity in a synergistic relationship with broadcasters and publishers that yielded substantial financial benefits to both.
Social media has expanded this perk to everyone.
While a viral celebrity can yield substantial economic dividends for a social network, what about a reclusive user with no friends or followers who posts a never-ending stream of videos, photos, podcasts and other content consuming hundreds of gigabytes of storage per day but which are never viewed and thus never generate revenue?
To date social networks have never made a distinction between revenue-generating content and non-monetized material.
The viral celebrity and the recluse are both granted the same rights and privileges to post as much as they want.
Making matters worse, social platforms have operated as quasi digital archives, implicitly promising their users that they will continue to host everything their users uploaded for all eternity. Other than Snapchat’s explicitly self-destructing messages, no major platform automatically expires content after a fixed period of time to free up disk space.
Social platforms have largely avoided addressing their explosive disk consumption under the assumption that by conditioning society towards unlimited uploading, the steady stream of monetizable diamonds will outweigh the cost of all of the coal dust.
Eventually, however, it is almost inevitable that there will be a great reckoning, much as online retailers have increasingly curbed the sale of content whose shipping costs exceed their desired profit margins.
After all, why should Facebook host tens of terabytes of video for a user whose totality of uploads have generated only a few dozen views over a decade?
Moreover, as the total holdings of social platforms have grown so large that the companies struggle to help their users navigate it all, wouldn’t a smaller assortment of high-value content be better than an endless ocean of low-value posts?
What happens as machine learning content moderation reaches the mainstream?
The extreme cost and limited scale of human moderation has strictly limited its application to enforcing terms of service violations involving objectionable content.
As automated moderation becomes tractable and companies begin automatically scanning each piece of content through these algorithms, why not add a few additional filters to each upload?
Instead of just determining whether a post is a terms of service violation and deleting it, why not estimate the post’s “total future monetizable value” (TFMV) and reject it if the algorithms don’t believe it will recoup its hosting costs within a reasonable time horizon?
In fact, it is remarkable that companies have not already begun implementing content expiration policies whereby large files like photos and videos would automatically expire five years after their last access.
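For illustration only, the sketch below shows what such an expiration and "total future monetizable value" check might look like. The cost figures, revenue model and five-year window are invented assumptions drawn from the scenario described above, not any platform's actual economics or policy.

```python
# Purely hypothetical sketch of the expiration/TFMV policy described above.
# The cost figures, revenue model and five-year window are illustrative
# assumptions, not any platform's actual economics.
from datetime import datetime, timedelta

STORAGE_COST_PER_GB_YEAR = 0.25          # assumed hosting cost per gigabyte per year
EXPIRY_WINDOW = timedelta(days=5 * 365)  # expire content untouched for five years

def estimated_tfmv(views_per_year, revenue_per_view=0.002, horizon_years=5):
    """Naive 'total future monetizable value': projected views times revenue per view."""
    return views_per_year * revenue_per_view * horizon_years

def should_keep(post, now=None):
    """Keep a post only if it is recently accessed or expected to pay for its storage."""
    now = now or datetime.utcnow()
    if now - post["last_accessed"] > EXPIRY_WINDOW:
        return False  # untouched for five years: expire it
    projected_cost = post["size_gb"] * STORAGE_COST_PER_GB_YEAR * 5
    return estimated_tfmv(post["views_per_year"]) >= projected_cost

post = {"size_gb": 40.0, "views_per_year": 12,
        "last_accessed": datetime(2017, 6, 1)}
print(should_keep(post, now=datetime(2019, 4, 24)))
# False: 40 GB of rarely viewed video never recoups its assumed hosting cost
```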
Looking to the future, there is a very real possibility that companies will begin considering the economic value of posts before agreeing to host them for perpetuity.
Putting this all together, the social media revolution was based on the implicit agreement by social media companies to act as free web hosting providers that would preserve each uploaded file indefinitely and cover all of its bandwidth and storage costs for the life of the company’s existence, regardless of whether it actually generates any revenue.
In turn, the most popular viral content pays the hosting costs of less-accessed content.
If hosting providers automatically expired content that was no longer being accessed or even outright rejected content it did not see as generating meaningful traffic, the companies could refocus their services on the most profitable social posts, dramatically reducing their hosting costs while providing customers a much richer and more tailored experience that eliminates the deluge of low quality material.
In turn, our online experience would revolve around what advertisers want rather than the most pressing societal concerns, but that is already happening.
As social platforms move ever more aggressively towards AI content filtering, it is critical that as a society we come to terms with an algorithmically moderated world in which it becomes tractable for companies to value every individual post by its economic worth.
In the end, if freedom of speech was replaced with the freedom of economically valuable speech, it would silence a great swath of society.
Though, it seems we are already well on our way towards this Orwellian future.
|
7df5f6ebef3a2eaab9dee0f398811d8a | https://www.forbes.com/sites/kalevleetaru/2019/05/05/are-facebooks-disaster-maps-the-ultimate-government-surveillance-tool-in-disguise/ | Are Facebook's Disaster Maps The Ultimate Government Surveillance Tool In Disguise? | Are Facebook's Disaster Maps The Ultimate Government Surveillance Tool In Disguise?
Mural at Facebook headquarters in Menlo Park in May 2012. (ROBYN BECK/AFP/Getty Images)
Facebook took the opportunity this week at F8 to tout its work on “data for good.” At first glance, its Disaster Maps seem like a rare positive application of Facebook’s extraordinarily detailed and Orwellian realtime archive of the global physical location and movements of its two billion users as they go about their daily lives. Yet, as governments become accustomed to turning to Facebook to map their citizens and understand their spatial patterns of life, troubling questions are raised about whether Facebook may become ever more the ultimate surveillance platform for governments across the world.
Facebook’s Disaster Maps represent a rare “feel good” application of its immense global surveillance empire, allowing governments to rapidly triage the civilian impact of a major disaster. Understanding where the remaining population in the aftermath of a disaster is situated and their likely patterns of movement is crucial to helping get the public out of harm’s way and provide necessary assistance. In particular, understanding the surrounding locations where people in the affected area are most likely to seek refuge, such as the homes of family and friends in nearby towns, can help authorities preposition response teams, while visualizing the outflow of civilians from a disaster zone and seeing where they are seeking shelter in realtime is a dream of disaster responders.
Mobile phone CDR records have proven immensely valuable in tracking population dispersal after natural disasters. However, unlike CDR records, which must typically be acquired in piecemeal fashion from multiple providers across every country of interest, Facebook is able to track the realtime location of its users globally in a single centralized database regardless of their cellular provider and even as they travel throughout the world. Facebook's global presence also means it is far more exposed to legal requests from foreign nations.
As governments increasingly rely upon population-scale displacement maps like Facebook’s to assist with disaster response, they will inevitably wish to deploy similar mapping to other kinds of emergencies, such as protests and eventually simply to map their populations in peacetime.
When a protest breaks out downtown, chances are that most of those protesters are carrying their cellphones with them, posting photos and video to Facebook. In turn, Facebook has a realtime database of the actual names and identities of each person participating in that protest. The company’s facial recognition models can extend that identification process even further, by tying surveillance camera footage back to the identities of those careful enough not to carry their phones with them.
Under the laws of many countries, all it would take is a simple court order to force Facebook to turn over a list of every person who attended that protest and their actual realtime movements throughout the protest, along with the addresses of the friends, family and fellow protesters they visited and roomed with that evening.
Of course, Facebook doesn’t just have the realtime physical location of its users, it has perhaps the world’s richest behavioral archive of their interests and communications, meaning it can tie interests to locations.
A repressive government concerned about democracy activists could request that Facebook provide it a realtime map of all citizens its algorithms believe may be interested in democracy-related topics, including a heatmap of the most common places they frequent and the homes of family and friends they spend time at. They could even require Facebook to compile a list of every person the activists have physically met with simply by looking at what phones have appeared in close proximity to theirs in the context of what appears to be a meeting.
Similarly, a country in which being LGBT is illegal could not only require Facebook to compile a list of names and addresses of everyone in the country its algorithms believe are LGBT, but it could couple that information with those individuals' physical movements to identify social venues and private residences most frequented by LGBT individuals as well as the identities of those they interact most commonly with in private, even if they are not connected to those individuals online.
Unsurprisingly, the company did not respond when asked whether it has ever received a governmental request to map political events like protests, riots, coups and other social disturbances or whether it had ever been asked by a government to map the location of individuals matching certain advertising or behavioral selectors like being LGBT. The company also did not respond when asked what safeguards it had in place to attempt to prevent a government from using a lawful court order to forcibly compel Facebook to generate such non-disaster maps, especially of vulnerable populations in nations where their status could place them at great risk of physical harm or even death.
Of course, there is little Facebook can do to resist a lawful court order once it has demonstrated that it has the ability to construct population maps on demand for governmental use. Few courts would have required Facebook to build a national surveillance platform from scratch, but now that Facebook has demonstrated all of the necessary technology in its Disaster Maps, the company is but one court order away from deploying Protest Maps, Democracy Activist Maps, LGBT Citizen Maps and the like.
No technology is immune from being repurposed for evil. Yet, subtle technological considerations can make it more difficult for governments to abuse technologies for surveillance purposes.
Apple offers a perfect example of the kinds of safeguards a company can build into its products to dissuade governmental surveillance.
In contrast, Facebook’s unwillingness to talk about government misuse of its Disaster Maps and whether it even considered technological safeguards to mitigate such misappropriation of user data suggests the company did not consider it a defining design principle.
After all, why would it care about governments tracking their citizens when it itself has not denied using the same data to track journalists publishing unflattering stories about it using insider sources, or policymakers proposing legislation that might threaten its business models?
Putting this all together, Facebook’s Disaster Maps are at first glance a welcome, socially beneficial application of the company’s vast Orwellian archives of our daily physical movements. The reality, however, is that it is almost inevitable that governments will repurpose this platform into a realtime surveillance system to monitor everything from lawful protests to democracy activists to minority groups.
Of course, the more cynical privacy advocates might ask whether this was the company’s intent all along, to create a feel-good platform that could offer substantial national security benefits to governments as those governments are increasingly considering greater regulation of the company. Perhaps if lawmakers see Facebook as a useful security partner they might be more willing to tolerate its grip over the informational landscape.
In the end, Facebook reminds us once again that it truly is the ultimate government surveillance tool.
|
9358c3b1b84fced715b31abfeec7d3a3 | https://www.forbes.com/sites/kalevleetaru/2019/05/06/as-orwells-1984-turns-70-it-predicted-much-of-todays-surveillance-society/ | As Orwell's 1984 Turns 70 It Predicted Much Of Today's Surveillance Society | As Orwell's 1984 Turns 70 It Predicted Much Of Today's Surveillance Society
George Orwell's Nineteen Eighty-Four. (Justin Sullivan/Getty Images)
George Orwell’s famous novel Nineteen Eighty-Four turns 70 years old next month. Looking back on its predictions and the state of the world today, how much did it get right in its predictions of a dystopian surveillance state where every word is monitored, unacceptable speech is deleted, history is rewritten or deleted altogether and individuals can become “unpersons” for holding views disliked by those in power? It turns out Orwell’s predictions were frighteningly accurate.
In 1984, it was the state that determined what constituted acceptable speech in keeping society orderly.
In 2019, it is a small cadre of private companies in Silicon Valley and their executives that wield absolute power over what we are permitted to see and say online.
In 1984, there were just a few countries to which most of the world’s citizens belonged.
In 2019, there are just a few social media empires to which most of the world’s netizens belong.
In 1984, it was the state that conducted surveillance and censored speech.
In 2019, social media companies deploy vast armies of human and algorithmic moderators that surveil their users 24/7, flagging those that commit thoughtcrimes and deleting their violations from existence. Those that commit too many thoughtcrimes are banished to “unperson” status by these same private companies, without any intervention or even in contradiction with the will of the state and without any right to appeal.
In 1984, those who committed particularly egregious thoughtcrimes or had histories of them were banished into nonexistence, all traces of them deleted.
In 2019, social media companies can ban anyone at any time for any reason. Those banished from social’s walled gardens can have every post they’ve ever written wiped away, every record of their existence banished into the memory hole. Those that dare to mention the name of the digitally departed or criticize their banishment can themselves face being banished and their concerns deleted, ensuring the “unperson” truly ceases to exist.
In 1984, the government constantly rewrites and deletes history that has become inconvenient.
In 2019, governments quietly rewrite press releases to remove past statements that proved wrong or to add statements to support their present assertions. Meanwhile the European Union’s “Right to be Forgotten” grants ordinary citizens the ability to wipe clean society’s memories of their past, allowing them to be “reborn” without the burden of their past transgressions.
In 1984, ever-present “telescreens” act as both information conveyor and surveillance device and saturate both public and private spaces with cameras and microphones monitored by the government.
In 2019, smartphones take on this role, acting as both our window to the digital world and the means through which myriad private companies from data brokers to social media companies themselves surveil our every action. Yet, our world goes far beyond the one imagined by Orwell in which every device from our watches to our refrigerators, our thermostats to our toasters, are increasingly Internet-connected and streaming a realtime documentary of our lives back to these private surveillance empires.
In 1984, it was the state that made use of its vast surveillance empire to maintain order.
In 2019, a landscape of private companies, so large it is almost uncountable, monitors, monetizes and manipulates us.
In 1984, the government uses its surveillance state to nudge each member of its citizenry towards a desired state.
In 2019, private companies do the same, building up vast behavioral and interest profiles on each individual user that they then use to nudge them towards the most monetizable behaviors.
In 1984, the government funded the vast empire of equipment and personnel needed to maintain constant surveillance of its citizens.
In 2019, the public themselves fund the great surveillance empire that monitors, monetizes and manipulates them. Citizens purchase the latest digital devices, upgrade and maintain them at regular intervals, pay for the power and internet services needed to connect them and grant unlimited rights to their most intimate information to private companies.
In 1984, the ultimate goal of the massive surveillance empire is to sustain and entrench the power of the state.
In 2019, the ultimate goal of the online world’s massive surveillance empire is to sustain and entrench the power of social media companies.
Indeed, the similarities are nearly as endless as the words of the book.
Putting this all together, 70 years after 1984’s publication, it seems nearly every aspect of Orwell’s commentary on the surveillance state has come true. The only difference is that Orwell saw surveillance and control as the domain of the state, whereas in reality the surveillance world we have come to know is one of private companies monitoring, monetizing and manipulating society for nothing more than commercial gain.
In the end, as we rush towards an ever more Orwellian world of surveillance and censorship, perhaps we might all take the time to reread 1984 in order to better understand the world we are rushing towards.
|
09c89362f891cea81d855e236d3f939b | https://www.forbes.com/sites/kalevleetaru/2019/05/07/data-isnt-truth/ | Data Isn't 'Truth' | Data Isn't 'Truth'
It has become perhaps the most important guiding principle of today’s world of data science: “data is truth.” The statisticians, programmers and machine learning experts that acquire and analyze the vast oceans of data that power modern society are seen as uncovering undeniable underlying “truths” about human society through the power of unbiased data and unerring algorithms. Unfortunately, data scientists themselves too often conflate their work with the search for truth and fail to ask whether the data they are analyzing can actually answer the questions they ask of it. Why can’t data scientists be more like those of the physical sciences that see not “universal truths” but rather “current consensus understanding?”
Given the sheer density of statisticians in the data sciences, it is remarkable how poorly the field adheres to statistical best practices like normalization and characterizing data before analyzing it. Programmers in the data sciences, too, tend to lack the deep numerical methods and scientific computing backgrounds of their predecessors, making them dangerously unaware of the myriad traps that await numerically-intensive codes.
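To make the critique concrete, here is a minimal sketch, in Python with pandas, of the kind of basic characterization and normalization step the author argues is too often skipped. The DataFrame, column names and figures are illustrative assumptions, not anything drawn from the article.

```python
import pandas as pd

def characterize(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column before any analysis is attempted."""
    report = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": df.isna().mean() * 100,  # share of absent values
        "n_unique": df.nunique(),               # degenerate columns stand out here
    })
    numeric = df.select_dtypes("number")
    # Heavy skew is often a sign that raw counts need normalization
    # before they can be meaningfully compared.
    report.loc[numeric.columns, "skew"] = numeric.skew()
    return report

def normalize_per_base(df: pd.DataFrame, value_col: str, base_col: str) -> pd.Series:
    """Express a raw count relative to its base, e.g. posts per active user."""
    return df[value_col] / df[base_col]

# Illustrative usage with invented data.
df = pd.DataFrame({
    "engagement":   [10, 12, 11, 950],
    "active_users": [100, 110, 105, 9000],
})
print(characterize(df))
print(normalize_per_base(df, "engagement", "active_users"))
```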
Most importantly, however, somewhere along the way data science became about pursuing “truth” rather than “evidence.”
We see piles of numbers as containing indisputable facts rather than merely a given constructed reality capturing one possible interpretation.
In contrast, the hard sciences are about running experiments to collect evidence, building theories to describe that evidence and arriving at temporary consensus, together with the willingness to allow today’s understanding to be readily upended by new evidence or descriptive theories.
Most importantly, all evidence in the hard sciences is treated as suspect and tainted by the conditions of its collection, requiring triangulation and replication. This is in marked opposition to the data sciences' habit of relying on single datasets and failing to run even the most basic of characterization tests.
In the sciences, all knowledge is accepted to be temporary, based on the limitations of experimentation, simulation and current theories. Experiments are run to gather evidence to either confirm or contradict current theories. In turn, theories are adjusted to fit the current available evidence. Experiments that appear to strongly contradict existing understanding are subjected to extensive replication until the preponderance of evidence leaves no other available conclusion but that current theory must be amended to account for this new information.
Even basic “laws” are viewed not as dogmatic, undisputed truth, but rather as evidentiary understanding that has withstood all attempts to refute it and may yet be replaced by new knowledge.
The hard sciences are replete with disagreements, novel experiments that contradict existing theories and competing theories without an obvious winner. Yet physicists and chemists do not speak of “truth” and “fiction”; they work to gather evidence for or against each possible explanation.
Most importantly, the hard sciences balance the evidence gathered through past experimentation, the design of new experiments to gather currently unavailable evidence, and the theory needed to explain it all.
In contrast, data science has increasingly become about making use of the easiest obtainable data, not the data that best answers the question at hand.
In fact, much of the bias of deep learning comes from the reliance of the AI community on free data rather than paying to create minimally biased data.
Much like deep learning, the broader world of data science has been marred by its fixation on free data, rather than the best data. Look across the output of any major company’s data science division and one will find that most of its analyses are based on whatever data the company already has at hand, can obtain freely from the web or can buy cheaply from vendors.
Few companies step back to ask what the best data for a given question would be, let alone commit sufficient resources and budget to create that dataset. Instead, data science divisions are typically asked to produce ever more analyses, ever faster and ever more efficiently, with ever fewer resources per analysis.
Rather than spend months commissioning and executing a methodologically sound survey instrument to collect consumer feedback about a new product, the modern data science division is far more likely to turn to what it knows: running a few keyword searches on Twitter and pasting the resulting graphs into a PDF report.
In fact, few data scientists are even familiar with survey design, let alone understand that keyword searching tweets may yield a result that bears little resemblance to reality.
The hard sciences attempt to find new ways of analyzing existing data to answer the field’s questions but are constantly designing new experiments to gather new data. Data science has become more about analyzing existing data and trying to find some way of connecting it back to the given question. Creating new datasets is typically viewed as out of scope.
Yet, quantitative analysis lends the aura of “truth” organically emerging from massive piles of data examined by unerring machines.
All of the badly biased data, flawed algorithms, random seeds, wildly wrong estimations, confirmation bias and myriad other damaging influences on our results are hidden from the ultimate consumers of those results by the mesmerizing “Apple effect” of beautiful graphs that convey hard certainty in what may amount to nothing more than random guesses based on wrong data analyzed by flawed algorithms with bad parameters.
None of this matters, however, because data science has become about lending false credibility to decisions we’ve already made, rather than seeking out what our data tells us.
In short, we search our data until we can find evidence to support our preordained conclusions, wrapping them in the false security of “big data” and the assumption that from large enough data emerges indisputable “truth.”
Putting this all together, as data science matures it must become far more like the hard sciences, especially in its willingness to expend the resources to collect new data and ask the hard questions, rather than merely lending a veneer of credibility to preordained conclusions.
We must recognize that much like a photograph constructs reality rather than capturing truth, so too is any given dataset merely one possible lens through which to observe the world.
In the end, data is not the same as “truth.”
|
a0765afa7a86bcc25ba1aab219a7bd9d | https://www.forbes.com/sites/kalevleetaru/2019/05/28/facebook-is-already-working-towards-germanys-end-to-end-encryption-backdoor-vision/?utm_source=amerika.org | Facebook Is Already Working Towards Germany's End-to-End Encryption Backdoor Vision | Facebook Is Already Working Towards Germany's End-to-End Encryption Backdoor Vision
Spiegel Online reported last week on comments by Germany’s Interior Minister Horst Seehofer proposing greater governmental access to end-to-end encrypted communications, such as those of WhatsApp and Telegram. While his comments represent merely one lawmaker's thoughts and the encryption community has vehemently objected to encryption backdoors and client application access, the reality is that at its annual conference earlier this month, Facebook previewed all of the necessary infrastructure to make Germany’s vision a reality and even alluded to the very issue of how Facebook’s own business needs present it with the need to covertly access content that has been protected through end-to-end encryption directly from users’ devices. Could Germany’s backdoor vision be closer than we might imagine?
I have long suggested that the encryption debate would not be ended by forced vulnerabilities in the underlying communications plumbing but rather by monitoring on the client side and that the catalyst would be not governmental demands but rather the needs of companies themselves to continue their targeted advertisements, harvest training data for deep learning and combat terroristic speech and other misuse of their platforms. Moreover, merely breaking encryption would not offer nearly as many opportunities for mass societal-scale surveillance as monitoring on the edge.
While it was little noticed at the time, Facebook’s presentation on its work towards moving AI-powered content moderation from its data centers directly onto users’ phones presents a perfect blueprint for Seehofer's vision.
Touting the importance of edge content moderation, Facebook specifically cited the need to be able to scan the unencrypted contents of users’ messages in an end-to-end encrypted environment to prevent them from being able to share content that deviated from Facebook’s acceptable speech guidelines.
This would actually allow a government like Germany to proactively prevent unauthorized speech before it is ever uttered, by using court orders to force Facebook to expand its censorship list for German users of its platform.
Even more worryingly, Facebook’s presentation alluded to the company’s need to covertly harvest unencrypted illicit messages from users’ devices without their knowledge and before the content has been encrypted or after it has been decrypted, using the client application itself to access the encrypted-in-transit content.
While it stopped short of saying it was actively building such a backdoor, the company noted that when edge content moderation flagged a post in an end-to-end encrypted conversation as a violation, the company needed to be able to access the unencrypted contents to further train its algorithms, which would likely require transmitting an unencrypted copy from the user’s device directly to Facebook without their approval.
Could this be the solution Germany has been searching for?
While Facebook’s presentation reflected preliminary research rather than a finished production product, all of the necessary pieces of Germany’s desired surveillance platform are there.
In fact, by enabling the proactive censorship of speech before it is ever uttered, Facebook’s platform would actually go beyond the country’s wildest dreams.
Putting this all together, Facebook’s push towards content moderation on the edge is likely to have significant unintended consequences. By raising the specter of on-device content scanning for disallowed speech inside of end-to-end encrypted conversations and in particular sparking the idea of being able to silently harvest those decrypted conversations on the client side, Facebook is inadvertently telegraphing to anti-encryption governments that there are ways to bypass encryption while also bypassing the encryption debate.
In the end, it is almost a certainty that the days of being able to securely converse through end-to-end encryption are coming to a close as companies move their censorship and data harvesting and analysis to the edge.
|
1c5786bdbd17160531c80097317111d1 | https://www.forbes.com/sites/kalevleetaru/2019/06/04/will-increasing-government-censorship-lead-to-a-fragmented-web/ | Will Increasing Government Censorship Lead To A Fragmented Web? | Will Increasing Government Censorship Lead To A Fragmented Web?
As governments across the world increasingly seek to extend their reach to control what is said and seen online, the idea of governments actively censoring the Web has become normalized. While free speech advocates once condemned government intrusion into online communications, those same organizations and activists are increasingly cheering the idea of governments constraining digital speech in order to prevent speech they themselves disagree with. Emboldened by this growing support, governments are increasingly looking beyond their own borders to censor speech globally. As these trends collide, the inevitable result will be a fragmented Web.
In the beginning of the modern Web, its decentralized nature meant there were no global rules on what could be said or seen within its digital borders. Each country developed its own locally enforced rules reflecting its distinct cultural and political necessities.
In turn, the rise of social media created centralized walled gardens that transcended geographic borders and led to the rise of parallel digital states momentarily unbeholden to physical governments.
This brief era of internet freedom meant Silicon Valley could forcibly export American ideals of free speech to the world, giving voice to those living under repressive regimes and autocratic democracies.
Predictably, this brief moment did not last as governments moved swiftly to silence nascent dissent and prevent the freedoms of this new digital medium from interfering with their control over the informational landscape.
The early decentralized era of the Web meant that the repressions of one government had no impact on the speech of another. In the centralized Web of social media, all speech worldwide suddenly had to be acceptable to every other government.
The centralized platforms that once forced free speech upon the governments of the world now found themselves in a race to the bottom to ensure that the speech of every individual worldwide was acceptable to every government worldwide.
In short, rather than bringing free speech to the world, social media stripped those freedoms away from the places they were formerly sacrosanct, ushering in their place censorship and repression.
Despite these downward pressures there have still been international norms and legal understandings that have to date constrained governments from enforcing their speech restrictions on other countries.
Increasingly, however, it is the world’s democracies that are leading the way towards the idea that any government anywhere should have the right to enforce its censorship on the citizens of other countries.
In the EU’s view, any EU lawmaker should have the right to silence criticism of their official governmental actions globally, preventing even American citizens from raising questions about their governance.
If the EU succeeds in these efforts, it will be only a matter of time before the Chinese and Russian governments demand similar concessions.
As more and more governments demand their own right to censor global speech, these censorship demands will become mutually exclusive. Given their global presence, social media companies will come under pressure to enforce irreconcilable legal orders.
Centralized social platforms cannot exist in a world in which one government can leverage a platform’s existence within its borders to force it to censor speech within the borders of another sovereign nation.
The end result of this trend will necessarily be the fragmentation of the Web itself, with social platforms forced to return to the early decentralized days of the Web in which each government could set only the terms for its own citizenry.
This could be achieved through the breakup of social platforms into country-specific platforms that utilize a shared set of protocols and informational exchanges, much like the TCP and HTTP underpinnings of the Web itself. This would prevent governments from having the legal leverage to force global bans, since each franchise would legally exist only within their own borders.
Alternatively, social platforms could attempt to create a similar arrangement through technical means, in which each country would have a specific set of unique censorship rules that control what its citizens see and say without impacting any other. This would replace the current practice of global rules. However, this would be unlikely to satisfy increasing calls from governments for the right to censor speech globally.
Putting this all together, the growing calls for global censorship will increasingly force an international reckoning over sovereignty in the digital age. The most likely outcome is that free speech will give way to the demands of governments, reminding us that for all its once-promised power to transcend geography, even the Web is still based on physical servers residing on sovereign soil.
In the end, the casualty will be free speech.
|
65ebec355b3c9d6bcc86b6c3e37361c5 | https://www.forbes.com/sites/kalevleetaru/2019/06/25/is-there-such-a-thing-as-objective-truth-in-data-or-is-it-all-in-the-eye-of-the-beholder/ | Is There Such A Thing As Objective Truth In Data Or Is It All In The Eye Of The Beholder? | Is There Such A Thing As Objective Truth In Data Or Is It All In The Eye Of The Beholder?
From data science to fact checking we talk today about the pursuit of objective and indisputable “truth.” Whether through the dogged work of human fact checkers chasing down references and reviewing evidence to combat social media falsehoods or algorithms wielded by data scientists churning petabytes of data into solitary answers, our modern world is premised upon the idea that there is “truth” that can be found through enough analysis of data. Yet as data science increasingly reinserts human judgement over algorithmic purity and as the digital ecosystem reverses from its brief objective experiment back to its subjective past, is it time to accept that there is no such thing as objective truth?
Nowhere is the idea of “truth” more apparent than in the world of “big data.”
The commercial world’s fascination with big data stems from the myth that from sufficiently large volumes of data, the mathematical purity of massive software algorithms can discover the hidden patterns of our lives.
This is of course a mythical falsehood.
The reality is that any dataset can be sufficiently filtered to yield any answer. There is no truth, merely the choice of a data scientist of which reality from myriad possible realities to select.
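A toy sketch of this point follows, assuming invented numbers and pandas as the tooling: the same dataset, filtered two defensible ways, supports opposite conclusions. None of the names or figures come from the article.

```python
import pandas as pd

# Invented figures: four rows of "sales" data across two regions and two years.
df = pd.DataFrame({
    "region": ["north", "north", "south", "south"],
    "year":   [2018, 2019, 2018, 2019],
    "sales":  [100, 70, 40, 160],
})

# Analyst A restricts the data to the north region and reports decline.
north = df[df["region"] == "north"].groupby("year")["sales"].sum()
print("North only:", north.to_dict())    # {2018: 100, 2019: 70}  -> "sales are falling"

# Analyst B aggregates every region and reports growth.
overall = df.groupby("year")["sales"].sum()
print("All regions:", overall.to_dict()) # {2018: 140, 2019: 230} -> "sales are growing"

# Both computations are mathematically correct; the "answer" was chosen
# by the filter, not uncovered by the data.
```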
As companies first became aware of the enormous bias issues of their datasets and the impacts of these biases on everything from model training to algorithmic outcomes, they developed workflows that normalized the idea of adjusting datasets and algorithms until they performed as desired.
In turn, as the idea of manually intervening in datasets and algorithmic configurations to achieve a desired outcome became normalized, companies moved beyond helping their algorithms better capture the real world to trying to elevate their algorithms above the real world to a utopia defined by their corporate values. Rather than build algorithms that mirror the biased world in which they exist, why not build algorithms that reflect the idealized world that a company wishes it could create?
Scientists have sought since time immemorial to make order out of chaos by harnessing data, mathematics and human creativity to make sense of our complex world. The general public, however, was once far less interested in the idea of “truth.”
It is remarkable to realize that the general public’s obsession with the search for “truth” and objectivity in their understanding of the world is a relatively new construct, dating back to the post-WWII era.
In the United States, the news media was “born partisan and remained, for much of its history, loud, boisterous and combative.” The public did not turn to the news for cold objective evidentiary reporting. There was no conception of the idea of “truth” based in “data.” Instead, it turned to the news to see a biased reflection of the world that matched its own distinctive worldview. Each person would see the world through a distinct lens that reflected their own conceptions, understanding, narratives and beliefs. The search for “truth” was the search for “those who share my views.”
It is only in the last three quarters of a century that the public has conceived of objective information and the idea of being presented evidence and left to make their own decisions. Suddenly every citizen became a scientist, presented with an array of observations and evidence and left to make their own decisions about what it all meant. Yet in the absence of information literacy, few possessed the experience and expertise to navigate these complex new informational waters.
In turn, data scientists have become the new citizen scientists, tasked with weaving piles of evidence into stories but all-too-often lacking the domain expertise and informational literacy to see beyond their lines of code to the actual reality in which their questions exist.
It is a decidedly modern ideal that there could be such a thing as objective information in which a reporter’s job is merely to present the facts and let readers make up their minds.
In many ways the return of the digital world to a world of opinions and disputed falsehoods merely represents a return to journalism as it was, representing a growing end to the brief 75-year experiment of objectivity.
In a similar fashion, the early data-driven revolution introduced the idea of giving voice to data. Of putting aside preordained beliefs and human intuition and placing data front and center of the decision-making process.
Over time, as companies came to understand just how biased their datasets were and the unfortunate truth that a biased world leads to biased data that may not comport with corporate values, they too returned to the past world of human judgement, representing a rapid conclusion to the brief experiment of actually listening to data.
This raises the question: what, precisely, is the role of the data scientist or fact checker today?
Is data science still about giving voice to data or is it about coaching our data until it says the “right” thing?
If there is no longer an objective truth to be uncovered in our data or if we are no longer interested in listening to the voice of data that may tell us uncomfortable truths, what is the point of even turning to data?
In similar fashion, as we think about the falsehoods that permeate the digital world today, as news returns to its subjective roots, is the idea of using fact checking to identify “truth” something that is even relevant to such a model?
In particular, as social media connects the world together, creating an ever-more diverse cyber citizenry, does the very idea of a digital falsehood still have meaning in a world where so much of what we call “truth” is defined by our worldviews?
What about the world of data science, in which any conclusion can be supported with the right filtering of the right dataset? Is there such a thing as “truth” in a world in which mathematically correct algorithms can be applied to statistically sound datasets to produce polar opposite narratives?
Putting this all together, is there such a thing as objective truth? Or is the biggest lesson of the digital era that “truth” is merely in the eye of the beholder?
Just as journalism has changed in the digital era to return to its subjective roots from its brief objective experiment, so too has data science reinserted human judgement, replacing its once-sacrosanct ideal that data alone represented truth.
In the end, perhaps we should take solace in the fact that we aren’t heading into the unknown.
We are merely returning to our past.
|
6316e5a28bc3da611496e08b0f587cf8 | https://www.forbes.com/sites/kalevleetaru/2019/07/07/a-reminder-that-fake-news-is-an-information-literacy-problem-not-a-technology-problem/?fbclid=IwAR0oZl6bocVF8WC_BZ8ubKXkaX5ipJWHovSY1PBx7RohQfFKKkP9ZLOANYw | A Reminder That 'Fake News' Is An Information Literacy Problem - Not A Technology Problem | A Reminder That 'Fake News' Is An Information Literacy Problem - Not A Technology Problem
Beneath the spread of all “fake news,” misinformation, disinformation, digital falsehoods and foreign influence lies society’s failure to teach its citizenry information literacy: how to think critically about the deluge of information that confronts them in our modern digital age. Instead, society has prioritized speed over accuracy, sharing over reading, commenting over understanding. Children are taught to regurgitate what others tell them and to rely on digital assistants to curate the world rather than learn to navigate the informational landscape on their own. Schools no longer teach source triangulation, conflict arbitration, separating fact from opinion, citation chaining, conducting research or even the basic concept of verification and validation. In short, we’ve stopped teaching society how to think about information, leaving our citizenry adrift in the digital wilderness increasingly saturated with falsehoods without so much as a compass or map to help them find their way to safety. The solution is to teach the world's citizenry the basics of information literacy.
It is the accepted truth of Silicon Valley that every problem has a technological solution.
Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.
Sadly for the Valley’s technological determinists, this is far from the truth.
Unfortunately, this mindset has survived to drive today’s “fake news” efforts.
Rather than invest in information literacy, the Valley has doubled down on technological solutions to combating digital falsehoods, focusing on harnessing legions of “fact checkers” and turning to Website and content blacklists, algorithmic tweaks and other quick fixes that have done little to turn the tide.
The problem is that technology can only mitigate the symptoms, it cannot address the underlying cause of digital falsehoods: our susceptibility to blindly believing what we read on the Web and our failure to verify and validate information before we share or act upon it.
Why is it that a teenager in their parent’s basement halfway across the world can anonymously post a statement to social media falsely attributed to a head of state and have that commentary go viral, spread to the mainstream press and even influence international political debate without anyone stopping to ask whether there is a shred of truth to what they are reading?
How is it possible that the nation’s most prestigious scholars and scientists at preeminent research institutions and universities could all suspend their disbelief and blindly believe that an anonymous Twitter account claiming to be a secret society “resisting” their government was everything it claimed to be without the slightest bit of verification? For all our societal chuckling about those who fall for “Nigerian prince” email scams, all it took was a couple of anonymous Twitter accounts claiming to be fellow researchers to start freely fundraising from the nation’s most respected researchers who never stopped to ask whether any of this seemed in the slightest bit suspicious.
In the early days of the Web societies taught their citizenry not to believe everything they read online, to treat every statement as suspect and not to act upon or share information without verifying it. Today those same societies place enormous pressure on their citizens to believe everything they see on the Web at face value and to share it as widely as they can as quickly as they can, rejecting any contradictory information they might stumble across in the process.
The old adage “Don’t believe everything you read on the Internet” has become “Believe everything on the Web and share it widely.”
Even digital natives who have grown up in the information-saturated online world do no better at discerning the credibility of information or even understanding the most basic concepts of separating paid advertising from objective journalistic reporting.
Suggestions like requiring programming and data science courses in school would certainly create more technically literate citizens, but this is not the same as data literacy and the kind of critical thinking it requires. The ability to write computer code does not magically make someone more resistant to digital falsehoods, just as learning a new human language does little to teach someone how to perform digital triangulation.
Technical literacy is a powerful and important skill in our increasingly technology-driven society but is not the same as information literacy and will not help in the war against “fake news.”
Algorithms can help citizens sort through the deluge of information around them, identifying contested narratives and disputed facts, but technology alone is not a panacea. There is no magical algorithm that can eliminate all false and misleading information online.
To truly solve the issue of “fake news” we must blend technological assistance with teaching our citizens to be literate consumers of the world around them.
Societies must teach their children from a young age how to perform research, understand sourcing, triangulate information, triage contested narratives and recognize the importance of where information comes from, not just what it says.
In short, we must teach all of our citizens how to be researchers and scientists when it comes to consuming information.
Most importantly, we must emphasize verification and validation over virality and velocity.
Unfortunately, all of these concepts are directly antithetical to our modern social media world in which speed and virality grant fame and fortune, while due diligence and verification yield either silence or a deluge of hate speech from those whose false narratives are countered.
Putting this all together, solving the epidemic of digital falsehoods cannot be done through technology alone. No magical algorithm will rid the Web of its false and misleading narratives nor will teaching the public to program have any impact on their ability to discern truth from fiction.
Instead, today’s grand challenge of combating “fake news,” misinformation, disinformation, digital falsehoods and foreign influence requires a very human solution. It requires teaching society’s citizenry the basics of information literacy and how to think about the information they consume.
Most importantly, it will require navigating the existential contradictions of today’s social media platforms obsessed with velocity and virality against the verification and validation that form the basis of information literacy.
A more information-literate society would likely inflict considerable economic harm on today’s viral-obsessed social platforms that thrive on digital falsehoods, meaning Silicon Valley will put up stiff resistance to any such shift.
In the end, the only way to truly begin to combat the spread of digital falsehoods is to understand that they represent a societal rather than a technological issue and to return to the early days of the Web when we taught society to question what they read online.
|
5693460ccf4600f3321045aebd84e34b | https://www.forbes.com/sites/kalevleetaru/2019/07/08/as-workers-are-increasingly-treated-like-robots-where-will-the-breaking-point-be/ | As Workers Are Increasingly Treated Like Robots Where Will The Breaking Point Be? | As Workers Are Increasingly Treated Like Robots Where Will The Breaking Point Be?
Workers today are increasingly being treated like robots. The data revolution that has led to unprecedented efficiency in companies’ digital operations is now being applied to their human workforces. From clocking bathroom breaks to measuring keywords and mouse clicks, from measuring customer service employees’ vocal stress to monitoring real-time movements of their employees, companies are increasingly treating their employees like disposable robots that can be steadily nudged towards ever-greater efficiency before being run into the ground and discarded for a newer replacement. Where will the breaking point be?
The same digital surveillance state that monitors us on the Web has increasingly come for the workforce. In place of monitoring, moderating, mining and manipulating us for monetization, companies are instituting Orwellian surveillance workplaces designed to eke every ounce of productivity possible out of their workers, leaving them utterly exhausted and in a constant state of fear of being replaced.
The gig economy has been particularly aggressive in utilizing gamification and behavioral profiling to push its workers to their breaking points, manipulating them into placing the company’s profits ahead of their own self-interests.
Warehouse workers can now be tracked in real-time, utilizing digital surveillance not only to measure how many boxes they pack or items they pick, but how many hand movements it takes them to do so, allowing companies to extract efficiencies even from the physical movements of the human body.
Even office workers are no longer immune, with companies tracking how many keystrokes they type, how many times they use the restroom and who they spend their time talking with. Many Silicon Valley companies locate restrooms, break areas and other spaces such that workers need to pass through keycarded doors, ensuring companies know where they are at any moment. Many utilize phone-based beacons to maintain real-time logs of where every employee is at any moment and how they spend their days. Even some government agencies are now turning to such location tracking to keep tabs not just on how much time employees are spending at their desks but who they meet with throughout the day, identifying employees who are spending too much time with people outside their groups.
To be fair, none of these practices is new. Companies have long striven to wring efficiency out of their operations, and “efficiency managers” were all the rage three-quarters of a century ago as companies leveraged then-novel tabulating processes and systems research to view their employees more as robots than people.
Digital technology has merely accelerated this trend and made it possible to track an organization’s entire employee footprint in real-time at unprecedented resolution.
More importantly, it has made it possible for algorithms to manage employees, eliminating compassion and common sense and focusing exclusively on profit.
This management mechanization has coincided with an economic period in which companies have had the upper hand over their employees, allowing them almost unfettered freedom to roll out these new practices without fear.
As companies treat their workers more like robots, it removes the serendipitous discovery and creativity that emerges from having the time to think and from interacting with others across the company.
Companies once encouraged their workers to interact, creating common shared spaces and in some cases even scattering divisions to ensure they mingled with other groups to seed new ideas. Creativity was encouraged and many companies offered programs that allowed workers to test out new ideas with the company’s blessing.
Instead, companies today are increasingly heading in the opposite direction, returning to an era in which employees were seen as worker bees expected to sit in their cubicles or work spaces and perform their assigned tasks with maximum efficiency, wasting no precious time on thought or collaboration.
This has a chilling effect on creativity. It is no coincidence that many of the same companies that have so openly embraced surveillance management are those that have struggled in recent years to innovate.
Where will the breaking point be? At what point will employees say enough is enough and rebel?
Could it be that in a globalized economy, in which companies can simply shift jobs anywhere in the world, worker rights and privacy are destined to be nothing more than a quaint memory of a bygone era? Or, as the economy improves, will companies be forced to scale back their efficiency ambitions?
Putting this all together, companies are increasingly leveraging the same surveillance state that tracks us online to monitor us in the workplace, turning to algorithms to watch their workforces in real-time and constantly nudge their employees towards maximal efficiency at the expense of creativity and innovation.
In the end, perhaps we really should look forward to robots taking our jobs.
|
80a1a17531d7e82972feffe2dc3114f1 | https://www.forbes.com/sites/kalevleetaru/2019/08/05/computer-science-could-learn-a-lot-from-library-and-information-science/ | Computer Science Could Learn A Lot From Library And Information Science | Computer Science Could Learn A Lot From Library And Information Science
Computer science curriculums have long emphasized the power of data, encouraging its harvesting and hoarding, pioneering new ways of mining and manipulating users through it, reinforcing it as the path to riches in the modern economy and proselytizing the idea of data being able to solve all of society’s ills. In contrast, library and information science curriculums have historically emphasized privacy, civil liberties and community impact, blending discussion of public data management with private data minimization. Tomorrow’s future technology leaders could learn much from their library-minded colleagues.
As a young computer science student at what was then the #4-ranked computer science program in the nation (today #5), my coursework was filled with all manner of practice and theory on how to acquire, manage and mine the world’s largest datasets.
The focus was on capability, of what "could" be done with data, rather than what "should" be done with data. The idea that a technical achievement should be avoided because it might harm society was never even whispered. The idea that data should be minimized to protect privacy was not even a concept. Secure systems design emphasized how to safeguard data from unauthorized access, but never the concept of how to safeguard the users whose data that was from harm.
Never once was the concept of an Institutional Review Board, or of assessing the societal harm of research, presented, even while security and architectural review boards were a topic of regular discussion.
In contrast, as a doctoral graduate student at the university’s library and information science (LIS) program just a few blocks away, it was like entering an entirely new world.
The concept of societal harm was brought to the forefront the very first semester, emphasizing the idea of avoiding promising research that could cause significant harm to vulnerable communities, the idea of IRB review of research and the notion that even publicly downloadable data like social media datasets still required a full consideration of risks and a complete ethical review.
Algorithms were no longer piles of code, they were a compilation of human assumptions, priorities, worldviews and biases that guided the creation of that particular algorithm rather than one very similar, even if the actual code itself was produced through machine learning. Indeed, these are concepts still absent from computer science today, where models are described as “unbiased” replacements for biased programmers.
Data security was no longer technology-centric “cybersecurity” but rather user-centric “privacy” in which safeguarding data meant safeguarding a user from harm, not merely locking down a server.
Even beyond the concepts of privacy and data, there is a vast world LIS programs can teach computer scientists about how to think about their users.
Front and center in the LIS world is the study of how individuals seek out, consume and act upon information. Artists, for example, don’t just patronize art libraries; they voraciously consume all available material about a subject they are depicting, seeking to understand it not just in the physical dimension required for depiction but in its deeper meaning, motivation and societal impact. While this is little surprise to art majors, such insights into how different disciplines seek out and utilize information offer powerful guidance for designing better information platforms.
A deeper understanding of information behavior can help platforms design systems that are more resistant to the spread of digital falsehoods and avoid common pitfalls.
A few classes in “use and users of information” and a primer in reference librarian training could have helped social media platforms avoid the common pitfalls of the backfire effect in their “fake news” efforts and perhaps even avoided the idea of mob rule virality-based algorithmic prioritization in the first place.
An understanding of the global evolution of how societies have generated, managed, consumed and utilized information throughout history, and especially the ways in which societies across the world have differed in their approaches, can offer powerful guidance in the shaping of today’s informational systems. In place of the Western-centric view of information management, the interplay between information and society in other parts of the world offers myriad lessons for how to combat the spread of digital falsehoods, foreign influence and violence-inducing purposeful manipulation today. The ways past societies addressed information scarcity, and the evolution of the gatekeeping model, also have much to teach platforms struggling with how to moderate their informational free-for-alls. The differences between evidence and interpretation, expertise and experience, and information and knowledge all have much to contribute.
Cataloging theory could help today’s AI researchers contemplate how to build their taxonomical classifiers, while abstracters and reference librarians could impart their immense wisdom and experience on tomorrow’s digital assistants, smart speakers and Q&A systems.
Yet LIS curriculums are about far more than managing archives of physical artifacts and electronic subscriptions. Community engagement has long been a major emphasis, with disciplines like “community informatics” emphasizing how information and communications technologies can empower and strengthen communities. In a digital world in which “worth” is typically defined by “advertiser interest” there is much the major internet platforms could learn from a broader thinking of how their tools empower or repress community and the meaningful changes they could make to better support marginalized and vulnerable communities.
Indeed, much of the harm wrought by social platforms on the vulnerable communities of the world, their contributions to ethnic violence, genocide, hate crimes and other horrors could have been considerably mitigated had the companies from the very beginning approached their designs from community-centric mindsets rather than building a system in their own image and answering each harm with today’s glib “oops our mistake but no-one could have foreseen this” responses. Community informatics researchers study these very issues and many of today’s high-profile social media failures are eerily reminiscent of the topics covered in the classes I myself took years ago.
Sadly, however, as Library and Information Science schools have undergone a wave of rebrandings over the past decade into “iSchools,” this emphasis on data minimization and privacy, use and users of information, community informatics, civil liberties and the human dimension of informational creation and consumption has been steadily eroded in favor of the same harvesting, hoarding, mining and manipulation that were once the exclusive domain of computer science programs.
As LIS schools boost their hiring of computer science graduates, this transition is accelerating. At some schools, LIS scholarship traditions have been relegated to specialty tracks, with core programs looking almost indistinguishable from “light” computer science curriculums.
In what would have been unthinkable during my own tenure, LIS job candidates lured from computer science are increasingly dismissing privacy, societal harm, ethical review and community engagement in favor of data-driven understanding at all costs. One particularly striking LIS job talk I attended featured a computer scientist who proudly advertised how they had bulk harvested vast swaths of major social media sites and subscription services and was redistributing them freely to researchers all across the world in direct violation of legal agreements, excitedly detailed their work on unmasking vulnerable communities, touted their years of work advancing governmental foreign influence campaigns, dismissed the utility and necessity of IRB ethical review and presented a vision for working closely with governments and Silicon Valley companies to leverage LIS approaches to building the ultimate surveillance machine. Rather than being booed from the room, this individual was enthusiastically embraced and was not only hired, but became a research director.
Sadly, as Library and Information Science schools pivot into iSchools and hire waves of computer scientists, the scholarly traditions of the community-centric human focus of LIS are giving way to the data-driven technical focus of computer science.
In the end, there is much computer scientists can learn from the library and information science community. If they hurry, they might just be able to learn some of it before it all gives way to the data-driven wave crashing across academia.
|
d1b987c34b092db332ab42b462a718d1 | https://www.forbes.com/sites/kalevleetaru/2019/08/06/social-media-companies-remind-us-it-is-still-hard-to-replace-humans-with-ai/ | Social Media Companies Remind Us It Is Still Hard To Replace Humans With AI | Social Media Companies Remind Us It Is Still Hard To Replace Humans With AI
Companies have rushed to embrace deep learning’s potential in their efforts to automate their enterprises, often with an eye towards replacing as much of their human workforce as possible or scaling their operations without expanding their hiring. An endless stream of success stories touts AI’s success in replacing an ever-growing array of traditionally automation-resistant jobs, while developers are hard at work finding ways to replace the rest. Yet social media platforms give us pause about the idea that deep learning has quite reached the inflection point of triggering a wave of job displacement.
Silicon Valley has been at the vanguard of the AI revolution, pioneering the developments that have pushed the field forward. At the same time, those companies have also been hiring vast armies of human workers to augment the limitations of those very same algorithms.
Facebook embodies this contradiction. While publicly touting the company’s AI-driven future and investing heavily in building a world-class AI research staff, the company is also rapidly hiring human content moderators. Even as it increasingly deploys AI algorithms to moderate speech on its platform, its community operations staff numbers more than 15,000 and is still growing.
That a company like Facebook that has bet its future on AI and heavily touts its increasing use of AI across its platform is also rapidly increasing the hiring of humans to work alongside those algorithms reminds us that the AI revolution is still in its infancy.
Much like an infant’s simplest movements spark wonderment in new parents, so too do the most basic of AI accomplishments give rise to an image of intelligent machines just around the corner. The simple fact is that while impressive compared to past algorithmic approaches, today’s most advanced AI systems remain primitive compared to even the youngest human child.
Our excitement about deep learning comes from comparing it against past machine solutions rather than seeing it in the context of the humans it seeks to displace. An algorithm that can recognize an image of a dog after being fed hundreds of thousands of diverse training examples is certainly a major accomplishment compared to the state of machine vision 20 years ago. On the other hand, compared to the typical toddler who can learn to recognize the concept of a dog after a handful of encounters, the machine’s accomplishments are far less exciting.
Content understanding showcases both deep learning’s greatest strengths and its greatest weaknesses. Today’s algorithms are light years ahead of where they were just a decade ago, yet at the same time, they struggle immensely to cope with the complexities of human discourse. Machines still see imagery largely through the lens of metadata subject tags applied through classifiers, while textual posts are understood through similar classification or simple embeddings.
Machines have yet to approach anywhere near the depth of contextualization, reasoning skills and understanding of the youngest toddler or even the family dog.
For all its impressive feats, mechanized content understanding remains simplistic and brittle.
Tuned properly, such algorithms can help social media companies triage the global deluge of real-time posts, but humans are still frequently needed to help lend context to whether a given image represents permitted clinical reporting or prohibited glorification.
In the end, perhaps the greatest lesson is that as Silicon Valley rushes us forward towards our AI future, even it is hiring armies of humans to work alongside those algorithms. For the time being at least, our automated future will look a lot less like robots replacing humans and a lot more like symbiotic workforces blending the best of both our worlds.
|
0eb2b55fd1aa339d1b61ac231d7bc4b8 | https://www.forbes.com/sites/kalevleetaru/2019/08/20/could-public-reference-librarians-help-us-combat-digital-falsehoods/ | Could Public Reference Librarians Help Us Combat Digital Falsehoods? | Could Public Reference Librarians Help Us Combat Digital Falsehoods?
As society struggles with how best to combat the spread of digital falsehoods in the form of misinformation, disinformation, “fake news” and foreign influence, much of the emphasis to date has been on national-scale professional fact checking Websites. While these efforts have helped shed light on the most viral of online rumors, their small staffs and national focus limits their impact. At the same time, all across the United States there are public libraries serving their local communities that have reference librarians that specialize in helping their patrons navigate today’s informational deluge, doing everything from helping them locate relevant reputable information to performing basic research yielding evidence-based answers reflecting the best available scholarly and scientific information. In 2017 alone these librarians answered more than 240 million queries. Could the answer to today’s deluge of digital falsehoods lie with our nation’s public libraries and their reference librarians that have served us for more than a century?
To those generations born in the digital age, libraries are often dismissed as outdated museums to a past era, warehouses that rent physical books and DVDs. In reality, libraries are not about artifacts, they are about people. Libraries have long served as central pillars of their communities, turned to for entertainment and enlightenment and whose staff were local to their communities and understood the unique local needs of their patrons.
It is a curious artifact of the modern age that we as a society have these incredible personalized resources in our local communities all across the nation staffed by our next-door neighbors who know us by name, yet increasingly when we are in need, we place our trust in total strangers halfway across the world. Why is it that we are more comfortable today trusting a random Website operated by a scammer in a foreign country trying to mislead us for profit rather than our own neighbors who are trained professionals in our own backyard whose job it is to help us?
Why is it that local community has given way to impersonal globalization and what might happen if we return to the community-based public libraries that helped build our nation?
Most importantly, could public libraries and their reference librarians open a new front in the war against digital falsehoods? Instead of relying exclusively on a small group of centralized fact checking sites that focus primarily on national-scale stories, what if communities turned to their public libraries to confirm or debunk the stories that matter the most to them? While knowing that a satirical story about Bigfoot being sighted in a national forest is false might be important to a national fact checking site, to a local community a far more important question might revolve around a rumor that a new state law just went into effect that will close the local fire department or whether the local superstore is really holding a 50% off sale next weekend or whether a proposed Congressional bill would really outlaw health insurance. These are the kinds of stories that have tremendous impact at the local level across the nation yet are ill-suited for today’s small national fact checking staffs.
What if libraries promoted themselves as local community fact checkers, where library patrons can forward their most pressing rumors to be confirmed or debunked? Where instead of merely offering a true or false rating, a helpful reference librarian is happy to walk the patron through the available evidence and why they reached the conclusion they did? Most importantly, for questions that don’t have singular answers, reference librarians can help guide patrons to reputable information from all sides of a debate to allow them to make their own decisions.
Even for simple questions that can be readily answered online, reference librarians can help their patrons understand how to conduct online research, from assessing the reliability of a particular resource to triangulating across conflicting sources. In essence, reference librarians are the guides that can help our nation’s citizenry more effectively and safely make use of the informational riches of the digital world while avoiding the digital falsehoods and foreign influence that have increasingly corrupted it.
The Web’s ease of use belies the ease with which it can sidetrack users with falsehoods, yet the average Web user typically wrongly believes they are able to easily spot false information. This is an area where reference librarians have so much to offer their patrons, but it can be hard to convince users why they should turn to information professionals to expand their own information literacy.
While many public libraries offer some form of information literacy training for their patrons, increasing public awareness of the dangers of digital falsehoods, fraud and foreign influence and growing demand among Web users for help in separating fact from fiction presents an ideal moment for libraries of all sizes to step forward and play a central role in making their communities more information literate.
Instead of focusing all of our efforts on limited centralized fact checking efforts and automated algorithmic “fake news” filters, we should be teaching society information literacy, providing them the tools to navigate the digital world on their own and offering them the help of knowledgeable local reference librarians to guide them on their way.
In the end, perhaps the answer to the deluge of digital falsehoods, fraud and foreign influence lies in a return to our nation’s public libraries and their reference librarians that in 2017 alone answered more than a quarter-billion questions for their local communities.
|
934e84884b0928885ec379adc6212811 | https://www.forbes.com/sites/kamranrosen/2019/06/25/where-the-top-democratic-primary-candidates-call-home/ | Where The Top Democratic Primary Candidates Call Home | Where The Top Democratic Primary Candidates Call Home
From Joe Biden’s 6,850-square-foot mansion overlooking a pond in Wilmington, Delaware, to Mayor Peter Buttigieg’s twice-mortgaged fixer-upper in South Bend, Indiana, the homes of the Democratic candidates can tell us a lot about them.
With a record 24 candidates headed into this week’s debates, every extra bit of information to help understand and differentiate these individuals is worth noting.
To understand more about where the candidates call home, I evaluated and ranked the residences of ten top Democratic candidates, based on current aggregate polling figures. Ownership was confirmed using property records filed with local city or county assessors' offices. Home price estimates are from Zillow and homes are ranked by their estimated value.
1. Senator Kamala Harris
Hometown: Los Angeles, California
Estimated Home Value: $4.8 million
Kamala Harris's luxury home in Brentwood, Los Angeles Google Maps
Smashing into the top spot is Senator Kamala Harris, who lives in a four-bedroom, 3,505-square-foot spread in the Brentwood neighborhood of Los Angeles. An aerial view of the posh neighborhood reveals a pool in nearly every yard and even an outdoor basketball court at her neighbor's house. Senator Harris lives with her husband, intellectual property lawyer Doug Emhoff, who, along with a trust bearing both their initials, is the official owner of the house. The couple reported $1.9 million in joint earnings last year.
2. Senator Elizabeth Warren
Hometown: Cambridge, Massachusetts
Estimated Home Value: $3.6 million
Elizabeth Warren's Cambridge residence. Google Maps
The price of Elizabeth Warren’s home has probably received more attention than any other candidate this year, with several publications criticizing her for railing against income inequality while owning a home they claim was worth as much as $5 million.
While I won't touch the critiques, it's worth noting that the official assessed value is decidedly lower, and Zillow estimates the home is worth $3.6 million. However, the three-story Victorian is a sizable 3,728 square feet and is a 15-minute walk from Harvard's campus. Bought for $447,000 in 1995, Warren's house has proven to be a fantastic real estate investment, appreciating by a whopping 705% (compared to a median home value growth of 215% in greater Boston during that same time period).
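For readers who want to sanity-check an appreciation figure like the one above, here is a minimal Python sketch; the helper function is purely illustrative, and the numbers are simply the ones cited in this article:

```python
def appreciation_pct(purchase_price: float, current_value: float) -> float:
    """Percentage gain from the purchase price to the current estimated value."""
    return (current_value - purchase_price) / purchase_price * 100

# Figures cited above: 1995 purchase price vs. the Zillow estimate.
print(f"{appreciation_pct(447_000, 3_600_000):.0f}%")  # prints roughly 705%
```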
3. Mayor Bill de Blasio
Hometown: Brooklyn, New York
Estimated Home Value: $2.17 million
Mayor Bill de Blasio's Park Slope residence in New York City (featured center). picture alliance via Getty Images
While Bill de Blasio currently spends his time in the luxe digs of Gracie Mansion, his longtime home in Park Slope, Brooklyn, is already well known – the current New York City mayor made headlines when he put it up for rent for $5,000 per month back in 2014. The 1,248-square-foot home is modest in size compared to others on this list, though still quite sizable for New York City living. The house was purchased in 2000 for $450,000, so de Blasio can take comfort in knowing that even if he doesn't land a new residence at 1600 Pennsylvania Avenue, his old one will do just fine.
4. Former Vice President Joe Biden
Hometown: Wilmington, Delaware
Estimated Home Value: $1.87 million
Aerial view of Joe Biden's Wilmington home. Google Maps
With 4.1 acres of land and 6,850 square feet overlooking a pond, it's safe to say former Vice President Biden is living large. His house is shielded from the road by a ⅛-mile-long driveway and has a pool in the backyard. Interestingly, despite its size, the house has only three bedrooms and four bathrooms, meaning Joe Biden and his wife Jill may do more hosting than living in their luxury Delaware home.
5. Senator Cory Booker
Hometown: Newark, New Jersey
Estimated Home Value: $435,377
Cory Booker's home (featured middle) in Newark, New Jersey. Google Maps
When it comes to Senator Cory Booker's Newark home, one detail appears to supersede all others: Does he actually live there? While the senator is on record as having lived at his current Newark address since 2013 (located in one of Newark's “toughest neighborhoods,” according to his communications director at the time), opponents and even some locals have claimed he's seldom seen there (according to a 2013 piece by Buzzfeed). Senator Booker's campaign maintains he is regularly at his Newark home, where he primarily resides. The 2,818-square-foot home has benefited greatly from the housing rebound—up an estimated 154% since Booker bought it in 2011.
This story has been updated to reflect comments by Senator Booker's campaign.
6. Senator Bernie Sanders
Hometown: Burlington, Vermont
Estimated Home Value: $406,129
Bernie Sanders's home in Burlington, Vermont. City of Burlington Assessor's Office
Much like his high-profile progressive colleague Elizabeth Warren, Senator Bernie Sanders has received criticism over the value of his home—or more specifically, homes. While the senator has owned his 2,353-square-foot, four-bedroom home in Burlington since 2009, he has been known to split his time between two other homes: one in Washington D.C. and the other, a lakefront property in nearby North Hero, Vermont. While $406,000 isn’t quite what everyone might consider lavish wealth (it’s about 27% higher than the median home price in Burlington), it appears the senator does well with his real estate.
7. Beto O’Rourke
Hometown: El Paso, Texas
Estimated Home Value: $397,893
The "Williams House", where Beto O'Rourke currently resides in El Paso, Texas. Google Maps
Dubbed the “Williams House” after noted banker Joseph Williams, Beto O'Rourke's pueblo-style home in the Sunset Heights neighborhood of El Paso is a character in itself. O'Rourke, who announced his presidential candidacy in an interview from inside the home, was quick to boast of its history as a meeting place between Mexican revolutionary Pancho Villa and U.S. General Hugh Scott back in 1915. The stucco-walled, 4,656-square-foot home features five bedrooms, a wrap-around porch and a sizable patio area.
8. Senator Amy Klobuchar
Hometown: Minneapolis, Minnesota
Estimated Home Value: $352,387
Amy Klobuchar's Minneapolis home. Google Maps
Amy Klobuchar is about as Minnesotan as they come. Born in Plymouth, Minnesota, she took a brief detour to Yale before returning to her home state to work as a private lawyer, and later became Minnesota’s first female senator. Her 2,205-square-foot home is located just across the river from downtown Minneapolis, in the Marcy Holmes neighborhood. At $352,387, Senator Klobuchar’s home is comfortably above the estimated Zillow median home value of $266,300 for the city.
9. Senator Kirsten Gillibrand
Hometown: Brunswick, New York
Estimated Home Value: $312,260
Kirsten Gillibrand's Brunswick home. Google Maps
With five bedrooms, 3,436 square feet and a 2.65-acre lot for just over $300,000, Senator Kirsten Gillibrand of New York has the best home size per dollar on this list. Benefiting from the relatively low home prices of the Albany Metro area (median home price there was $217,600, according to Zillow), Senator Gillibrand’s home is conveniently just a 20-minute drive from the state’s capital, where her offices are located. Purchased in 2011 for $335,000, Gillibrand’s home is the only one on this list which has decreased in value since purchase, likely due to the net migration out of New York state.
10. Mayor Pete Buttigieg
Hometown: South Bend, Indiana
Estimated Home Value: $242,515
Mayor Pete Buttigieg's South Bend home. Google Maps
Rounding out the list is Pete Buttigieg, the mayor of South Bend, Indiana, who, when returning from the campaign trail, is welcomed into a 2,480-square-foot house just across the street from the St. Joseph River. Residing there with his husband and two dogs, Mayor Pete has shown some real estate savvy: his house has almost doubled in value since he bought it for $125,000 in 2009 (he has done extensive renovations).
|
b78ee0f2a0709ce33d2a8f6e56e7c62e | https://www.forbes.com/sites/kamranrosen/2019/06/30/europe-completes-its-first-ever-blockchain-real-estate-sale-for-65-million/?sh=10766ce15a89 | Europe Completes Its First Ever Blockchain Real Estate Sale for €6.5 Million | Europe Completes Its First Ever Blockchain Real Estate Sale for €6.5 Million
A photo illustration of the digital Cryptocurrency, Ethereum. (Photo Illustration by Yu Chun Christopher Wong/S3studio/Getty Images) 2018 S3studio
Last week, the AnnA Villa in Paris made history by becoming the first ever European property to be sold entirely via blockchain transaction.
The luxury building, located in the city's Boulogne-Billancourt district, was valued at €6.5 million and was sold to the French real estate companies Sapeb Immobilier and Valorcim. The process involved first transferring ownership of the building to a joint-stock company (SAPEB AnnA), then dividing the company into 100 tokens distributed among the owners. Each token can be further broken down into 100,000 units, meaning individual shares of the building can be bought and sold for as little as €6.50.
The deal – which was managed by French blockchain investment platform Equisafe – was run on the Ethereum blockchain, and was the latest of several worldwide efforts to bring real estate sales onto blockchain technology. Last year, a $30 million Manhattan property was also tokenized on Ethereum, and in January of this year, a luxury resort in Aspen, Colorado raised $18 million through a security token offering.
Real estate has often been touted as an industry ripe for tokenization, as its low liquidity and high barrier to entry deter many potential investors. Breaking real estate into fractional ownership would allow the general public easy access to small shares, enabling property to be traded similar to other exchange-based securities.
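As a rough sketch of how fractional ownership lowers the entry price, the snippet below divides a property's value into tokens and sub-units; the property value, token count and units per token are hypothetical placeholders, not details of the AnnA Villa deal:

```python
def min_tradable_share(property_value: float, tokens: int, units_per_token: int) -> float:
    """Value of the smallest tradable unit of a tokenized property."""
    return property_value / (tokens * units_per_token)

# Hypothetical example: a 1,000,000 EUR property split into 100 tokens,
# each divisible into 10,000 units -> a 1.00 EUR minimum share.
print(f"{min_tradable_share(1_000_000, 100, 10_000):.2f} EUR")
```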
A search for real estate tokens reveals dozens that are already trading on secondary markets. For instance, the digital equities group Elevated Returns plans to tokenize $100 million of real estate in Thailand this year, while UK-based BRIKCOIN hopes to use blockchain technology to build affordable housing.
Beyond the advantages of liquidity, tokenizing the real estate process provides many other benefits over a traditionally antiquated process. For instance, in the sale of the AnnA Villa, much of the cumbersome legal documentation involved with selling property (such as notarized deeds and proof of identity) was encrypted and recorded on the blockchain. If scaled effectively, the time and cost saved from not having to manually verify this information could prove to be enormous. Equisafe goes as far as to claim individuals will be able to create investor profiles and access offers in less than half an hour.
While the real estate industry is still very new to blockchain, it appears there is sustained interest in what the technology could provide moving forward. With precious few experiments to glean insight from, it’s likely this deal will have many eyes observing the outcome.
|
c39080bb8d210275184f0b8f22fc86df | https://www.forbes.com/sites/kamranrosen/2019/06/30/you-can-buy-the-private-island-from-fyre-festival-for-118-million/ | You Can Buy The Private Island From Fyre Festival For $11.8 Million | You Can Buy The Private Island From Fyre Festival For $11.8 Million
An aerial shot of the island made famous in the Fyre Festival announcement video. HG Christie Limited
Unless you avoid social media and streaming services entirely, you've likely heard of the colossal failure that was Fyre Festival.
With a promotional video featuring A-list models partying on a private Caribbean island (touted as having been owned by Pablo Escobar), Fyre Festival founder Billy McFarland and partner Ja Rule sold thousands of tickets on the promise of an island paradise “on the boundaries of impossible.”
The opening of the festival’s now-famous video announcement reveals an aerial shot of a pristine island known as Saddleback Cay – one of 365 such islands located in the Bahamas chain known as the Exumas. Spanning 35 acres, with a protected bay and 7 beaches, the private island is almost cartoonishly perfect, having all the features one imagines in a Caribbean getaway.
While the festival turned out to be a total fraud, the island which made it famous most certainly is not – it is currently listed for sale for $11.8 million by Bahamas realtor HG Christie.
It is worth noting that while Saddleback Cay is indeed the island featured in the marketing promo for the festival, it is neither the island famous for being part of Pablo Escobar’s smuggling route (this is nearby Norman’s Cay), nor is it the final location of the festival itself (this is Great Exuma).
The largest beach of Saddleback Cay. HG Christie Limited
Despite the mountain of bad press for the festival, Saddleback Cay has actually garnered a lot of interest since the dual releases of the Fyre and Fyre Fraud documentaries.
“I actually had a buyer the next day after it aired”, says listing agent John Christie. “They came down to check it out but it ended up not being right for them.”
Indeed, while the island is undoubtedly beautiful, owning a private island might not be “right” for many. Although the island does have a 500-square-foot cottage and 2 bathrooms, the property is primarily for sale for the land, and would likely require development from any potential owners.
A small lookout tower over one of the beaches in Saddleback Cay. HG Christie Limited
Located in the northernmost section of the Exuma Cays, the island has easy access by boat or by air from New Providence (the most populous island of the Bahamas), and nearby Norman's Cay has an operational airstrip.
Despite the good access, it appears at least part of the reason the island is still on the market is technical complications. The current owners have the property wrapped into a company, and buying the island requires buying the company as well.
As for whether the debacles of convicted fraudster Billy McFarland will hurt the value of the island, Christie doesn't seem concerned:
“No such thing as bad publicity, right?”
|
699834eefa691e94ae0bab4a6fa249fe | https://www.forbes.com/sites/kamranrosen/2020/02/20/what-you-can-learn-about-selling-homes-from-one-of-americas-top-celebrity-realtors/?sh=3e6b594b79c4 | What You Can Learn About Selling Homes From One Of America’s Top Celebrity Realtors | What You Can Learn About Selling Homes From One Of America’s Top Celebrity Realtors
He’s been ranked Los Angeles’s top realtor—and with over $6.5 billion in residential real estate sales—it’s hard to argue with his track record.
Aaron Kirman, the current President of the Estates Division at Compass, and star of the show Listing Impossible, has made a name for himself over the last two decades selling some of the most luxurious houses in the country—from Frank Lloyd Wright’s Ennis House, to the $65 million Danny Thomas Estate.
So what’s his secret to success? Turns out there’s no one magic bullet—and ultimately, many of the techniques used to sell to the world’s ultra-rich are not that different from your everyday realtor.
Sitting down with Aaron for two hours, I had the chance to pick his brain on what current home buyers want, and how agents and home sellers can use this information to maximize their value on the real estate market.
Selling like a millionaire, I learned, came down to four main principles.
1. Sell The Lifestyle Not The Home
Painting a story of a house is as important as its dimensions. Getty
If you’re focusing on the number of bedrooms and square feet—you’re doing it wrong.
"Storytelling is the best way to sell property," Aaron quickly tells me. “If you can tell a story of that house—the wine cellar, the room, the story in the background—then the house becomes for sale...it's a much better story for buyers and they're much more interested.”
Indeed, the idea of homes as a “lifestyle-first” purchase is backed up by data. The December 2019 issue of the Luxury Market Report found that a key trend in luxury real estate was not size or location, but finding a more “holistic approach to living.” The report found that “[s]avvy homeowners, luxury real estate professionals, developers, architects, and designers are all reporting a significant importance being placed on the infusion of a healthier cadence into our everyday working and living environments.”
In other words, buyers don't just want to see the house they can live in—they want to imagine the life they can live.
“I call myself a lifestyle curator today I swear,” Aaron tells me. “I have to curate a house to make sure it's right for the buyer’s lifestyle. I didn't always have to do that and it adds about 50 percent more work to my job every day. But whenever I do it, I get a high number for a house. And when I don't do it, they get a low number.”
Aaron isn't alone in seeing the addition of curation to his typical workload. Home staging has exploded over the last decade to the point where virtually all listings above a certain price point are now typically staged. A report by the National Association of Realtors (NAR) found that roughly half of realtors surveyed found staging to increase the sales price by between 1% and 10%. The same study found that 83% of buyers' agents said that staging “made it easier to visualize the property as a future home.”
What lifestyle to convey will differ by client, neighborhood and home, but it’s important to stay on top of trends. For instance, the Luxury Market Report also found that “big, bold plants” such as rooftop gardens, as well as natural woods and terra cotta colors all play a large role in conveying a wellness-based lifestyle to buyers.
Aaron is notorious for emphasizing proper staging and lifestyle framing—sometimes bluntly. In the season premiere of Listing Impossible, he didn’t hold back on one of his clients, exclaiming the black satin furniture in one of her bedrooms resembled that of a “sex dungeon.”
Needless to say, the home received an interior makeover.
2. Get Your Brand Out There By Any Means Necessary
The world has migrated online, and selling real estate effectively means having a strong online presence. Getty
When it comes to selling real estate, the name of the game is exposure.
While this previously meant oversized billboards and magazine spreads, the industry has evolved to include YouTube, social media, blogs, search engines—and for top realtors like Aaron—television.
“We have the ability to bring our product to buyers on a national, global and localized scale,” he tells me. “And if we're doing our job right, we're getting to people before they're getting to us.”
The idea of online presence becoming the primary funnel for realtors also appears to be well supported. A survey by NAR on real estate in the digital era found that the most common first action taken by prospective home buyers was searching for a home online—more than twice as popular an action as reaching out to an agent directly. The same study found that social media was considered the best tool for generating “high quality leads” by realtors, surpassing even the MLS itself. This trend may only increase over time: buyers under age 28 were twice as likely as any other cohort to contact agents through social media.
However, the nature of this new media landscape means that the effects of messaging can often be indirect.
“I've had people from Hong Kong—recently from Shanghai—people from Saudi Arabia, reach out to me just based on a story and be like, ‘I really like the way that was curated, what do you have for me?’”
Aaron's self-proclaimed specialty is currently Instagram, where he posts photos of extravagant homes and celebrities to his 287,000 followers—garnering upwards of two thousand likes per post. However, he appears to be in the minority in the real estate industry, where Facebook dominates among realtors who use social media at 97%, compared to just 39% for Instagram.
But it’s not catching up on Instagram that Aaron recommends, but rather being in tune with whatever channel is reaching buyers. When asked about the best channel to reach buyers, he was hesitant to answer.
“If I have to pick one today it’s Instagram,” he finally answers. “With the right to reserve next year...let me see how I do on my YouTube channel.”
3. Don’t Confuse Your Passion with Your Asset
Don't sink money into a project before researching it. Getty
For many individuals, a home is the largest asset they’ll ever own. However, unlike most assets, one’s home is also highly personal—often modified to personal preference and taste.
Yet if you are viewing your home as an investment, Aaron warns, then certain realities need to be remembered when modifying it.
“Evaluate: are there buyers that are going to want the changes? And evaluate what the house is worth versus what you're going to spend and make sure that there's profit margin in there. Because if there's not, you're running on losing time, energy and money.”
For instance, one of Aaron’s clients built a beautiful beachfront house by the ocean. The only problem? They built their house facing backwards, thus eliminating any view of the ocean.
Another client had a beautifully designed home, with one major overlooked detail:
“There was like a great modern house, which should have worked really beautifully, except that she didn't think about the privacy. And so she had all these walls of glass, and the neighbors staring in on her!”
Aaron himself knows how difficult it can be to have foresight on a passion project.
“I over designed a house,” he admits. “I had to change structure, which I wasn't expecting. The structure was super expensive. Once I had to change structure, I had to basically tear it down.”
Avoiding modification pitfalls above all comes down to research. Looking into current real estate trends reveals that open-concept kitchens, granite and stone countertops, and smart home features like video doorbells are very popular. In the luxury market, currently popular features include wellness spaces like meditation and yoga rooms. Knowing whether the addition you want is in line with the market, and how much it will cost, will help to buffer against risk in the event you choose to sell.
In other words, when it comes to home modification: measure twice, renovate once.
4. Know The Nuances
For many properties, it's the tiniest of details that set them apart. Getty
“Where we [realtors] add and provide our value is the nuances. The nuances of the street, the nuances of a house, the nuances of a neighborhood. Those nuances can cost somebody hundreds of thousands—if not millions of dollars.”
Indeed when it comes to what buyers want from realtors, the National Association of Realtors found that helping to find the right home was the most wanted quality— higher than handling paperwork and home price negotiation.
While expertise may seem an obviously desirable trait for realtors, knowing how specific that expertise should be is sometimes difficult. For instance, when evaluating what neighborhoods to live in, 27% of homebuyers under age 28 considered convenient access to vets and outdoor pet spaces to be an important factor. Many buyers will have even more specific requests.
A truly good agent’s expertise should also go beyond knowledge of the present.
“For me when I say nuance, it’s not just about what’s happening at that moment in time,” Aaron says.
Speaking about the rapid overdevelopment of the Hollywood Hills over the last five years, Aaron explains how he foresaw a bubble forming and warned homebuyers.
“I feel like I was able to save my clients that were buying that property. I stopped them from buying dirt in the Hollywood Hills two and a half, two years ago”
Ultimately, whether it's a $50,000 or a $50 million house—the details remain critically important. Knowing the market, the media landscape and how to communicate these to sellers is invaluable knowledge for any realtor, and the key is learning how to tailor these concepts specifically to your situation.
Once that's mastered, there's no ceiling to one's real estate selling potential. As Aaron put it, “It is an interesting and fun game.”
|
cba7334abff9120bb344954ce187fc73 | https://www.forbes.com/sites/karadennison/2021/01/25/4-strategies-to-become-an-inclusive-leader-and-foster-a-diverse-culture-of-high-achievers/ | 4 Strategies To Become An Inclusive Leader And Foster A Diverse Culture Of High-Achievers | 4 Strategies To Become An Inclusive Leader And Foster A Diverse Culture Of High-Achievers
Businessman explaining while male and female colleagues sitting in conference room seen through glass wall at workplace getty
Employees want to feel represented. They want to be a part of a company that understands and respects their race, gender, and beliefs. To foster effective team-building and create a growth-focused culture, companies should incorporate inclusive leadership values.
Diversity creates an opportunity for progressive companies and their employees. Inclusion communicates company values and builds a team that becomes invested in the growth of a company.
Leaders that take on the personal responsibility of embracing diversity and inclusion build trust, earn respect, and cement buy-in from those they lead.
The statistics paint a picture.
Three hundred forty-six companies were surveyed in a 2015 McKinsey & Company research study. The study says, "the average gender representation increased on their executive teams only two percentage points, to 14%, and ethnic and cultural diversity by one percentage point, to 13%.
What's more, many companies are still uncertain as to how they can most effectively use diversity and inclusion to support their growth and value creation goals." We have a long way to go.
Inclusive leaders empower people, capitalize on each employee's strengths, and leverage growth through inclusive principles and values.
In a 2018 study of over 1,700 companies, organizations with a diverse leadership team had 19% higher revenue on average than companies with less diverse leaders, according to Boston Consulting Group.
Here are four practical strategies for leaders to embrace inclusive principles and foster a diverse culture of high-performers.
1. Become self-aware of your unconscious bias.
Leaders are humans first. You grew up and learned certain beliefs. You lived and experienced life during your formative years, absorbing the views and opinions of the world around you.
Whether you realize it or not, you developed beliefs about people who are different from you. Family, friends and even your community growing up expressed their views, which affected you one way or another.
Unconscious bias is defined as "societal stereotypes about groups of people that are outside of your awareness." Your thoughts toward other races and genders may be different than what you would say out loud. Just about everyone has an unspoken bias.
To become an inclusive leader, start with self-awareness of the biases you have. Self-awareness is one of the most powerful traits of successful leaders. If you understand what makes you tick, you can see the bottlenecks before they surface, and learn how to best grow and support your team.
Self-awareness of biases allows you to start working on adjusting your beliefs and doing what's required to understand race, gender, and beliefs that may be different from yours.
Dealing with unconscious bias helps you become a better leader because you don't push those biases onto those you lead. You can also see and address any bias that's happening within your team.
2. Ask questions and embrace self-education.
You don't know what you don't know. The best way to understand what you don't know is to ask questions and educate yourself. Leaders are the first to self-educate, especially on the topics of diversity and inclusion.
When you ask questions, you uncover biases and start to understand experiences that are different than yours.
One of the main things you'll hear African Americans and women say is that it's exhausting having to educate someone before they can have a real conversation about diversity and bias.
There are great books, videos, podcast episodes, documentaries, and training programs that can continue your education on diversity, inclusion, unconscious bias, and marginalized people's daily reality. Do your research and embrace the growth.
3. Root out microaggressions.
When you think about creating a high-achiever culture, picture a team led by a leader who actively works to address and deal with microaggressions.
Microaggression is a term used to describe behavioral or environmental injustices, whether intentional or unintentional, that subtly communicate derogatory or negative attitudes toward stigmatized or culturally marginalized groups.
Microaggressions stem from bias and can destroy trust, develop a bitter culture, and give fuel to racial and gender stereotypes. It can create an entitled attitude or breed resentment that threatens the growth of a company.
Becoming an inclusive leader means you identify microaggressions in yourself and your team. You address them and put policies in place to make it known that they're not acceptable.
Good intentions are not enough — creating an inclusive culture requires affirmative action and a commitment to make everyone feel safe.
4. Understand that everything matters.
The way you communicate as a leader speaks volumes. What you accept as a leader tells your employees everything they need to know. Being intentional and leading by example is how you'll develop more buy-in from your team.
Everything matters when it comes to diversity and inclusion in the workplace, and it starts with leadership. One offhand joke can have a ripple effect. One instance of letting bias shine through can chip away at your company culture. Tolerating microaggressions can cause good employees to leave.
Diversity and inclusion efforts and enforcement can't be taken lightly in any way if you're serious about becoming an inclusive leader. Gender diversity can't be an item on the training to-do list. Inclusive leaders make diversity a priority.
The statistics tell us that "teams with inclusive leaders are 17% more likely to report that they are high performing, 20% more likely to say they make high-quality decisions, and 29% more likely to report behaving collaboratively," according to the Harvard Business Review.
Inclusive leaders help build a modern company of high-achievers and a team that's dedicated to growth.
Different groups of people bring a vast array of skills to companies — embrace them and make them feel safe and respected, and watch your team’s culture and productivity grow.
|
6c1c89a002c1396ccc40a20550fca8ca | https://www.forbes.com/sites/karagoldin/2018/01/17/why-its-important-to-make-time-to-mentor/ | Why It's Important To Make Time To Mentor | Why It's Important To Make Time To Mentor
Facebook CEO Mark Zuckerberg turned to a variety of mentors, including Steve Jobs and Don Graham. (Photo by Justin Sullivan/Getty Images)
Sometimes there aren’t enough hours in the day. Whether I’m working with my team, meeting suppliers or interviewing other business leaders for The Kara Network, my week is jam-packed with things to do.
But no matter how busy I am, I always set aside five hours each week for one very important task: mentoring. Sitting down for a coffee with a young entrepreneur to hear what they’re up to and to offer them the benefit of my experience is one of the most valuable uses of my time.
Of course, there is a selfless element to mentoring. Experience is a unique resource that young entrepreneurs don't have. But the value of mentoring is not all one-sided. Here are four ways you'll benefit from offering your support.
1. You get real insight into the next big thing
As he made the transition from coder to CEO, Mark Zuckerberg spent time with Washington Post Company CEO, Don Graham, to see how leaders behave. In return, Graham says the Facebook founder helped him better understand how to engage people online – valuable information for a newspaper business still adapting to the world of digital.
Young entrepreneurs are often working with cutting edge technology or engaging with the latest trends. As a mentor, you can get first-hand insight into fresh and exciting innovations.
2. You’re networking with the next generation of leaders
Some of today's most successful CEOs, including Mark Zuckerberg, Google's Larry Page and Salesforce's Marc Benioff, credit Steve Jobs with helping them grow as leaders. The Apple founder's mentorship was rewarded with deep connections to these influential business leaders. Benioff was so grateful that, when Apple launched its App Store, he gifted Jobs the App Store trademark and URL that he had bought three years previously.
Your mentees may not achieve such high levels of success, but you’re still creating close connections with the generation that will soon dominate the world of business.
3. You can pass on your own passions
Most conversations with me inevitably come around to my passion for healthy living. If my mentee shares my enthusiasm, the result is usually another convert to my company’s product and another customer at a growing business. Don’t get me wrong, mentoring isn’t just another sales pitch. But great sales is all about building relationships.
Just as you can discover new technology or trends by listening to a young entrepreneur discuss their project, your mentee might be inspired when you share your passion with them.
4. You’ll become a better leader
Anyone who runs a business is already a mentor. The people who work for you turn to you for advice every day. But you’re often so focused on solving the problem at hand, you skip straight to the decision, forgetting to give your team the space to work things out for themselves.
Mentoring someone outside your business gives you a more dispassionate perspective. You don’t tell that person what to do. Instead you offer advice and guidance. Applying this thinking to your own business will make you a better leader and help enhance your team’s skill sets.
These four reasons are why I recommend business leaders should offer support to young business people. It doesn’t need to be an ongoing formal arrangement. I get around 30 new mentoring requests a week and pick the ones where I feel I can genuinely offer value.
However you make it work, make time for mentoring. It’s a small time commitment that can have a big impact for you and the next you.
|
313b261379b5e553ade61e8aa02a7feb | https://www.forbes.com/sites/karagoldin/2018/07/16/6-ways-to-ask-for-help-as-an-entrepreneur/ | 6 Ways To Ask For Help As An Entrepreneur | 6 Ways To Ask For Help As An Entrepreneur
Shutterstock
When Bell Labs – inventors of the transistor, the laser, and the calculator – investigated what linked its employees who held the most patents, the result was unexpected. Each of the company’s top innovators had regular lunches with a guy named Harry Nyquist, an engineer who had a gift for probing other people’s ideas with the right questions. The essential trait shared by Bell Labs’ best people was a willingness to ask for help.
Entrepreneurs can be reluctant to seek assistance. It seems like the opposite of what a self-starter should do. Investors, partners, and employees often buy into the founder more than a company’s product or service. So, entrepreneurs feel the need to project complete confidence and competence.
But you can’t be good at everything. If you want to move fast, avoid simple mistakes and fill the gaps in your knowledge base and skills, you need to put your ego aside and get support. Here are six tips for asking for help effectively.
Don’t think of help as a debt
People often worry that asking for help places them in another person’s debt, but business is built on people doing each other favors, so you should embrace the obligation.
Entrepreneurs often complain that breaking into new markets is hard because of the existing “old boy network.” Asking for help is your way into the circle or how you start your very own network.
Play ignorant
Before I started my company, I didn’t know much about the beverage industry. When people told me we’d never get the kind of shelf-life retailers needed without using preservatives, I questioned this standard wisdom because I didn’t know any better. My ignorance opened the door to a solution we still use today!
Don’t feel the need to be seen as knowledgeable. People generally don’t want to help know-it-alls, and you never know what can happen when you ask a “stupid” question.
Know what you need help with
In the past, I’ve met someone at a conference or event and only realized later that they were the right person to help with an issue. Now, I always make sure I have a mental list of specific things I need help with, that way I’m not scrambling for ideas when I meet a potential problem-solver.
Knowing exactly what you need help with also allows you to ask more precise questions. This also makes it easier for your helper to provide useful advice quickly.
Show that they’re the right person to help
Every day my inbox fills up with requests for help and advice from hopeful entrepreneurs. I try to respond to as many as I can, but if your question isn’t relevant to me and my areas of expertise, then I may just ignore it altogether.
Do your research beforehand and tell me why you think I’m the right person to help you. I’m definitely more likely to help someone who makes a compelling case for it.
Do the legwork
I love connecting people who I think could do business together or help each other out. If I’m asked for an introduction, and I think it’s the right fit, I’ll do it as long as it’s not a time-consuming task.
When you’re asking for help, make it easy for your helper. If you want an introduction, add a description of your business to your email. When I can simply forward all that information, I’m much more likely to make an intro.
Be grateful
This should go without saying. When someone helps you, thank them. If one of their suggestions went down well in a meeting, tell them. It makes your helper feel good, and they will be more likely to assist you in the future.
I also like to send people who’ve helped me a case of hint water. Not only does it show my gratitude, but it’s also an opportunity to promote my brand and product.
“You can’t be good at everything. You need help from people you respect and trust.” - Alli Webb – Founder, Drybar
Of all the benefits of asking for assistance, my favorite reason to reach out is that it forms a bond between you and your helper. Every time you get help, you create a new advocate for your brand.
|
157c1192e4126a271c92f2d5b834f799 | https://www.forbes.com/sites/karagoldin/2018/11/06/3-reasons-why-the-benefit-of-your-product-isnt-always-its-selling-point/ | 3 Reasons Why The Benefit Of Your Product Isn't Always Its Selling Point | 3 Reasons Why The Benefit Of Your Product Isn't Always Its Selling Point
A basic principle of marketing is to sell the benefits of your product or service, not what it does. As economist Theodore Levitt put it, “People don’t want to buy a quarter-inch drill, they want a quarter-inch hole.”
The best marketing doesn’t focus on how a product or service works, it shows a potential customer how it will help them. But sometimes the main benefit of a product or service isn’t the most successful selling point.
“Good for you” is not always enough
When my company promotes our flavored waters, we talk about the great taste. For our sunscreen, we highlight the fruit scents.
But our mission is to make healthier choices more enjoyable and this is what we’re really selling. The delightful experiences of drinking our water and applying our sunscreen are cover for the serious benefits of staying hydrated and protecting your skin from the sun.
If your product or service has benefits that may not be immediately appealing to customers, here are three ways to find a more successful approach.
1. Empower your audience
Outdoor Voices creates stylish, comfortable clothes that provide necessary support for sports and other leisure activities. But in the competitive sportswear market, these benefits are hardly unique. So, Outdoor Voices adopted a broader message to appeal to an audience that has a more casual approach to fitness than the high-performance athletes targeted by other brands.
The company asks users to post pictures of themselves #DoingThings. This can mean an exercise class or running a marathon, as well as walking the dog or going to the store, which, as CEO Tyler Haney explains, “is better than not doing things”. By celebrating even their smallest achievements, Outdoor Voices empowers its customers to continue living an active, healthier life.
Can you scale your brand benefits into a more general message that will empower your target audience?
2. Enrich their lives
When Airbnb started in 2008, travelers loved how it offered a cheap alternative to expensive hotels. That benefit is still there in the company’s name, which is a shortened version of the original, AirBed & Breakfast.
But in 2014, Airbnb launched a major rebrand, with a new tagline Belong Anywhere that focused on how its users experience new places like a local. Instead of marketing itself as a source of cheap accommodation, Airbnb is now an authentic way to see the world.
How does your product or service enrich the lives of your customers and give them experiences they wouldn’t otherwise have?
3. Make it fun
Lego is amazing. It helps children develop fine motor skills, promotes problem-solving and encourages teamwork. The little plastic bricks can even help with emotional healing.
The benefits of playing with Lego are well documented but rarely by the brand itself. Lego marketing is aimed at kids, who don’t care about developing skills, they just want cool toys. So Lego ads focus on fun.
Is there a fun side to your product or service that will help you deliver its benefits in an entertaining package?
Hiding greens in the ice-cream
If you’re selling a health or otherwise beneficial product or service, you sometimes have to treat adults like children. Most kids don’t like to eat their greens, so parents chop and pulse vegetables and hide them in their child’s favorite food.
You too can disguise your brand’s benefits in messages consumers want to hear. By selling them the empowerment, enrichment, and fun they want, they’ll also get what they need.
|
0f6a0c4eabc5652b694734975406fa01 | https://www.forbes.com/sites/karanmehrishi/2018/10/30/worlds-fastest-growing-major-economy-has-a-cash-problem/ | World's Fastest Growing Major Economy Has A Cash Problem | World's Fastest Growing Major Economy Has A Cash Problem
Despite being the world's fastest growing major economy, India is having trouble mustering enough ammunition to sustain its momentum. The economy, it seems, is running out of cash, and systemic liquidity has fallen much faster than anticipated.
Given the situation, India's central bank, the Reserve Bank of India, popularly known as the RBI, has been infusing money into the system through open market operations (OMO). The instrument allows the RBI to purchase dated Government securities in the open market and infuse liquidity in return. This liquidity enters the banking system since commercial banks are the primary holders of such securities in India.
The RBI is expected to use all its firepower to stabilize the system and has infused $10.25 billion so far, with a further $5.5 billion coming in November. More such injections are expected over the next six months to restore some kind of systemic calm.
The Warning Signs
The weighted average call money rate (WACR), which is the average of all call money rates governing India's short-term liquidity demand, has been very volatile lately. Often termed India's version of the London Interbank Offered Rate, the WACR is a proxy for assessing overnight volumes among banks. Worryingly, the rate has been consistently trending near the benchmark rate, and understandably banks have been scrambling for cash. One possible explanation for this is the RBI's exit from its long-held neutral-to-accommodative liquidity stance, which equates to general accommodation.
As systemic liquidity is reoriented towards the so-called ‘calibrated tightening’ stance, the central bank has been absorbing money it considered inflationary. This belief has its roots in the lingering effects of the 2016 demonetization exercise, when the Indian Government deemed high-denomination currency notes illegitimate overnight. As people rushed to exchange old notes for new ones, banks were oversupplied with deposits – taking systemic liquidity to record levels. The event resulted in the WACR/benchmark rate differential peaking at 32 bps in April 2017; astounding, because this number rarely goes beyond 15.
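For readers unfamiliar with basis points (bps), one basis point is one-hundredth of a percentage point; below is a minimal illustrative sketch of how such a spread is computed (the rates used are made up for illustration, not actual RBI figures):

```python
def spread_bps(call_rate_pct: float, benchmark_pct: float) -> float:
    """Spread between the call money rate and the policy rate, in basis points."""
    return (call_rate_pct - benchmark_pct) * 100  # 1 percentage point = 100 bps

# Hypothetical rates for illustration only.
print(f"{spread_bps(6.57, 6.25):.0f} bps")  # 32 bps
```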
What worried the central bank was the fact that cash-rich commercial banks increased their credit offtake, or lending operations – aiding risky businesses. This was causing inflationary pressures as well, because heightened personal lending was adding to core inflation. Since then, the RBI has been pulling money out by exercising two options: first via Reverse Repo operations under the Liquidity Adjustment Facility, and second through the aforementioned OMOs.
Turmoil in the capital markets and a threatening asset liability mismatch
As interest rates started to rise in anticipation of higher inflation and rapidly normalizing interest rates in the United States, capital markets suddenly became less attractive. As suggested by falling equity valuations and rising bond yields, investors were in panic mode. Since the start of the 2019 financial year, foreign portfolio investors have pulled out nearly $13 billion from the Indian capital markets. The US-India benchmark yield differential is also now below its traditional buffer of 500 bps, thanks to a quicker Fed rate hike cycle and the RBI's reluctance to raise rates enough on account of domestic stability. Capital market debt issuances have as a result declined by 30% since the start of this financial year, and duration risk is making issuers nervous about the future. With that said, corporations are turning towards the old-fashioned way of financing – banks.
However, RBI absorptions as part of its normalization exercise reduced cash in the system, and the heightened demand for credit is now making life difficult for businesspeople. Worsening the situation, growth in deposits is slower than growth in credit in the Indian banking system, creating an adverse credit-to-deposit ratio. Commercial banks are now anxious about a potential ‘asset liability mismatch’ and becoming extra cautious; this is further starving the system of cash. Such has been the impact of this crunch that the WACR reached parity with the benchmark rate in September this year before the spread eventually turned negative. The insecurities in overnight lending among banks were therefore apparent and started to impact even short-term money market instrument valuations.
Old Wine; New Bottle
Historically, India has been an illiquid market. While such a situation prevails in most emerging markets, India's case is special primarily because of its domestic demand orientation. Considering its current rate of economic expansion, the Indian economy cannot sustain its momentum for long unless the liquidity situation is sorted out. The impending demand for cash as the holiday season begins will worsen the situation. In a final attempt, the RBI is selling parts of its massive foreign currency reserves, not only to stabilize the rupee but also to infuse the exchanged rupees back into the system as cash. In doing so, the central bank has already sold $32 billion of these reserves and might consider selling more if instability continues. As things stand today, the liquidity situation is grim and requires serious rethinking. The central bank is expected to take steps to calm the system – the costs may however be underestimated at the moment.
|
4e1e03460a1a26ecb735738de1bc9061 | https://www.forbes.com/sites/karanmehrishi/2018/10/31/asias-most-dominant-economy-is-slowing-down-the-entire-continent/ | Asia's Most Dominant Economy Is Slowing Down The Entire Continent | Asia's Most Dominant Economy Is Slowing Down The Entire Continent
Chinese flags at the finish line of the fourth stage, 152.2km from Nanning to Nongla Scenic Area, of the 2nd Cycling Tour de Guangxi 2018. (Photo by Artur Widak/NurPhoto via Getty Images)
Surprising as it may be, China is pulling down the rest of Asia. A slowing Chinese economy is impacting the overall trade numbers of the IMF's emerging economies & developing Asia category, which comprises thirty Asian nations such as Malaysia, Indonesia, Thailand, the Philippines and India. The bigger shocker, however, is the fact that Asian trade would have grown faster without China.
To put things into perspective, if the economic powerhouse were excluded from the trade calculations, Emerging Asian exports and imports would have recorded 13% and 16% growth in 2017. Including China, these figures go down to 11.4% and 14.7%, respectively.
This is indeed abnormal because, not long ago, China single-handedly made Asian numbers look outstanding. While much of Asia is still import-dependent and runs a significant trade deficit, the region has not recorded a negative current account balance since 2002. Without China, the picture changes drastically, as the current account balance remains negative by a huge margin over a 13-year period. However, things are changing now, as Asia's burgeoning trade deficit is not being effectively counterbalanced by China anymore. At $181.4 billion, the country's trade balance is expected to be the lowest since 2011 by this year's end; this is considering the fact that China's share in Emerging Asian trade has now declined to 56.7%, the lowest in six years.
Has China Peaked Out?
The year 2015 can be considered the peak of Chinese trade. The year saw China’s current account surplus breach the $350 billion threshold, recording a growth of over 60% compared to the previous year. During this year, the entire Emerging Asian region recorded a surplus of $329 billion as well. Conversely, when the Chinese numbers are not considered, this surplus turned into a deficit of nearly $28 billion for Emerging Asia.
Post-2015, considering the years 2016 and 2017, China's surplus declined to $234 billion and is expected to hit a sub-$200 billion level this year. This results in further deterioration in Emerging Asia's overall terms of trade, as the deficit will widen to nearly $129 billion this year.
While considering the economic expansion of the region, stark realities cannot be ignored either. The phenomenal growth of China over the years is an important component of Asia's trade prowess. Between 2012 and 2014, Emerging Asia's growth in current prices averaged 9.2% with China included, but only 3.8% without it. What this means is that China added 5.4 percentage points to the Asian growth story. From then on, however, there has been a divergence of sorts, and Emerging Asia seems to be doing better than it did along with China. Comparing the same numbers between 2015 and 2018, there is a reversal: Emerging Asia grew by just 6.1% on average with China included, while without the Chinese numbers this growth increases to 7.3% on average. China is therefore taking away more than a percentage point from Asia's growth print.
Possible Reasons and Implications
One might argue that Chinese trade numbers are impacted by increased competition from other countries as well as the reorientation of its own economy from external to domestic consumption. In hindsight though, given the robust scale that the Chinese economy has acquired over the years, there is no real competition. Even global growth has been very strong recently, and the accompanying consumption demand should ideally have pulled up Chinese trade numbers consistently.
There is also a possibility that other Asian countries have in a way decoupled from the Chinese value chain. After all, the legendary electronics highway that runs from the semiconductor factories of Taiwan to the high-technology avenues of Seoul and Tokyo passes through Shenzhen. Rerouting the chain would certainly disrupt set norms, considering that almost 65% of Asia's trade happens within its confines and China has been at the center stage of it all.
The divergence between China and the rest of Asia is nonetheless a reality now and gives a fair bit of nuance regarding the things to come. If China is not driving Emerging Asia's growth, then who is? Contributing 56.7% to the overall trade quantum, China remains the doyen of Asia, but its falling share has to be taken into consideration as well. Bearing in mind this reality, repercussions pertaining to the US-China trade war, China's reducing trade surplus and global commodity volatility suddenly become even more pronounced for Asia now. Even though it is clear that China will continue playing a leading role in forthcoming decades, there are visible signs of realignment in Asia, and this cannot be ignored.
|
c3cf945b9032a55adcbdd2f084f87104 | https://www.forbes.com/sites/karastiles/2018/02/09/how-this-neighborhood-gift-shop-keeps-the-lights-on-for-local-creatives/ | How This Neighborhood Gift Shop 'Keeps The Lights On' For Local Creatives | How This Neighborhood Gift Shop 'Keeps The Lights On' For Local Creatives
Sofi Madison’s inspiration for opening a neighborhood gift shop in Boston grew out of her perpetual search for one-of-a-kind products created by fellow community members. Specialty goods—hand-poured candles, locally-sourced honey or ceramics crafted by local artists, for instance—weren’t always easy to get her hands on and never seemed to be in one spot.
“They were rare, and it was quality over quantity,” explained Madison. “So as that type of consumer, I wanted to create that kind of space to highlight these small brands.”
By curating small-batch, local goods for large companies, Olives & Grace hopes to give independent makers an opportunity to “get on the desks of very successful companies.” Olives & Grace
In 2012, Madison opened Olives & Grace in Boston’s South End neighborhood to feature the small-batch gifts she often hunted for.
To help grow the retail footprint of creators who struggle to compete with larger brands, she also added a corporate service: curating locally-sourced gift boxes for Fortune 500 companies, hospitals, hotels, travel agencies and even movie sets to promote small businesses where they aren’t typically represented. She notched a Boston Magazine "Best of Boston" honor in 2014 and currently estimates that corporate sales have doubled annually.
Here, some of the creators featured by Olives & Grace share why shops like Madison’s are often the lifeline of the small business community:
ROBIN MARKLE, FLAMING IDOLS:
Her creations: Candles featuring figures of the LGBTQ community—like RuPaul and Ellen Page
The small business struggle: “As an independent artist it's impossible to reach potential consumers to the same degree that a large corporation mass-producing a similar item can. Even if we are offering something unique or one-of-a-kind, people who would like our product but aren't in the habit of shopping on Etsy or going to craft fairs may never find us.”
The Olives & Grace boost: “I have been able to make my business my full-time job because of selling at shops like O&G. Because they are curating a space with a variety of unique and handmade items, they're able to draw much more traffic than I do on my own.”
TAYLOR HAMILTON, TAY HAM:
Her creations: Playfully-illustrated greeting cards
Entrepreneurial inspiration: “It's very empowering to watch another woman-owned business succeed. The mutual support is important—it encourages camaraderie in an industry that has the potential to be competitive.”
The Olives & Grace boost: “They are the bread and butter. Big chain stores come and go; you can't base your business around them. Loyalty is hard to come by in this industry with so many new brands coming into the space. Stores like O&G keep the lights on.”
JOELLE MCNAMARA, BADALA:
Badala's products: Accessories and housewares crafted by women artisans across the world
Global impact: “When I traveled to Kenya, I was approached by a number of women looking for jobs so they could break the cycle of poverty in their family and communities. We have since been able to give employment—either full time, part time, or contractual—to over 250 women, artisans and sex trafficking survivors in six different countries.”
The Olives & Grace boost: “Stores like O&G are the backbone of our business. Not only are these partnerships where the majority of our sales come from, but they have also led to a number of our more high-profile collaborations like those with Toms and Target.”
|
3978ea459b57144bff758fc9a6bd057c | https://www.forbes.com/sites/kareanderson/2012/08/13/15-ways-to-accomplish-more-with-the-right-kind-of-humor/ | 15 Ways to Accomplish More With the Right Kind of Humor | 15 Ways to Accomplish More With the Right Kind of Humor
Conan O'Brien quipped that "Some people are saying that the reason Michael Phelps wasn't doing so well for a while was because he let himself get too out of shape. I just have to say that I have been watching the Olympics, and if that guy is out of shape, I have been dead for five years.”
Self-deprecating humor can pull others closer, even in unexpected kinds of work. Whether you are seeking support, selling, forging a partnership or even considering marriage, it can be a key tool to knowing if and how to proceed. The right kind of humor is the best lubricant to smooth your way in life, pulling in opportunities and friendship, as these 15 reasons and ways illustrate:
1. Discover how open they are to others’ ideas
After watching “dog whisperer” Cesar Millan, Paula Poundstone learned that, “when a dog is sniffing you, he’s gathering information.” She concluded that, “My dog is collecting an extensive dossier on me.”
How we evoke and respond to humor is one of the strongest indicators of how flexible, open and fun we will be with others. Using humor, you see how they view themselves and their world. That’s helpful information if you are thinking of collaborating with someone – or even considering whether to get to know them better. “A well-developed sense of humor is the pole that adds balance to your step as you walk the tightrope of life,” wrote William Ward.
2. Pull people closer
Evoke unifying humor. When your humor highlights what we have in common, you and I feel more like “us.” Joking with co-workers builds bonds. Look out for examples of unifying humor that spur an “us” feeling and see how you might craft some for your situation. Here are three I’ve discovered:
• After the mad cow scare, a subscriber to my newsletter mailed me this bumper sticker: “Montana – At least our cows are sane!”
• Commenting on the human condition: “God pulled an all-nighter on the sixth day.”
• Emblazoned on the T-shirt of a rotund man coming out of a San Diego beach shop: “The problem with the gene pool is that there is no lifeguard.”
3. When you are the honcho, hero or current center of attention, let them feel more equal
Self-deprecating humor is disarming and makes others feel more included, as hockey player Chris Pronger managed to do when talking with reporters. That’s especially helpful when others may have reason to feel in awe of you or ignored.
Groucho Marx dryly groused, “People say I don't take criticism very well, but I say what the hell do they know?”
Steve Martin observed, when sharing this photo, “When comedians get together, there are thoughtful moments, too.”
4. Don’t use cutting or belittling humor
Most of us rationalize our use of cutting humor as harmless fun. After all, it is usually a matter of perspective, that is, who is getting skewered. As Mel Brooks concluded, “Tragedy is when I cut my finger. Comedy is when you fall down an open manhole cover and die.” Unifying humor is healing and enables us to see the larger picture where hope is possible. Charlie Chaplin once said, “Life is a tragedy when seen in close-up but a comedy in long shot.”
5. Avoid humorless people
Frank Tyger suggests that, “The ultimate test of whether you possess a sense of humor is your reaction when someone tells you you don’t.” How do we get that way? “By starving emotions we become humorless, rigid and stereotyped; by repressing them we become literal, reformatory and holier-than-thou; encouraged, they perfume life; discouraged, they poison it,” warns neurologist Joseph Collins.
Without humor it is hard to step back to see a situation in a brighter way or come to terms with it – or to hope. “There is a sorrow in the seriousness of humorous people. They do not easily find among ideas or purposes a place of rest. The courage in their eyes is wistful.” If they don’t even recognize sarcasm, they may lack higher cognitive skills.
6. Get their attention
“When they’re laughing, they’re listening,” said Adrian Gostick, co-author of The Levity Effect. “A tourist is backpacking through the highlands of Scotland and he stops at a pub…” says Toy Story filmmaker Andrew Stanton in a thick Scottish accent, thus beginning his TED talk on storytelling without any preamble – but with a story, one that ends in a laughter-evoking punch line.
Not only did he grab the audience’s attention from the first sentence, he got them to care about what he would say next.
He’d also crafted that funny story to foreshadow all the clues to storytelling that he subsequently describes in his talk. See how he alternates humor and other emotions, from awe to surprise, throughout the talk, to keep us involved.
7. Help them become more relaxed, present and connected in the situation
Like scent, humor has extremely offensive or captivating effects on us, depending on the kind. Injecting unifying humor into a situation is probably the swiftest way to get us in relaxation mode and begin to bond. We become less fearful or tense. That’s when we are most likely to like each other, bring out our better sides – and be productive and creative together.
“If you can get someone to laugh with you, they will be more willing to identify with you, listen to you. It parts the waters,” said Robert Orben. As we lighten up we become more playful – which can make us productive if we need to be – and happier.
8. Choose the character role you really want to play
Using humor, you can show others how you choose to see a situation, as weak or resilient, for example. Instead of giving in to depression, a Multiple Sclerosis patient remarked, "One good thing about MS is I don’t have to worry about stirring my coffee anymore.”
With his hilarious monologue, The The Impotence of Proofreading, Taylor Mali proved that English teachers can be wildly entertaining and even turn into traveling poets: “Proofreading your peppers is a matter of the the utmost impotence…I need to be challenged, challenged menstrually…I need a college that could give me intellectual simulation. Not just anal community colleague. I really felt that I could get into an ivory legal college…. Gone would be my dream of going to Harvard, Jail or Prison …There is no prostitute for careful proofreading.”
Humor makes most any topic more interesting.
9. Lift the mood
When characterizing his political commentary, Stephen Colbert noted “You can’t laugh and be afraid at the same time.” “Humor does not rescue us from unhappiness,” wrote Mason Cooley (or from arguments I would add), “but enables us to move back from it a little.”
Humor also makes us think, and that interrupts negative emotions, writes Eric Barker, summarizing a study.
It can even raise our tolerance to pain. When researchers showed people funny videos before asking them to keep a hand in very cold water for as long as they could, participants could keep their hands in longer than those who watched a neutral or negative video.
10. Make your potentially controversial idea or view easier to hear and discuss
Like “a rubber sword, humor allows you to make a point without drawing blood,” wrote Mary Hirsch. Yet that only works when the humor is self-deprecating or otherwise unifying.
11. Diffuse tension
Wernher von Braun recalled that as astronaut John Glenn was strapped into his seat before take-off, he dryly remarked, “Oh my god, I’m sitting on a pile of stuff created by the lowest cost bidder.” Every relationship has bumpy moments. Humor can be quicker than praise to smooth them out. Humorless people make the bumps bigger. “A person without a sense of humor is like a wagon without springs,” wrote Henry Ward Beecher, “jolted by every pebble in the road.”
12. Spark romance
Women say they want someone who makes them laugh. Men want someone whom they can make laugh.
13. Spur higher performance
Humor is often the seed for fun. If people are having fun together, they’re going to work harder. They are more likely to get through the rough spots of disagreement and go out of their way to support each other. Some studies show men are viewed as funnier at work, yet Humorworks’ John Morreall points out that “traditional men’s humor tends to mock and humiliate and uses jokes with punchlines, while traditional women’s humor expresses support and solidarity and uses true stories without a kicker.”
Yet Bloomberg’s Vanessa Wong points out that “Self-deprecating humor—traditionally women’s humor—is actually best at work, he says, as it’s not threatening, and no one actually thinks less of a person for it.”
And, “typically, women are not connected to their funny selves” according to Marcia Reynolds, whose improvisational acting teacher told her, “Don’t try to be funny. Don’t look for funny stories. Just tell your true stories.” “My funny self would show up.” Reynolds summarized comedian Steve Allen’s advice: with “a regular diet of watching, reading, listening to and hanging out with funny people, you will inevitably become a bit funnier.
Eventually, you become a magnet for funniness. ‘Humor will find you,’ Allen said. ‘It’s not that funnier things will happen to you than others. You’ll develop a sensitivity to the environment and circumstances that enables you to see the humor that a more serious person will miss.’”
14. Get promoted faster
“Executives with a sense of humor climb the corporate ladder more quickly and earn more money than their counterparts,” reported The Harvard Business Review.
15. Be a great audience
Since it is contagious, your laughter helps spread the humor, so don’t hold it back. Unfortunately I don’t know how to be funny nor have a humor-evoking face like Ricky Gervais – yet I am one of the first to laugh when others are. We “first responders” to friction can start what researchers call a laughter cascade to spur the emotional contagion that gets others laughing. Once I broke out laughing in a packed movie theatre only to hear someone yell out, to my mortification, “Kare – glad to hear you’re enjoying it.”
What actually makes us laugh? Scientists think they have found three clues: superiority, incongruity and the pattern of three.
See also:
Are Funny People More Successful In Business? by Jenna Goudreau
8 Tips for Using Workplace Humor by Mike Myatt
Five (Serious) Tips for Using Humor to Connect, Engage and Influence by Mark Ivey
|
c95eb2eeec98355b959429353d2dcf28 | https://www.forbes.com/sites/kareanderson/2013/04/01/surprising-secret-to-a-satisfying-and-successful-life-with-others/ | Surprising Secret to a Satisfying and Successful Life With Others | Surprising Secret to a Satisfying and Successful Life With Others
Tension was inevitable. The stakes were high. We all wanted to be accepted into this coveted fellowship program, yet only 20% of us would be. After a series of intensive interviews, much depended on how we played this game.
The rules were odd. Eight strangers, all high-achievers, were seated at a round table, with five flat cardboard pieces in various shapes in front of each of us. Ten other just-formed teams of applicants sat at different tables in the same large meeting room. Observers with notepads were standing right behind us around each table.
Become More Beneficial With Savvy Prosocial Support
When the bell rang we were to give a piece to someone else in the group who would then be expected to give some piece back. We could not ask for a certain piece from someone else. All of us could be giving and receiving at the same time. The goal of each team member was to create a complete triangle shape out of the pieces they received. The winning team would be the first one in which every member had a completed triangle of pieces in front of them. Soon after the bell rang one team member was grimly grinning at me as she took one of my pieces and gave another back. While not violating the rules as she had, most of us were also looking at everyone’s pieces to find the ones we needed.
Yet the man on my left was on a different path. He carefully looked around at each teammate’s set of pieces and then at his own. He would then give one of his pieces to someone, and put the one he received to one side. I was slow to understand his strategy but when I did I felt a surge of warmth towards him and imitated his approach. You see, instead of figuring out how fast he could complete his triangle by pulling the right pieces from others, he was helping them complete theirs by seeing which of his pieces would help each of them. Inevitably the leftovers in front of him would eventually form a triangle too.
Give and Become Sought-After
Our “team” won because of him, and I am certain I got accepted to the program on his coattails of connective leadership. Meeting him was a life-changer for me. That was years ago and Jim remains a hero and a friend of mine to this day. Organizational psychologist and Give and Take author Adam M. Grant would call him a successful giver and he has certainly proven to be.
Jim is widely admired and sought-after in many realms of work and life. From his studies, Grant would say that Jim, with practice in strategically helping others, has strengthened that selfless “giving muscle” we all have, thus also boosting his willpower and focus, becoming more productive in the use of his time and energy. (See the chocolate cookie/handgrip squeeze test described in Grant’s book).
Not all givers are successful. In fact some are the least productive, most unhappy people, according to Grant’s research. Most of us learn that lesson the hard way, and keep re-learning it.
Become the Kind Of Giver Who Gets the Most Success and Satisfaction
The priceless core lesson of Grant’s extraordinary book (my favorite on behavior since Quiet) is that we can become successful and lead a satisfying life with others if we learn the right way to give. This talented, widely-liked and introverted social scientist divides the world into givers, takers and matchers:
• The majority of us are givers, according to Grant, yet “are overrepresented at both ends of the spectrum of success.”
• “Takers seek to come out ahead in every exchange; they manage up and are defensive about their turf.”
• Matchers expect some kind of quid pro quo, “with a master chit list in mind.”
What makes some givers successful and sought-after is that they have both a deep, evident caring for others, yet they also attend to their own self-interest. They are not “doormats.” Grant cites three relevant behaviors for being productive, happy givers:
• Be judicious about giving to takers
• Give in ways that reinforce and support your most vital relationships (you can’t serve everyone extremely well and care for yourself)
• Consolidate your giving into chunks of time with an individual or group so your support has a more substantial, meaningful impact
From my experience a fourth point is also vital to delivering the most helpful value for others, and yourself:
Recognize the Need to Feel Needed and Connected
In art as in life it is often a matter of where you draw the line, the saying goes, and to succeed at work you need to draw a line to create healthy boundaries. Sacrificing your precious time with closest friends, colleagues and family members because you are devoting it to too many others may not be a judicious choice for the self-care that Grant advocates.
As Susan Dominus observed in her New York Times article, Grant has a traditional marriage where his wife, “who has a degree in psychiatric nursing, does not work outside the home, devoting her time to the care of their two young daughters and their home,” while he “works at least one full day on the weekend, as well as six evenings a week, often well past 11.”
As an alternative model of healthy giving that reflects Grant’s definition of also taking care of oneself and “chunking” the helpful time with others, serial investor, Brad Feld has often written about how he gives and sets boundaries, becoming a role model in productivity. Feld helps many in the locally-based TechStars start-up communities, the start-ups in which he and his business partners invest, and boards on which he sits. He also scales his knowledge in his blog and co-authored books, and by providing open “office hours” to help most anyone.
In his self-caring approach to giving, he resolutely and publicly sets aside specific vacation and other times with his wife, and for visiting with his parents and closest friends – and for reading and running. A core theme running through Brad’s approach is connective, collective giving. That often means apt teams helping others. This models behavior for those who receive to emulate, spurring them to enjoy the camaraderie of collectively giving, using their complementary talents with and for others and each other.
Help Others to Become More Helpful
We can feel that heady, immediate hedonic high each time we help someone who seeks our advice or an introduction, yet there may be surer ways to both support others and ourselves while also spurring them to emulate the giving behavior they receive. Those who keep getting the help they ask for, without any explicit expectation of reciprocity, may become habituated to asking for help, and thus inadvertently be turned into takers. Here are three models that I have experienced that spur a natural balance of give and take:
Give for the Greater Good of Our Team
The triangles game gave visceral proof of the winning power of smartly giving to “our” team for the greater good of all the team. Whenever a team or organizational culture explicitly recognizes and rewards individual giving to the group, individuals seem to become more frequent and adept givers:
• Gore and Saddleback Church are frequently cited as examples of the connective, giving power of small, strong, inter-connected teams or groups within a larger organization.
• The specific rules of engagement of how Quantified Self members share self-monitoring experiments in their Meet-ups has enabled that self-organized group to scale global participation and innovation so rapidly and well that several universities and companies have sought them out as research partners.
• Mutual support communities thrive when they are centered around a strongly-felt, shared interest. Consider the giving behaviors, for example, in 12-step programs or groups for cancer survivors or avid cyclists. The popularity of these groups and the loyalty members feel to each other and their group, illustrate how we will generously give apt advice and help, not seeking a quid pro quo, when the shared mission, giving and camaraderie is evident.
• Other kinds of groups with explicit norms and rules to reinforce mutuality of benefits tend to spur greater sharing. They include MasterMind groups of peers, or groups led by an expert, such as Vistage groups.
Give Before Asking (We All have Something to Give)
My friend Paul Geffner has started several successful businesses with friends, from Captain Video (the first video rental store in S.F.) to Great Escape From New York Pizza. He is renowned for his generosity in giving advice to others who want to start their own business, yet one early lesson has stood him in good stead when choosing those to help most. His first gig was selling finely made leather journals on a sidewalk near The Embarcadero, one of a string of street vendors on the same block. Some passersby would pause, pick up a journal, hold it and begin asking question after question about how to successfully start a street vendor business, sometimes even interfering with those who were attempting to buy a journal. Others would approach, look carefully at the selection, buy at least one and then ask how he got started, and what it would take. Which kind of person would most engender in you the desire to be helpful?
In attempting to emulate the kinds of giving I cite above, I may put myself somewhere in between the “giver” and “matcher” categories, yet it appears to yield benefits. I have more time to recognize, maintain and savor my self-interest, as Grant advises, and to:
• Spend time with my dearest friends and family
• Hone my top talent, often around a sweet spot of mutual interest in a team of people with disparate, complementary talents where we can spur greater mutual learning and greater accomplishment together than we could on our own. For me, those experiences of shared giving and doing together create a more sustainable satisfaction, rather than the hedonic high moments which we -- and those we help -- can take for granted over time.
Use Your Best Talents With Others Who Are Too
Some of the most successful and satisfying times in my life have involved working with people from very different backgrounds, who saw the same situation from a different perspective and who could do things well that flummoxed me. Here are two examples:
1. After giving a speech at a corporate conference, three self-described “analytics geeks” approached me about crafting a description of the forecasting tool they invented in a way that would grab the attention of their company CMO. After hearing their avid attempt to describe it to me I was intrigued and readily agreed to help. I suggested that we aim, instead, at convincing the CFO of its value to the company, as her support might make their tool even more credible to the C-suite. To appeal to the CFO, I enlisted the support of a generous friend who’s been a CFO at three firms. We all met for an afternoon and on into an evening (a chunk of time), as it took a while for us, from our diverse professions, to actually understand each other. Yet time flowed by quickly as it was exhilarating to co-create the message for such an interesting project. We were all learning from each other. Not only did the three analytics pros prove successful in securing the support from the CFO, they have been helpful to me in my subsequent work with two start-ups. Giving in these ways reinforces the possibility that giving boomerangs back for all givers. It also boosts the chances that serendipitous support sometimes appears when you most need it – and that your giving is appreciated.
2. Because I frequently speak at conferences, a friend of mine asked me to attend a Toastmasters Club at San Quentin Prison, hoping I might have insights for the prisoners. It was a humbling experience. All of the presenters were more articulate, polished and passionate about their topic than me. And they were humble, helpful and respectful of each other. When the club leader asked me to offer them some helpful tips I acknowledged my awe of their skills and suggested that I could learn from them, since they were already better speakers than me. What if I provided role-playing coaching for them for the job interviews or other outside-of-prison situations they will face one day? I knew, in working with individuals whose life experience was vastly different than mine, that I could strengthen my core interest, connective communication skills, and I did. Two-way, simultaneous giving is extremely satisfying. As the Sufi saying goes, "God makes only co-equal partners."
Feeling the connective power and support in giving in these two ways may also enable us to avoid the sometimes tragic consequences of “merging identity and work” described by CEO coach Jerry Colonna.
Grant Packs Many Actionable Insights Into Give and Take
His many thoughtful insights on productive giving can help us hone our approach. This is one of those books you will find yourself underlining every other sentence before giving up and recognizing it will become a handy guide to which you will return and re-read as situations crop up. All nine chapters were sequentially helpful, including these topics: How Givers, Takers, and Matchers Build Networks; How to be Modest and Influence People; Why Some Givers Burn Out But Others Are on Fire; and Overcoming the Doormat Effect. As Grant noted on his Facebook page (generously citing others, of course), “Ultimately, I focused on success because there has been surprisingly little written about how helpfulness influences productivity, work quality, promotions, and other objective measures of achievement and performance in organizations. By contrast, there are quite a few excellent books that deal with giving and happiness (see The How of Happiness by Sonja Lyubomirsky, Happy Money by Elizabeth Dunn and Michael Norton, The Happiness Project by Gretchen Rubin, and Why Good Things Happen to Good People by Stephen Post and colleagues).”
My Core Truths About Giving
1. Giving That Scales Serendipitous Opportunities
If you give enough other people the helpful support they need when they most need it, you often get helpful support when you most need it, sometimes even before you know you need it, and sometimes from those you didn’t know could provide it.
2. Giving That Takes Away Energy
• Some individuals give and give and give to you to fulfill their need to be known as caring people. They are not grounded in their giving so they can seem like unintended takers.
• One of the most uncomfortable situations is to have unhelpful help heaped on you by someone who will grow increasingly resentful that you aren’t returning the favor in equal measure. They are matchers disguised as givers.
3. Giving That Supports Our Best Sides
Two of the most satisfying ways to strengthen your core talent, using it more frequently with others, are to:
1. Cultivate mutual-learning, supportive friendships with others of the same or similar talents
2. Collaborate with those who have complementary talents that dovetail with yours, working on projects that reflect sweet spots of strong shared interest.
Find Many Magnificent Givers and Researchers in Give and Take
We learn from role models, of course, and Grant embodies the core message of his book by shining a generous spotlight on many amazing individuals in his book including the two Adam Rifkins, Frank Flynn, George Meyer, Donald MacKinnon, Francesca Gino, Reggie Love, C. J. Skender, Dov Eden, Margaret Clark, David Hornik, Daniel Coyle, Angela Duckworth, Stu Inman, Barry Staw, Dave Walton, Bill Grumbles, James Pennebaker, Jason Geller, Nick Rackham, Alison Fragale, Katie Liljenquist, Larry Walker, Christina Maslach, Howard Heevner, Olga Klimecki, Tania Singer, Conrey Callahan, Vicki Helgeson, Leslie Perlow, Netta Weinstein, Richard Ryan, and Adam Galinsky.
|
c25edb6df5ab8f40bfc55a765558c204 | https://www.forbes.com/sites/kareanderson/2013/08/17/double-up-on-happiness-and-meaning-with-others/ | Double Up On Happiness And Meaning With Others | Double Up On Happiness And Meaning With Others
In the longest study of what led men to live happily and successfully to a ripe old age, guess what trait they were most likely to have in common? The lives of 268 men who entered Harvard College in 1937 were tracked for seventy-two years in The Harvard Study of Adult Development. Those who made it into middle and old age as “happy” and “healthy” shared seven traits: mental adaptability to changes in life, advanced education, stable marriage, not smoking, not abusing alcohol, getting some exercise, and maintaining a healthy weight.
Yet one factor was more important than any of these: Attention to relationships.
When the primary investigator on this study for over 40 years, psychiatrist George Vaillant, was asked, “What have you learned?” he quickly answered, “that the only thing that really matters in life is your relationships with other people.” It was the social life of these men, he said, “not intellectual brilliance or parental social class” that led to their living successfully to a ripe old age.
The creator of Wharton’s popular “Success Course,” G. Richard Shell offers an approach to finding success that can lead to happiness, with others, in his new book, Springboard. Beyond cultivating close relationships, to feel happy, Shell cites the ineffable feeling that one is doing the right thing, described in two ways:
1. Eudaimonia is what “Aristotle called the spirit of goodness or the good that we seek for its own sake and not for the purpose of achieving any other good,” according to Shell.
2. Simcha is a Hebrew word with many interpretations, Shell notes, “from simple joy and satisfaction, to the feeling of spiritual exultation” or what Rabbi Akiva Tatz calls “The experience of the soul that comes when you are doing what you should be doing.”
Double Up on Happiness And Meaning
What if we could behave in ways that enabled us to combine both the cultivation of close relationships and eudaimonia/simcha experiences? Here are three possible ways – and I’d love to hear your ideas.
1. Use Best Talents With Others On What Matters Most
Forge activity-based relationships with individuals who are radically different than you in background, temperament, beliefs, expertise or other ways, yet share a strong sweet spot of mutual interest. Working together on something mutually meaningful increases the chances that you’ll accomplish something remarkable together that you could not do on your own. Eudaimonia may happen more often.
Even better, as Shell notes, “When it comes to gaining wisdom, negative emotions have a place of honor right next to positive ones.” “The price of enlightenment seems to be suffering, not smiling.” Since those radically different than you inevitably won’t act right, you get a priceless opportunity to see your biggest hot buttons as you react. You can practice turning moments of potential miscommunication or friction into opportunities to speak to each other’s good intent -- and feel the satisfaction of “doing what you should be doing.”
2. Experience The Freedom of Agreed Upon Constraints
Be a part of a regular tribe that is both bounded and unbounded. That means they have agreed-upon ground rules, from the structure of their meetings to the explicit, mutually beneficial ways they share and collaborate. Yet they are also able to experiment, learn faster from each other, propose changes in how they operate and evolve. Such groups are as diverse as Quantified Self, Rotary International, Y Combinator and mastermind groups. “In a world of constant flux where our skill sets have a shorter life” we can thrive as we hone our capacity for flexibility and play in situations that are both bounded and unbounded according to A New Culture of Learning co-authors, John Seely Brown and Douglas Thomas.
3. Start Your Success Path by Recognizing Your Personality Strengths
Shell incorporated into one test for his class elements from many tests including Myers-Briggs Type Indicator, VIA Survey of Character Strengths, and StrengthsFinder. He came up with four dimensions of your personality, “that most directly relate to success”:
• Your attitude about other people: Social Styles
• Your drive to achieve: Action Orientation
• Your inclination towards intellectual or creative activities: Mind-sets
• Your emotional response system: Emotional Temperaments
In taking the test, which you’ll find in his book, you can view sixteen personality traits so you can characterize yourself along these four dimensions. Plus, at each step on the journey toward defining the success that will give you happiness, Shell provides apt self-assessments and other exercises. It’s clear this course has been a long-time, passionate interest of his that has given him the sense of meaningfulness he hopes others will feel in following this path.
“In the long run, we will succeed because the people who truly matter in our lives will appreciate us for who we really are, not who we are trying to be,” Shell told historian and writer Steven Ujifusa.
If you’d like to read more on these intertwined topics see The Ultimate Litmus Test, What To Do Before Happiness Happens In Your Life?, Want To Buy Happiness?, 11 Steps To Happiness At Work, and The Happiest Jobs In America.
|
0f8f04adf2348b0f7e337e0f6f2e1930 | https://www.forbes.com/sites/kareanderson/2013/09/14/leverage-good-news-to-pull-others-closer-and-feel-better-together/ | How To Pull Others Closer And Feel Better Together | How To Pull Others Closer And Feel Better Together
These three true stories share a vital trait that you can adopt to boost your mood -- and your value and visibility with others -- as an individual and for your organization: Thief Apologizes And Returns Money To Nashville Market 11 Years Later, How Google Maps Led To the Rescue of A Los Angeles Stray Dog, and Valentine's Gesture From Dead Husband To Wife Will Make You Melt.
The common trait? They are uplifting good news stories.
As Huffington Post has discovered, good news coverage is a gold mine, which would not surprise Contagious author Jonah Berger, who found that we are most likely to share good news on social networks. The percentage of referral traffic from social channels to Huffington Post’s Good News section is much greater -- almost three times more -- than the amount of social referral traffic to their overall website. These stories boost readership, engagement and advertisers’ interest.
And Huffington Post’s Good News Facebook page already has almost 40,000 followers. As their Editorial Director of Social Impact Platforms, Riddhi Shah, enthused to me, “We've seen a lot of growth in the last two months as we've increasingly focused on follower engagement -- we ask them questions, ask fans to send us photos, share inspirational quotes, happy facts of the day etc. We've transformed it from just a place for news updates from HuffPost Good News to a destination for sharing stories, insights and quotes that inspire awe -- and sometimes spur others to emulate those actions and share them afterwards.”
1. Flourish Holding The Three-Faceted Gem of Sharing Good News
Sharing good news generates three nourishing benefits. You boost happiness and inclination towards acting in good will in yourself and in those who see the story, plus you shine in the reflected glow of the story you share.
As creating and sharing good news becomes a habit you may move beyond the momentary hedonic highs to a more enduring mood of eudaimonia. Seeing good news on television, for example, lifts one’s mood, according to Michelle Gielan, founder of the Institute for Applied Positive Research, who speculates that watching such news on an ongoing basis can have a more prolonged effect. And I speculate that becoming a champion of good news sharing can make you a magnet, pulling others closer and bringing out their better side and yours, when around each other.
2. If You Are On The Lookout For Good News You’ll Find More
Helpfulness counts as good news and is an indelibly credible way for others to learn more about your organization. Keep an eye out for situations where your customers, employees or vendors create unexpected moments of happiness for others. They may discover how a practice or device in one situation could help yours, provide over-the-top help, respond heroically in a dire situation, or offer a valuable partnership or other opportunity.
“Ah” and “aha” -generating news can come in many forms. For example, Kevin Dutton vividly describes situations in Split-Second Persuasion where someone instinctively and instantly says something or takes action that turns a potentially volatile situation into a moment for collective bonding. People in those situations can’t help sharing how they felt.
3. Make Your Good News Especially Memorable.
Get specific sooner. Notice how the HuffPo headlines cited earlier had specifics like Nashville and Valentine? First tell the story, then cite the inherent takeaway lesson that can spur others to emulate the good behavior. Hint: the specific detail proves the general conclusion, not the reverse. That’s why these stories are powerful specificity engines that speed others’ sharing of your core message and make it memorable.
Tie your engagement-inducing good news sharing to a holiday, specific positive emotion or explicit goal such as spurring camaraderie among your customers, constituency or online community. For example, HuffPo sought to encourage kindness during Thanksgiving, inspire gratitude and help its readers feel closer to each other.
4. Facilitate Bragging Rights: Help Others Look Good When They Participate
Provide multiple ways others can respond and add to the good news. In so doing you are creating what Tell to Win author Peter Guber calls a purposeful narrative where others see a role they can play in the story, and add to it as they do. Huffington Post, for example, makes it easy to comment on the story and when some comments involve a related story, they sometimes reach out to involve that reader in a separate follow-up column. Plus readers can see which friends of theirs from other social channels have liked or commented on a story.
And, in one click, we can Tweet a story we like. Hint: How can you reduce the steps it takes for others to share your good news stories?
What other companion categories of stories make you feel good in sharing? For example, Huffington Post launched a Third Metric section to cover diverse examples of “redefining success beyond money and power” from Sue Parks: Why Exercise Is a Great Way to Boost Your Bottom Line to Why This Banker Quit Wall Street to Become a Monk and Improve Your Life by Improving the Lives of Others. By hosting a conference on the theme they created a further way for people to bond around the topic and “brag” about their favorite stories face to face.
Enabling Others To Use Best Talents In Doing Good Is Doubling Happiness
For an inspirational example of an all-volunteer, scalable generator of good news, see ServiceSpace’s KarmaTube where individuals use technology to take collective action on specific projects for the greater good, and learn from each other so they can adapt those projects to other situations.
Like Quantified Self and Shareable the organizational model makes people feel good about participating because they know they are using best talents together on worthy efforts. Such models raise the bar of expectation as we view where we choose to contribute. Mutuality matters.
Just remember, as Edward R. Murrow once said, “We cannot make good news out of bad practice.”
|
1a59560fed1e17819c7dff086064bf9d | https://www.forbes.com/sites/karencahn/2018/02/20/its-all-about-demand-and-supply-kid/ | Crowdfunding Can Help You Gauge Demand Before You Invest In Supply | Crowdfunding Can Help You Gauge Demand Before You Invest In Supply
Reid Miller Apparel Reid Miller
One of the first concepts you learn in economics is supply and demand. For entrepreneurs, investing in the right amount of supply is imperative or you could find yourself drowning in product that you can’t sell. But how can you efficiently suss out if there is a market for your product without having the actual supply?
This is the genius of crowdfunding that nobody tells you about. Crowdfunding allows you, the entrepreneur, to assess demand before you invest years of your life, tons of your own money, or worse, take out a loan and go into debt, investing in supply that may or may not sell.
Crowdfunding turns traditional business economics on its head. It is an extremely efficient, low-risk opportunity for entrepreneurs to step outside of their comfort zone, test out their ideas, and best of all - generate demand before investing in supply while raising capital for their business.
Many of the startups on iFundWomen, our crowdfunding platform, are in the very early stages and their campaign is one of the first times they’re putting their idea out there to be judged. As we embark on year two of helping women-led businesses get funded, we’ve seen creative and compelling strategies from entrepreneurs, working to iron out their proof of concept, which can easily be applied to getting your idea off the ground.
Build passionate customers by involving them in the product development process.
Deborah Owens, a financial services expert and wealth coach, is crowdfunding to develop an app called WealthyU. WealthyU is the Weight Watchers for paying down debt, saving and investing. It provides users with the tools and education they currently lack to take control of their finances. Not only has Owens raised almost $30,000 to-date, but she has also accumulated a community of beta testers that she can rely on for feedback once her prototype is ready. Tapping into her customer base to gather feedback ensures she is building a product that meets her customers’ needs, and involving them in the product development process gives her consumers a deeper connection to her brand.
Generate demand for your initial product with limited quantities
Sat Nam Babe is a socially conscious line of play and yoga clothing for kids - infants to age five. Founder Jennifer Coulombe set out on a mission to raise $10,500 to produce Sat Nam Babe’s first line and build the company’s e-commerce site. Coulombe built excitement for her brand by messaging her intent to launch through crowdfunding in the months leading up to her campaign. The focus on limited quantities created a feeling of exclusivity and curiosity around her brand. Her reward strategy smartly included letting backers be the first to get their hands on her products. Those early customers helped fund the launch of this sustainable clothing line and they’ll be her company’s first brand ambassadors.
Prototype a new production model with the funds from your campaign
Reid Miller Apparel wants to create a production model for U.S.A. custom-made womenswear, and founder Reid Miller is using her crowdfunding campaign to test the process. Miller was tired of being unable to find clothing that fit, so she raised $15,000 to test the new model on five beta clients. Once the production model is perfected, she’s ready to custom make 100 riding jackets for customers. The genius behind Miller’s idea is that all of the data she’s been able to obtain from her campaign can be used to show potential investors that women truly want better fitting clothing.
The reality is almost half of startups fail due to a lack of product-market fit. Crowdfunding gives you the opportunity to fail fast and fail cheap. Learning quickly that nobody wants your product may be a difficult pill to swallow, but knowing when to pivot is a necessary skill for successful entrepreneurs.
|
fafa92808e337a9130f58f6f9380ac63 | https://www.forbes.com/sites/karenhigginbottom/2014/05/14/employees-feeling-included-at-work-builds-high-performing-teams/ | Employees Feeling Included At Work Build High-Performing Teams | Employees Feeling Included At Work Build High-Performing Teams
Employees who felt more included at work were more likely to report going above and beyond the call of duty, according to a global report published by Catalyst.
The Inclusive Leadership: The View From Six Countries report surveyed more than 1,500 employees from Germany, Australia, China, India, Mexico and the United States.
The study revealed that employees, both men and women, who felt included were more likely to suggest new product ideas and ways of getting work done. It also found that leaders created an inclusive culture that improved team productivity and innovation when they made employees feel valued for their different talents and experience, while still fostering a team spirit based on common goals and attributes.
Catalyst defined ‘inclusion’ as when employees perceived they were both similar and distinct from their co-workers. ‘Belongingness’ and ‘uniqueness’ were key ingredients for inclusion in most countries, meaning that people wanted to stand out from the crowd but not too much.
“With DAX-listed companies potentially required to fill at least 30% of their supervisory board seats with women by 2016, it’s imperative that organizations take measures to make workplaces more inclusive, so that women, and all team members, can advance and prosper,” said Allyson Zimmermann, senior director, Catalyst Europe.
There were variations between the countries in the impact of inclusive workplaces on innovation and team citizenship (colleagues being supportive of each other). In China, employee perceptions of inclusion accounted for 78% of innovation and 71% of team citizenship. In Australia and the United States, employee perceptions of inclusion accounted for 19% to 22% of innovation, according to the Catalyst report. The only exception was India, where employees didn’t differentiate between belongingness and uniqueness but saw them as two sides of the same coin.
In addition, the report identified four leadership behaviors that predicted whether or not employees felt included. Humility was one of four altruistic leadership skills that helped employees feel more included in the workplace, in all six countries studied. Inclusive leaders believe their main obligation is to support and assist direct reports through:
• Empowerment – Enabling direct reports to develop and excel
• Humility – Admitting mistakes; learning from criticism and different points of view; seeking contributions of others to overcome limitations
• Courage – Putting personal interests aside to achieve what needs to be done; acting on convictions and principles even when it requires personal risk-taking
• Accountability – Demonstrating confidence in direct reports by holding them responsible for performance they can control
“If leaders are going into different countries then they don’t need a different toolset for each country,” commented Zimmermann. “When it comes to inclusive leaders, it’s the same across the board. What really creates inclusive teams is having leaders with those four traits.”
|
eb104bcf6d3b69b4015547fa981daec2 | https://www.forbes.com/sites/karenhigginbottom/2014/10/21/employees-working-in-offices-with-natural-elements-report-higher-well-being/ | Employees Working In Offices With Natural Elements Report Higher Well-Being | Employees Working In Offices With Natural Elements Report Higher Well-Being
Employees who work in environments with natural elements reported a 13% higher level of well-being and are 8% more productive overall, according to a report of 3,600 workers in eight countries in Europe, Middle East and Africa (EMEA), commissioned by modular flooring experts Interface.
The Human Spaces report led by organisational psychologist Professor Sir Cary Cooper found less than ideal working conditions for EMEA employees. Two fifths of EMEA office employees have no natural light in their working environment, over half don’t have access to any greenery in their working environment and 7% of EMEA workers have no window in their workspace. Spain reported the highest number of office employees with no window (15%), and also had the most stressed workforce. In contrast, Germany and Denmark reported the least number of workers with no windows (2% and 3% respectively), and had the happiest workforce.
With nearly two-thirds (63%) of EMEA office workers now based in either a town or city centre and spending on average 34 hours per week in the office, their interaction with nature is becoming increasingly limited, the report argued. Despite city-dominated lives, the research found workers have an inherent affinity to elements that reflect nature.
Interestingly, 40% of workers across EMEA said they would feel most productive at their own desk in a solitary office, while 31% would feel most productive at their own desk in an open plan office. Flexible working was a surprisingly low preference, with just 11% of workers choosing a space that suits their needs as their most productive way to work.
“The work environment has always been recognised as essential to employee well-being and performance but often purely as a ‘hygiene factor’,” remarked Cooper. “The report clearly illustrates the connection between the impact of working environments and productivity. It’s no coincidence that the most modern employers now take a new view, designing environments to help people thrive, collaborate and be creative. Being connected to nature and the outside world, biophilic design, to give it its real name, is a big part of that.”
The research findings have implications for design in the office space, according to Mandy Leeming, design and development manager (UK) at Interface. “Contact with nature and design elements which mimic natural materials has been shown to positively impact health, performance and concentration, and reduce anxiety and stress. When it comes to creating office spaces that achieve this, it’s about taking the nuances of nature that we subconsciously respond to, such as colors and textures, and interpreting them. Ultimately improving the well-being, productivity and creativity of the workforce is key to the success of market leading organisations.”
EMEA workers listed the following top five natural elements on their wish list for their ideal office space:
• Natural light
• Quiet working space
• A view of the sea
• Live indoor plants
• Bright colours
|
43c71460b21594ba016045aba52658ce | https://www.forbes.com/sites/karenhigginbottom/2014/12/05/the-perils-of-the-office-christmas-party/ | The Perils Of The Office Christmas Party | The Perils Of The Office Christmas Party
The office Christmas party is fraught with perils, usually due to the over-consumption of alcohol. There is the danger of having had too much to drink and telling your boss that you hate them or posting inappropriate ‘selfies’ of you at the party all over social media. Now, a survey of more than 1,000 UK workers and managers by the Institute of Leadership and Management (ILM) reveals which misdemeanours at the Christmas party cause the greatest unease when you return to the office.
So what are the consequences of the office Christmas party and what should you watch out for? The ILM survey found:
• Almost 9 out of 10 workers (87%) have seen colleagues drink too much
• 48% have gone to work with a hangover after their office party
• 28% have heard staff revealing their colleagues’ secrets
More than half the managers surveyed (51%) said they would reprimand workers for being rude to each other, while 28% would tell workers off for revealing their colleagues’ secrets. The ILM survey revealed that managers are indeed human beings, with only 10% of them reprimanding their workers for coming in with a hangover after the Christmas party.
Charles Elvin, chief executive of ILM, commented: “Christmas parties are a great way for companies to show their appreciation to staff for all their hard work during the year, and it can also be a good opportunity for managers to get to know their staff in a more informal setting. However it is important for all to remember that they are still, essentially, in a working environment.”
More than 9 in 10 managers hoped their staff would enjoy themselves, and 24% are keen for staff to let their hair down and have a dance. However, 17% would reprimand staff for drinking too much.
Charles continued: “Fall-out from the festive party can be a worry for managers. It is important that leaders communicate exactly what behaviour will be tolerated and what behaviour will not, and as always, lead by example. You can’t offer a free bar all night, then complain when people drink too much.”
Although nearly 30% of workers thought their bad behaviour at a work Christmas party had had a negative impact on their career, only 3% reported ever being rebuked for their festive antics.
The survey asked managers what was acceptable and unacceptable behaviour at the Christmas office party. Below are the results:
Do:
• Enjoy yourself (94%)
• Get to know people from other areas of the organisation (62%)
• Discuss personal interests (40%)
• Dance (24%)
• Network with senior staff (13%)
Don’t:
• Be rude to your colleagues (51%)
• Shout at the boss (41%)
• Reveal your colleagues’ secrets (28%)
• Drink too much (17%)
• Remove items of clothing (16%)
|
3254438f73fce4761f2e1cada96b8a06 | https://www.forbes.com/sites/karenhigginbottom/2015/10/24/focus-on-short-term-gain-means-next-scandal-waiting-to-happen/ | Focus On Short-Term Gain Means Next Scandal Waiting To Happen | Focus On Short-Term Gain Means Next Scandal Waiting To Happen
The repercussions of the Volkswagen (VW) scandal are still being felt among its various stakeholders such as customers, investors, suppliers and employees. However, new research indicates that nearly a third of business leaders would choose to continue rewarding high-performing individuals regardless of the values they demonstrate, suggesting that the next big corporate scandal is in the pipeline.
Photo by Sean Gallup/Getty Images
The Chartered Institute of Personnel and Development (CIPD) survey of 3,500 business leaders and 2,200 HR practitioners around the world explores ethical decision-making in business and the values that influence corporate behavior. The findings indicate that the focus on short-term gain means that the next corporate scandal is waiting to happen. Just under a third of business leaders report they have to compromise their principles to meet current business needs. More disturbingly, less than half of business leaders and HR practitioners in the survey believe their core values cannot be compromised whatever the context. The good news is nine in 10 business leaders claim they would protect the long-term organizational health and reputation when given a choice of pursuing expedient or sustainable decisions.
“The VW scandal is a stark reminder that organizations –particularly large and complex ones- need to think carefully about how they create organizational culture and how they increase the chance that people at all levels of the organization will make ethically sound decisions,” remarked Peter Cheese, chief executive for the CIPD. “Our research suggests that far too many business and HR leaders continue to be focused on the short-term at the expense of the long-term interests of the organization and its people. This risks unintended consequences when people try to cut corners or maximize short-term returns without thinking about the consequences of their actions on all their stakeholders, which includes employees, customers, suppliers and communities and as we’ve seen in the case of VW, the shockwaves are considerable and can significantly damage even the biggest brands.”
Cheese called for more transparency and consistent reporting of the people and organizational elements of business to get more insight on how an organization is working, how it’s looking after its people and some understanding of corporate cultures. “We need to move beyond accounting to accountability. Our research raises questions about the purpose of business and whether treating people fairly should be seen as a means to an end or an end in itself.”
So what’s the role of HR in the aftermath of a corporate scandal? Jo Sweetland, managing partner and practice head for HR at Green Park suggests the following steps:
• HR and communications need to act fast and honestly. Internal communication is key, with clear and consistent messaging to staff. Run surveys and address concerns head on, and ensure line managers are briefed, up-to-date and ready to answer any questions from their staff honestly.
• HR needs to keep its ear to the ground and gauge how everyone is feeling; stay informed of what is being said by employees. If there has just been a mass redundancy, it’s not the ideal time to have a ‘team party’.
• Where possible, try to create a positive work environment, but don’t ignore the situation, as gossip and media coverage will add to an already demotivated workforce, causing a huge impact. In some instances, you need to ensure staff are versed on what to say to the public if ever put in that situation.
• Make staff feel secure. You don’t want to lose your talent because of this situation. Talk to them about their future options and invest in them with more training and career progression. You don’t want to lose them while they are feeling vulnerable and getting approached by other firms.
|
b11b749ea04ab7e865b0b9062f51ecb1 | https://www.forbes.com/sites/karenhigginbottom/2017/11/08/employers-failing-to-deliver-on-digital-skills/ | Employers Failing To Deliver On Digital Skills | Employers Failing To Deliver On Digital Skills
Shutterstock
Firms are failing employees in their desire to upskill and advance their careers especially when it comes to digital skills, according to a report from Capgemini and LinkedIn.
The survey of 753 employees and 501 executives at the director level or above covering nine countries including the U.S. and the U.K. revealed that 29% of employees believe their skill set is redundant now or will be in the next one to two years. The survey found that the digital talent gap is widening with more than half of organizations agreeing that the digital talent gap is hampering their digital transformation programs and that their organization has lost competitive advantage because of a shortage of digital talent. Budgets for training digital talent have remained flat or decreased in more than half of the organizations.
Claudia Crummenerl, head of executive leadership and change at Capgemini argued that the digital talent gap was both a business and HR issue. “Ultimately, the digital talent gap will damage firms’ bottom line if they don’t address it now. If employees don’t feel like they are continuously learning or are worried that their skills are redundant, they will either leave and take their knowledge and experience with them or emotionally resign and only perform their duties.”
The findings are a wake-up call for learning and development professionals, said Crummenerl. “The learning experience needs to change and adapt to digital and be more employee/learner centric. Otherwise, the trend of employees looking for development opportunities themselves will continue as they feel the offers in the company are boring or irrelevant to their work.”
The report identified that people with experience in hard digital skills, in areas such as advanced analytics, automation, artificial intelligence and cybersecurity, are in high demand. Yet it is soft digital skills, such as customer-centricity and a passion for learning, that organizations want most, and the greatest gap in soft digital skills lies in comfort with ambiguity and collaboration. “To get hard skills up-to-date, it depends on which skill we’re looking at. There is a difference between programming or mobile app development and AI or robotics,” remarked Crummenerl. “Most likely, firms are having to send employees back to school.”
So why aren’t firms focusing on training employees in ‘soft’ digital skills? Crummenerl believes that firms are overlooking soft skills such as flexibility, the ability to learn and an entrepreneurial spirit. “Often companies mistakenly think such skills are less important for innovation than technical talent. Yet having soft skills is crucially important for taking and implementing decisions. Part of the issue is that nobody has yet grasped what digital culture means.”
The survey found that employees felt organizations’ digital training programs were not hugely effective, with more than half of employees saying training programs are not helpful or that they are not given time to attend them. Learning and development professionals need to take a completely different approach than they’ve used before, warned Crummenerl. “HR needs to find out what the pain points are and how employees want and need to learn. The self-understanding of a learning department will change dramatically. It’s not a training provider anymore but a curator of knowledge and learning that delivers learning for employees in a relevant and digestible manner.”
|
3ef0ed71b28528011e6302f09cade251 | https://www.forbes.com/sites/karenhua/2016/07/19/travel-app-world-pokemon-go/ | The Best 'Pokémon GO' Travel Photos -- From Stonehenge to Niagara Falls | The Best 'Pokémon GO' Travel Photos -- From Stonehenge to Niagara Falls
Pikachu or it didn’t happen.
Gallery: The Best Pokémon GO Travel Photos
Now, it’s not enough to just visit the iconic wonders of the world—Pokémon GO goes along, too.
Nintendo’s augmented reality mobile app prompts players to collect from among 250 digital monsters at real-life Pokéstops, checkpoints that range from important landmarks like the Sydney Opera House to obscure nooks of a city. While Pokémon GO is currently available mainly in North America, Europe and Australia, the game is catching fire globally and is set to hit Japan tomorrow, July 20.
On the one hand, the drive to seek out the most elusive creatures, like Pikachu, galvanizes players to explore parts of a city they wouldn’t normally visit, perhaps leading them to hidden gems. In fact, many parks, monuments and restaurants have taken advantage of the extra traffic.
We have two #PokemonGo Poké Stops at the museum! Or should we say Mew-seum? Learn more: https://t.co/niHH6Y9qI9 pic.twitter.com/QFvoIzihBw — Museum of Modern Art (@MuseumModernArt) July 13, 2016
On the other hand, players are so focused on the game that they can’t fully appreciate the sites they discover. If social media hasn’t already drawn attention away from being fully immersed in the moment, then traveling with Pokémon GO certainly has.
The game has some players visiting the zoo more to find the virtual animal exhibit than to see actual animals. At sports games, some players can be found with their eyes glued to their screens instead of the field they paid to see.
National Zoo photo (left) by Ikodl on Reddit. Wrigley Field photo (right) by 4n0x1_thomas on... [+] imgur.
However, there have been more extreme examples of individuals who became too engrossed in their menagerie of pocket monsters. Instances of two men falling off a cliff in San Diego, and of people crashing cars or being hit by them while playing, prove that there are times when the smartest way to use Pokémon GO as a travel app is not to use it at all.
Pokémon GO, which was released on July 6, has now become the fastest game to top the App Store charts. Within a week of its release, its 21 million players made it the most active mobile game ever in the United States.
Amid the rising Pokémon craze, channel your inner Ash Ketchum wisely.
|