Enclave Audio delivers 5.1 surround sound (almost) wirelessly
John Biggs | May 6, 2016
In the list of things that we want science to solve, I suspect the creation of a wireless 5.1 speaker system that is easy to use and set up ranks somewhere between a cure for shingles and the colonization of Mars. Thankfully, there are people on the job. Enclave Audio has been working on a wireless 5.1 sound system since 2013, and the company recently released its first product at CES, the CineHome HD. The kit, which costs $1,100, is an all-in-one solution with a center speaker/receiver, two front speakers, two smaller rear speakers, and a subwoofer.

Setup is simple: you connect the center speaker/switcher to the TV via HDMI and then plug in up to three HDMI-compatible devices. You can also plug an optical cable into the back or connect to the system via Bluetooth. You then plug in all of the satellite speakers. Each speaker is fairly small — the rear satellites are about 4 inches tall and unobtrusive, while the subwoofer is 17 inches tall. Each speaker connects to the center receiver wirelessly and automatically — there is no pairing process — and in theory the system will stay connected and power on out of standby when you’re ready to watch TV or play a game.

That’s basically it. Enclave doesn’t offer much in the way of position tuning beyond a noise generator to test each speaker in turn, and the on-screen setup is as spartan as the rest of the kit. It took me about fifteen minutes to set things up in a spacious basement rumpus room and, aside from finding outlets for the power supply attached to each speaker, setup was worry-free.

I’m honestly pleased with the CineHome. It is a clever solution to a thorny problem, and considering that Klipsch charges over $5,000 for its wireless system, the price for the Enclave is just right. However, there is a fairly annoying problem that might not be an issue for folks with more outlets on their walls: Enclave uses 2×3-inch wall warts for each speaker.
This means you can only fit a few on a power strip, and if you have other stuff plugged in you’re out of luck. It’s the age-old dilemma: how can you make truly wireless speakers without wires? Sonos fixes this by putting the power supply into the speaker and offering a smaller cable rather than a wall wart. Enclave doesn’t. Further, the system takes a bit of time to connect and start playing, and the on-screen display is a bit laggy. It’s not bad enough to discount the system as a whole, but it’s still slightly problematic. The remote control isn’t very responsive, either, which makes things seem laggier. It’s not a showstopper, but it’s something to consider.

The home-theater-in-a-box world is full of contenders vying for the space below your table. You can get really cheap wired systems for a few hundred dollars, and you can get something like a Sonos setup for much, much more. Enclave offers two benefits: the price and the wireless speakers. The speakers themselves are just fine in terms of audio reproduction, and the kit supports Dolby and DTS 5.1 digital surround as well as other digital standards, which means you’re going to hear the surround with enough clarity to count. These aren’t audiophile-quality speakers, but they will make your movies sound great.

In the end, a kit like the CineHome is a no-brainer if you’re trying to avoid (too many) wires. There are no truly wireless speakers — you always have to power them — so a solution like Enclave’s is the next best thing, and it works well. You’re getting a good, compact system with a nice frequency range and enough separation between front and back to feel like you’re in the middle of the action. The power supply issues notwithstanding, Enclave did a good job with this product, and the company plans to let you add extra speakers for 7.1 sound down the line.
The CineHome is a good start, and a promising product like this could mean the end of wires, if not the end of shingles and/or the colonization of Mars. A TV watcher can dream, can’t she?

$1,100 5.1 wireless speaker system in a box
Black speakers
5-inch tall rear speakers, 8-inch tall front speakers

Pros:
Easy setup
Inexpensive
True 5.1 sound

Cons:
Frustratingly large power supply
Limited ports
Slow to “boot”

13 TechCrunch stories you don’t want to miss this week
Anna Escher | May 6, 2016
TechCrunch is gearing up for Disrupt NY next week. This week, the battle between Brazilian authorities and WhatsApp continued, Tesla shared details about how effective its bioweapon defense mode is, and Uber got hit with another lawsuit. Here are the stories you don’t want to miss.

Facebook’s messaging app WhatsApp was blocked in Brazil. A Brazilian judge ordered telecom providers in the country to block WhatsApp in a dispute over access to encrypted chat records related to a drug investigation. WhatsApp argued that it cannot access the chats in an unencrypted form and therefore cannot provide the required records to the court. Despite the judge’s order, the ban was lifted roughly 24 hours later.

Connie Loizos wrote about how Uber’s stock option policy ties employees to the company. Startup employees have to exercise their options within 90 days of leaving a company or else lose them, and, at Uber, that cost is simply too high.

Tesla shared details about how effective its particulate filters are. They are so good that not only do they clean up the air inside the car, they make the air around the car cleaner, too. We see you, Tesla.

Another lawsuit was filed that pertains to all current and former Uber drivers in the U.S., excluding California and Massachusetts. The suit asks the court to classify Uber drivers as employees rather than independent contractors, “recover unpaid overtime wages and compensation,” reimburse expenses, and pay the tips that “were earned but stolen by Uber or were lost” due to Uber’s communications and policies. Uber also rolled out a program that allows you to use your Uber-linked credit card at select merchants to get money off rides.

Quantum computing is still very much in the early research stage, but IBM is now giving researchers access to a 5-qubit quantum computer. Meanwhile, Australian entrepreneur Craig Wright claimed to be Bitcoin creator Satoshi Nakamoto, but skeptics say the story does not add up, given that his “proof” could simply be an old signature signed by Satoshi and what was previously known about how the Bitcoin inventor operated in the past. Wright has previously been named as Satoshi but denied it at the time.
Josh Constine wrote about the dehumanization of Facebook Messenger, and how spammy chatbots could soon depersonalize the messaging experience. Facebook’s gamble to win the future of chat could create diverging interests from its user base, and we could just end up going back to good old SMS to talk to our friends.

Contributor Jon Stokes wrote about Oracle’s acquisition streak. Last week Oracle bought Textura, giving the database giant a vertical construction solution in the cloud. This week it bought Opower, giving it a vertical utilities cloud solution. We’re beginning to see a pattern here.

Quizlet, the makers of popular web and mobile study tools and an edtech success story, hired a new CEO: the former VP of product management at YouTube. We also saw footage from a suborbital rocket launch that climbed to 396,405 feet.

Contributor Matt Heiman wrote that Android commands the lion’s share of the smartphone market globally. But you wouldn’t know it here in Silicon Valley — almost everyone has an iPhone. As the consumer technology landscape evolves over the next five years, however, there are a number of reasons to believe that could change.

Researchers have developed a flexible holographic smartphone screen that plays a mean game of ‘Angry Birds’
Brian Heater | May 6, 2016
The first thing you do upon developing a flexible holographic smartphone display? Fire up a game of Angry Birds, naturally. All the rest of that smartphone functionality can wait until you’ve finished a few rounds of slingshotting avian missiles. Maybe it’s not the first thing — but it was clearly high on the list of the researchers who developed the HoloFlex. And indeed, the game makes a pretty sizable cameo in the demo video for the new technology, which utilizes motion parallax and stereoscopy to render 3D images without the need for glasses.

The technology is built into a Flexible Organic Light Emitting Diode (FOLED) touchscreen that can be bent by the user. In addition to the oft-stated upsides of a bendy smartphone, a built-in bend sensor (similar to what the lab deployed in its recent ReFlex prototype) leverages the motion as another means with which to interact with the handset. The pair of disembodied hands in the demo video use the motion to move objects along the phone’s Z-axis. In the case of Angry Birds, that means stretching back the bird-catapulting slingshot. The tension on the phone correlates with that of the elastic band as it stretches back.

As is pretty clear from the video, this is all still very much in its early stages. Most notably, there’s the extremely low-res 160 x 104 resolution (a result of dividing up the full HD display to achieve the desired 3D effect). But the lab has some grand plans, including holographic video conferences. “When bending the display, users literally pop out of the screen and can even look around at each other, with their faces rendered correctly from any angle to any onlooker,” researcher Dr. Roel Vertegaal says in a release announcing the new technology. The lab, naturally, references Princess Leia in the release to really drive the point home. Dr. Vertegaal, you’re our only hope. Internal specs, for those who care about such things on their display prototypes, include a 1.5 GHz processor and 2 GB of RAM running Android 5.1.
The technology will be on display at a conference in San Jose next week.

Musical.ly raising $100 million at $500 million valuation for social music videos
Katie Roof | May 6, 2016
Headquartered in Shanghai, musical.ly has gained popularity across the globe; the team says it has 60 million users. The lip-sync app has similarities to other short-form video apps but is more of a social media platform. The company calls it a “video social network.” Users can follow accounts to keep up with their favorite performers.

While the business has gained significant traction, its details have largely remained under the radar, and musical.ly has received very little media attention. The investors would not comment on the current fundraising round, but they are especially enthusiastic about musical.ly; Hans Tung, a board member at musical.ly and a partner at GGV Capital, is among them.

The focus is on music videos right now, but we are hearing that musical.ly has some pending updates that would broaden the app’s appeal to older demographics. The app is free and available on both iOS and Android.

RewardStyle helps influencers make money from social
Jordan Crook | May 6, 2016
These days, you can’t swing a bag of cats around without hitting some sort of social influencer. But how do these people make money from their content? RewardStyle, a Dallas-based startup, provides a platform for influencers and bloggers to get paid for all the sales they inspire out of consumers. Though the company has been operating under the radar, it has grown to generate more than $1 billion in sales for its 4,000 retailers and 575,000 brands worldwide since launching in 2011.

It all started when founder Amber Venz Box, a personal shopper and jewelry maker, was running her own style blog. She started seeing loads of sales going through from her content but wasn’t getting any payout from the retailers. Effectively, she had cut herself out of her own business. She decided to build something, with the help of her boyfriend (now husband) Baxter Box, that eventually turned into RewardStyle.

Here’s how it works: bloggers can create clickable links from their content (pictures, text, etc.) that lead directly to retailers and brands. When a reader clicks through and makes a purchase, both the blogger and the retailer can track that purchase and send a commission to the blogger for the lead. For a long while, this system worked out just fine, and RewardStyle continued to grow in both influencers and retailers. Then Instagram came along.

As Instagram picked up steam as a major platform for influencers, RewardStyle was left with the challenge of connecting sales from influencers to retailers without the ability to track links — as you know, Instagram doesn’t allow links on pictures. To overcome this obstacle, RewardStyle launched LikeToKnowIt, a platform built specifically for Instagram. Bloggers signed on to the LikeToKnowIt platform have a specific link for their profile (the only link allowed on Instagram) that takes their followers to a unique web page.
This page lists each item that appears in every image of the influencer’s Instagram feed, complete with shoppable, trackable links so that the influencer gets paid out. LikeToKnowIt also lets users sign up for a newsletter, which pushes every photo they like on Instagram directly to their inbox. In the two years since launch, LikeToKnowIt has generated more than $100 million in revenue, with 1.5 million users subscribed to the system and more than 1,000 LTKI posts created every day. With the fashion space relatively conquered, RewardStyle has now launched other verticals, partnering with home decor influencers and retailers like West Elm. If you want to learn more about RewardStyle, you can hit up the company’s website.

Uber appoints former EC VP Neelie Kroes to its public policy board
Natasha Lomas | May 6, 2016
Taxi app Uber, which continues to battle regulatory clampdowns and legal confrontations in Europe, has appointed a former VP of the European Union’s executive arm, the European Commission, to its public policy board as it looks to grease the gearbox of its regional fortunes.

The ex-VP in question, former digital agenda commissioner Neelie Kroes — who stepped down from her role in the European Union’s executive body in November 2014, after serving a five-year term in digital policy (and some 10 years in all as a European Commissioner) — was a vocal supporter of Uber during her time in post. Prior to leaving office in 2014, for example, Kroes loudly condemned a ban of Uber in Belgium, declaring that “Uber is 100% welcome in Brussels and everywhere else as far as I am concerned.” So the move hardly comes as a surprise, although for anyone concerned about the revolving door between senior politicians and the private sector, it should raise some serious eyebrows.

The appointment of Kroes by Uber falls outside the EC’s 18-month conflict-of-interest period, which is intended to throw a wedge in the unsightly revolving door that sees ex-politicians with powerful public sector connections switch to selling their services to the companies that used to lobby them whilst they were in public office. Albeit not a permanent wedge, clearly.

Kroes’ appointment is especially interesting given that the EC is currently paying some very specific attention to Uber. Last October, Kroes’ former employer launched a study of Uber’s business — aiming to probe the social, economic and legal consequences of its operations, with a view to deciding whether new legislation might be needed to properly regulate its business. The study also looks at the impact of similar “transportation network companies” (TNCs), as the EC calls them.
One key question is whether the EC will end up deciding TNCs should be considered transportation services, and regulated as such — rather than just as digital platform providers (Uber’s preferred self-categorization). The EC describes this review as a “first analysis,” setting out to gather data on the various questions and concerns being generated by these types of business models. So there’s no explicit threat to Uber’s modus operandi at a European Union level, as yet. But the Commission does suggest there could be scope for a coordinated response in the future. “European institutions have the competence to bring together the fragmented response to TNCs which is happening at the national level. This could be done through legislation, regulatory actions or the judiciary,” it notes.

Evidently Uber will be hoping Kroes’ presence on its public policy board, and the advice she is able to feed it, helps steer away the threat of any new Europe-wide moves to undermine its regulatory advantages versus traditional taxi firms. In a post announcing the full complement of policy board members (eight in all), Uber’s chief advisor and member of the board of directors, David Plouffe, expends a lot of pixels talking up the “societal benefits,” as Uber sees it, of its business, before moving on to the regulatory piece. “Just a few years ago only one place (California) had a regulatory framework for ridesharing. Today more than 70 jurisdictions in the U.S. do, and many other places around the globe are following suit, including in Australia, Canada, India, the Philippines and Mexico,” he enthuses. “As ridesharing continues to grow, we look forward to the Board’s candid advice and insights.” As well as Kroes, Uber’s policy board also includes another former senior politician: Ray LaHood, the former Secretary of the U.S. Department of Transportation.
A spokesman for Uber declined to answer a question asking how it responds to criticism of the revolving door between senior politicians and the private sector. TechCrunch understands all board members are being financially recompensed for their policy work for Uber. This is not Kroes’ only private sector appointment since leaving public office: back in March it was announced she would be joining another company in an advisory capacity. That appointment became effective on May 1.

The tech geek’s burden
Joel R. Putnam | May 6, 2016
The tech geeks are coming to government. Whether they’re in our cities as Code for America volunteers or part of the federal U.S. Digital Service and 18F, programmers, data scientists and UX designers are starting to find their place in the public sector. This is a good thing. The more accomplished technology professionals devoting their talents to public service, the better.

However, there’s a potential problem. Tech professionals in government are in for a serious culture clash. Government writ large has generally been labeled by tech professionals in the private sector as hopeless and its culture, frankly, backward. Any number of online comment threads going back to the launch of HealthCare.gov make this pretty clear. Some programmers publicly say government is so behind the times that they feel taking a job there would atrophy their own skills, almost like a contagion — that working there for more than a year would make it impossible for them to keep up their skill set to the level of the “civilized world” back in Silicon Valley. Even one government engineer otherwise happy with the public sector says that her skills have atrophied.

So this leaves us with an interesting situation. One of technology’s great goals is to help society and the people in it. It turns out there’s another group of people who’ve devoted their entire careers to that: they’re called public servants. So you’d expect technologists and government officials to have some of the same goals. But the attitude many tech geeks have toward government can be patronizing at best — and open contempt at worst.

Here’s the surprising thing most people on both sides haven’t noticed yet. While you’d be forgiven for thinking this is an entirely new situation with no precedent thanks to, say, the internet, smartphones, machine learning or any number of other technological advances, you’d be flat-out wrong. We have seen something a lot like this situation before — and we know how it plays out.
We’ve made some serious mistakes from which we have learned, and we can apply the same lessons today. Where have we seen this situation before? In the world of international aid. The parallel isn’t precise, but it exists.

In older, less-enlightened international aid scenarios, the professionals who came from rich countries had a very clear preconceived idea of what needed to be solved. They engaged with the poor country’s government and people, spending a minimum of effort trying to understand local history and culture and assuming lots of stereotypes (inefficiency, laziness, a love of red tape, inability to “do” anything), which ended up offending the people they wanted to help and harming much of their efforts to do good. The pattern is so common that entire books have been published on the topic, with titles like “The White Man’s Burden.” Substitute “tech professionals” for “aid workers” and “government bureaucrats” for “locals” and you have a common attitude many tech professionals take with government. The engineers wondering why their “obvious” solution isn’t being adopted really aren’t that different from the development economists wondering why the people in poor countries aren’t behaving “rationally.” Call it “The Tech Geek’s Burden” if you like. It’s a fast way to alienate many of the career civil servants those in tech want to assist.

There is, however, an important and very useful difference: even if internet professionals have only just started coming to help government, foreign aid has been around at least since the Marshall Plan that helped rebuild Europe in the 1940s and 1950s. While it’s far from perfect, international aid has learned a few things to which tech professionals joining government might want to pay attention. The problems you will be facing on the ground are inevitably more complex than you’ve realized.
More importantly, if you drop in and tell people who have committed their lives to serving their community that “they’re doing it wrong” and they all need to stop being dumb and listen to you, you’re either going to get sent home in short order or wind up wishing you had been.

Most of what you know about the place you’re going, and its people, has been shaped by media that are in the business of telling exciting and controversial stories, not reporting the often-dull truth. Some of what you’ve heard about inefficiency, graft or laziness relative to your culture may seem like it’s true at first. But if you’re going to get anything done, you’re going to have to take the time to build relationships with the people with whom you’re working to learn about their reality and what they value, especially the things you hadn’t thought of before.

You will spot inefficiencies as soon as you start your work. But even if you can change them at the flick of a switch, you will save time and trouble in the long run if you first find out who relies on the system being the way it currently is and make sure their needs are met. You can find out a lot of this by getting curious about the history of what you’re seeing.

Good programmers know this already, but writing a new application is a lot easier than fixing a broken one. Similarly, in aid projects built using foreign expertise and parts, once the engineers leave and the system breaks, the community can be left with neither the new system nor the old one it abandoned in favor of the new one. In other words, if you create something new to replace something old, if and when it breaks, you can leave people worse off than when you arrived — unless you’ve made sure that someone there can fix any problems that arise.

Failure often goes unreported, especially in places that rely on success stories to get funded. Engineers Without Borders’ David Damberger gave a talk about a water pump system he helped install in Malawi.
When they came back a year later, they found that not only had it broken, but they discovered something they hadn’t noticed the first time: a bunch of other broken water pump systems that other groups had installed years before. If you’re going to make something, ask around to make sure the broken remnants of someone else’s attempts aren’t lying around, and, if they are, try to see what you can learn from their mistakes.

In short, if you are coming from a tech background and using your talents in public service, you can improve the lives of huge numbers of citizens. But you’re going to be a lot better at it if you recognize that you’re coming into a place with its own culture and history that does not need to be replaced by the one you’re used to. Take the time to work with the people you meet and understand where they’re coming from, and your work will be far more efficient, effective and durable in the long run.

To the next POTUS: For communities of color, encryption is a civil right
Steven Renderos | May 6, 2016
From San Bernardino to Brooklyn to Capitol Hill, the fight over encryption has boiled over this past month. But who’s paying attention? The basic security feature all of us depend on daily to protect our online payment transactions, message content and other personal data is increasingly taken for granted. And for many Americans, when the debate shifts to government surveillance it’s met with a shrug and a retort: “I have nothing to hide.”

Communities of color don’t have the luxury of such complacency. For too long, unwarranted and unconstitutional surveillance has targeted the most vulnerable Americans — people of color, immigrants, welfare recipients and political activists who challenge the status quo. In a political moment of heightened xenophobia, profiling and over-policing, encryption has become a key civil rights protection for targeted communities. As some presidential candidates clamor to portray themselves as advocates for underrepresented communities, we are still waiting for a candidate to emerge as a true champion for civil rights — and thus, for privacy and encryption.

For activists and people of color, strong encryption is essential. “We know that lawful democratic activism is being monitored illegally without a warrant,” says Malkia Cyril, executive director of the Center for Media Justice. “In response, we are using encrypted technologies so that we can exercise our democratic First and Fourth Amendment rights.”

But these vital tools are under attack. From the well-publicized Apple-FBI fight to proposed state and federal anti-encryption legislation, we are in the midst of an unprecedented assault on our right to privacy. Too often, those opposing encryption on the grounds of national security are all too quick to exclude the privacy, safety and security concerns of people of color and other marginalized communities from the body politic they seek to protect.
Not to mention that decryption mandates have been widely lampooned by tech experts for opening the door to third-party hackers across the globe, meaning that weakened encryption puts everyone at risk.

The promise of security for “average Americans” at the expense of the rights of communities of color has a long history. For centuries, terms such as “national security” and “law and order” have hinged on the policing, surveillance and exclusion of communities of color — from the era of slave branding and patrols, to the internment of Japanese Americans during World War II, to the 1960s surveillance and intimidation of civil rights and Black liberation leaders like Martin Luther King, Jr.

But one need not look to the history books to see how unwarranted government surveillance disproportionately targets communities of color. Armed with 21st-century technologies, government and law enforcement agencies’ surveillance capabilities are more vast than ever before. Today, federal agencies are using these advanced capabilities to monitor and track Black Lives Matter activists as they exercise their First Amendment rights to gather and protest the systemic violence disproportionately affecting their communities. The FBI continues to pursue explicit programs of profiling and infiltration of Muslim American communities. And Immigration and Customs Enforcement has been known to use stingray devices to glean identifying information from immigrant populations.

Encryption provides a crucial check on the government’s ability to invade — and criminalize — the private and political lives of marginalized communities. Although the times and the methods have changed, the impact on these communities remains the same, which is why national leadership on encryption is critical. Reactionary anti-encryption measures like the proposed Feinstein-Burr bill are examples of legislation based in fear rather than facts.
But there are some on the Hill approaching encryption with a nuanced understanding both of the technology and of the implications for our civil rights. For instance, Rep. Ted Lieu (D-CA) has worked across party lines to develop and introduce the ENCRYPT Act, which would prohibit state governments from making laws to mandate backdoors or otherwise weaken encryption. The White House, and those vying to next occupy it, would be wise to take a cue from Rep. Lieu.

Facing an unprecedented scale of targeted surveillance, the right to encryption is more crucial than ever for communities of color. The Obama Administration has remained tight-lipped regarding its stance on encryption. But those candidates courting our votes should be clear: we need our next president to do more — to shape encryption policies based not on fear and misinformation, but on the realities of our technological and political climate, understanding what’s at stake for all of our communities’ ability to live and thrive in safety.

Google connects BigQuery to Google Drive and Sheets
Frederic Lardinois | May 6, 2016
Google today announced that it is bringing some of its Google Cloud Platform and Google Apps tools a little bit closer together. BigQuery, Google’s analytics data warehousing service, will now be able to read files from Google Drive and access spreadsheets from Google Sheets.

There has long been something of a firewall between Google’s cloud computing services and its more consumer/enterprise-centric Google Apps productivity suite. As a Google spokesperson told me, though, the company is now moving to find better ways to integrate its services and create more unified solutions that bring together tools like Google Apps and Google Cloud Platform. “As Diane Greene has mentioned several times, customers use multiple Google products and we can provide the best experience by working across our enterprise teams to create unified solutions,” the spokesperson told us. “Specifically, this integration lowers barriers to adoption by simplifying data workloads and creating a new pathway for enterprise customers to easily use Google Cloud Platform and Google Apps.”

So here is what you can do now: Google will allow BigQuery users to export results right to Google Sheets, its Excel competitor. In addition, BigQuery will be able to directly access files from Google Drive for analysis without your having to first load them into BigQuery, and the service can also directly query Google Sheets spreadsheets as you edit them.

Google Drive can store very large files, and BigQuery can easily handle significantly bigger databases, but chances are most people who want to use it through Google Drive will be working with significantly smaller files. While BigQuery can get expensive once you start looking at very large databases, the first terabyte of data processed each month is free, so if you have a few large spreadsheets in Google Drive and want to give it a try with a smaller dataset, it’s worth a shot.

The TechCrunch Meetup + Pitch-Off is coming to Stockholm just in time for the summer
Romain Dillet | May 6, 2016
Hello startup friends! The TechCrunch team had a great idea: the summer is right around the corner, so we’re going to celebrate Nordic startups with a meetup and pitch-off in Stockholm for the first time ever on June 7. “What a great idea, how do I get involved?” you’re probably thinking right now. So here’s what’s up.

As the name suggests, the TechCrunch Meetup + Pitch-Off is a double feature. We’re all getting together to have a drink, meet interesting people, and talk about the startup ecosystem and the meaning of life. In addition to this exciting networking part, we’re hosting a pitch-off. Eight to ten startups will have exactly two minutes to pitch their product to a panel of local VC judges and TechCrunch editors. At the end of the night, we’ll crown the winner of the Stockholm Pitch-Off.

But a competition without prizes is no fun, right? The winner will get a booth in Startup Alley at TechCrunch Disrupt in London. Second and third places will respectively get two tickets and one ticket for the big show. So why don’t you apply for a chance to participate in the pitch-off and submit your beautiful startup? If you want to attend the event, you can get a ticket.

Date: June 7
Time: 6-10pm
Venue: The Brewery, Stockholm
|
The dehumanization of Facebook Messenger
|
Josh Constine
| 2,016
| 5
| 6
|
every message. Not any more. When Messenger buzzes, now I don’t know if it will be a friend or a bot. Every chime forces me to do a little Turing test in my head. Was I expecting to be pinged by a pal? Or is it 8:11pm again and TechCrunch’s bot is sending me another daily digest? This is how we ended up hating email. What started as a way for colleagues to exchange important academic research became a constant barrage of newsletters, receipts, and personal-sounding pleas for help…sent to thousands or millions of subscribers. Daily digests from bots inject noise into Messenger. I think it’s no coincidence that Facebook waited . Even just a year ago it was barely over half its current size, and far from being an institutionalized communication utility. The messaging market was heavily fractured and SMS was a more popular fallback. Now with Messenger, Facebook has a little leeway. It’s by far the biggest mobile messaging app in the western world, and it’s not worried about the international market since it owns WhatsApp, the reigning champ most everywhere else but China. Facebook is making a calculated bet that it can get businesses on board Messenger without actual people jumping ship. Head of Messenger David Marcus told me a month ago that the app “can become the main, central hub for all of your communication and interactions with all sorts of different services and business, and it will remain forever a people-centric thing.” If Messenger succeeds, it could host customer service, ecommerce follow-ups, news content, and marketing beyond friend-to-friend communication. It has the opportunity to steal use cases from phones, email, RSS feeds, and websites. Many of these experiences might be better on Messenger. Touch-tone phone menus and hold times are inconvenient and annoying. Messenger could make contacting your airline or local business easier and asynchronous, with the pace controlled by the customer. 
Instead of a half-dozen separate emails, you could get your purchase receipts and shipping notifications in a single Messenger thread. And with some artificial intelligence, skillful design, and practice, bots could create personalized conversational interfaces we can’t dream of yet. But the risk is high. Spam is one thing in email, which you might check at your leisure. It’s magnitudes worse when you’re getting push notifications on your phone. Anyone who’s ever had to text “STOP” to mute an SMS marketer knows how annoying businesses can get when they contact you directly. Telemarketing is still the bane of families trying to have a peaceful dinner. Facebook Game spam was ruining the News Feed. People embraced connecting with businesses in Facebook’s News Feed. But again, they weren’t able to alert you, and the quality of your feed was protected by the ranking algorithm, hiding things you never engage with. Spam almost killed the News Feed in 2010. Companies like Zynga incentivized users to let social games pester their friends. Eventually that backfired. The result was a bad user experience, followed by a bad developer experience. Users became reluctant to play games or post about them. Developers felt the platform whiplash, watched traffic tumble, and companies like Zynga shrank to a fraction of their former popularity and value. The worry is that Facebook is ploughing into the same situation with Messenger chatbots. Give a bot permission, and it can ping you every day. It doesn’t help that many of the first bots are clumsy. CNN’s bot couldn’t understand a request for “U.S.” news beyond sending articles with “U.S.” in the headline. Spring’s commerce bot pushed users to buy things priced higher than their stated price limit. And Poncho. Ugh. Don’t get me started on Poncho the weather cat bot that can’t parse the simplest of weather questions. 
Poncho, the extremely annoying weather cat chatbot. Meanwhile, Messenger is also testing sponsored messages, which will let businesses pay to ping you, as long as you’ve chatted with them before. Facebook’s already raking in , yet it’s playing with fire in the form of Messenger ads. The result is the potential for noise amongst the signal in Messenger. Too much robotic static, and users might slip to other chat apps, or be less likely to open Messenger and respond to friends. When I asked Marcus about this problem at F8, he told me that “When you think about the first line of defense for bad experiences, we have the ability to control the number and quality of messages that are sent to you, which is not the case with email.” So at least Facebook can pull the plug on spammers if necessary. Users can also easily block businesses. Right now, though, Facebook is actively promoting bots, like one for the Call Of Duty video game. Its gamble to win the future of chat could create diverging interests from its user base. A character from the Call Of Duty video game has its own Messenger bot. “What you’ll see is we’ve been very thoughtful about this…not all messages from businesses notify your phone but they might bump your thread [to the top of the list].” Facebook will need to aggressively monitor spam and engagement levels, and limit bots that are more annoying than valuable. If I subscribe to a regularly scheduled message from a bot like a news digest but don’t open it for days or weeks straight, Messenger should mute its notifications or otherwise limit the bot’s ability to contact me. The News Feed adapts to my implicit preferences, and so should Messenger. Facebook should also provide developers and brands with better chatbot analytics so they can self-police and modify their own strategies to reduce spam. Still, it’s a broken window problem. Like an abandoned house in a bad neighborhood, once everyone sees broken windows don’t get fixed, suddenly more windows get broken while trash and graffiti proliferate. 
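The engagement-based muting the author proposes can be sketched as a simple policy. Everything below is hypothetical: the function, the thresholds and the signals are illustrative, not anything Facebook has actually described.

```python
from datetime import datetime, timedelta

def should_mute_bot(last_opened, unread_since_open, now,
                    max_unread=7, max_idle=timedelta(days=14)):
    """Hypothetical policy: silence a bot's push notifications once the
    user has clearly stopped engaging, either because too many unread
    messages piled up in a row, or because there were no opens at all
    for a couple of weeks."""
    if unread_since_open >= max_unread:
        return True
    if now - last_opened >= max_idle:
        return True
    return False

now = datetime(2016, 5, 6, 20, 11)
# A daily digest ignored for three weeks straight gets muted...
muted = should_mute_bot(now - timedelta(days=21), 21, now)
# ...while a bot the user opened yesterday keeps its notifications.
active = should_mute_bot(now - timedelta(days=1), 2, now)
```

A real system would presumably weight many richer signals, the way News Feed ranking does, but the shape of the policy is the same: implicit disengagement should cost a bot its ability to interrupt you.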
Once I’m willing to leave Messenger threads unread because they’re from bots, and that notification counter jewel on the app’s home page icon never goes away, I start ignoring friends too. That leads them to try texting me instead, because while SMS might lack Messenger’s fresh features, they know they won’t be drowned out by spam. If Facebook doesn’t nip this in the bud, it might end up having to walk back parts of the bot platform down the line to keep us from disengaging. Marcus insists “Any interruption to your daily life needs to be high value, and it’s not trivial to do it at this scale and do it well, but we think we have the first answers…to preserve the integrity of the platform.” Facebook’s rule over the future of communication depends on it.
|
Want more screen space on your smartwatch? Put a ring on it…
|
Natasha Lomas
| 2,016
| 5
| 6
|
[youtube=https://www.youtube.com/watch?v=9hu8MNuvCHE&w=640&h=360] Can two wearables be better than one? Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) will be presenting some new research next week in which they demonstrate a technique aimed at extending the available screen space for smartwatch wearers by adding an additional wearable into the mix: a ring. Their smartwatch-ring wearable combo offers what they describe as “a novel sensing approach” — leveraging the conductivity of human skin in order to track 2D finger touch co-ordinates in real-time and turn the area of the body directly around the smartwatch into an interactive, touch-tracking surface. The idea being that wearables like smartwatches can feel cramped and fiddly to interact with, given how small their screens need to be to fit on the average human wrist. The ring plus smartwatch approach would allow a watch wearer to, for example, run a finger down the back of their hand and then swipe right to scroll through an on-screen list and select an item. Or trace a letter on the back of their hand to shortcut to a particular app or silence an incoming call. Offloading some of the necessary on-screen taps and swipes onto the surrounding skin also frees the screen from being blocked by fingers — potentially making for a superior smartwatch app experience, such as for games (the researchers use the example of playing Angry Birds in the above demo video). They say their technique works through clothes, and is unaffected by different lighting conditions. In terms of accuracy, results from the study apparently demonstrated “high reliability and accuracy with a mean distance error of 7.6mm”. 
Gestures can also be transmitted contactlessly via the technique, with a finger hovering up to an inch above the surface of the skin, thanks to a partial transmission of an electrical signal owing to the human arm acting like an antenna, says CMU’s Chris Harrison, one of the scientists involved in the research. The two components powering CMU’s prototype wearable are a battery-powered ring that continuously emits an 80MHz, 1.2Vpp AC signal into the finger on which it is worn; and a smartwatch wristband instrumented with a structured electrode pattern. “When the user’s finger touches the skin, the electrical signal propagates into the arm tissue and radiates outwards… The signal takes time to propagate, which means electrodes located at different places around the wrist will observe characteristic phase shifts. By measuring these phase differences across several electrode pairs, SkinTrack can compute the location of the signal source (i.e., the finger), enabling real-time touch tracking on the skin,” they explain. A previous project from HCII, back in 2014, also focused on trying to extend the surface area of a smartwatch — in that instance by using proximity sensors and projectors to display colored light buttons on the skin around the watch which could be interacted with without needing to touch the device itself. The latest prototype is arguably a less cumbersome approach to trying to extend the interactive surface area for smartwatch wearers, with the researchers noting that it requires “no direct instrumentation of the touch area (i.e., a skin overlay)”. Although it does of course require the wearer to also charge, wear and not lose a ring. And while we’ve seen a few attempts to fire up a market for supplementary smart rings in recent times, either as or as , it’s fair to say that mass consumer adoption has not yet taken place. Problem is chunky rings are, well, a matter of taste. And easily dislodged/misplaced. 
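The phase-difference localization the researchers describe works like time-difference-of-arrival multilateration. The toy sketch below simulates it in 2D: the 80MHz figure comes from the article, but the propagation speed, electrode layout and brute-force grid-search solver are assumptions for illustration. Real tissue propagation is far messier, and we can ignore phase wrap-around here only because every distance is well under one wavelength.

```python
import itertools
import math

FREQ = 80e6    # Hz: the ring's signal frequency (from the article)
SPEED = 1.5e7  # m/s: assumed effective propagation speed in tissue

def phase_at(electrode, source):
    """Arrival phase in radians at an electrode. Left unwrapped: with
    these numbers the wavelength is ~18.8cm, larger than any distance
    on the back of a hand."""
    return 2 * math.pi * FREQ * math.dist(electrode, source) / SPEED

def locate(electrodes, observed_diffs, size=0.10, step=0.002):
    """Grid-search the touch point that best explains the observed
    pairwise phase differences (least squares over all pairs)."""
    pairs = list(itertools.combinations(range(len(electrodes)), 2))
    steps = round(size / step) + 1
    best, best_err = None, float("inf")
    for i in range(steps):
        for j in range(steps):
            cand = (i * step, j * step)
            err = sum((phase_at(electrodes[a], cand)
                       - phase_at(electrodes[b], cand)
                       - observed_diffs[(a, b)]) ** 2
                      for a, b in pairs)
            if err < best_err:
                best, best_err = cand, err
    return best

# Four wristband electrodes (metres) and a simulated finger touch.
electrodes = [(0.0, 0.0), (0.0, 0.04), (0.01, 0.0), (0.01, 0.04)]
true_touch = (0.06, 0.02)
diffs = {(a, b): phase_at(electrodes[a], true_touch)
                 - phase_at(electrodes[b], true_touch)
         for a, b in itertools.combinations(range(len(electrodes)), 2)}
estimate = locate(electrodes, diffs)
```

With noise-free simulated phases the grid search recovers the touch point to within the grid resolution; the 7.6mm mean error reported in the study suggests real measurements are considerably noisier.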
Still, the CMU researchers reckon they are onto something by combining rings and smartwatches. “As our approach is compact, non-invasive, low-cost and low-powered, we envision the technology being integrated into future smartwatches, supporting rich touch interactions beyond the confines of the small touchscreen,” they write. Harrison reckons commercialization of the technology could be possible within two to three years — assuming a smartwatch maker decides consumers can be convinced to buy into the benefit of a dual wearable. (They would also, presumably, need to fire up developers to code additional gesture controls to their smartwatch apps to take advantage of a more expansive ‘skinterface’.) “We need to improve the stability and accuracy but it’s the first of its kind, so we are really excited by the potential,” Harrison tells TechCrunch. He adds there are no trade-offs with this technique, in size terms, when it comes to the smartwatch wristband. “It could be made very small, such that it fits into existing smartwatch form factors,” he says. The biggest downside of the approach remains the need to wear (and not lose) an additional wearable (the ring). So how small could this be? The current CMU prototype is clearly pretty cumbersome. But Harrison says the team believes it can shrink this in size by incorporating an accelerometer to maximize battery life. “Our ring’s current battery is small, but still a bit awkward. We used this bigger battery to simplify our prototyping — it lasts 15 hours on a single charge when continuously transmitting. However, we are planning for some simple tricks that could dramatically extend that,” he says. “For example, it is easy to integrate a small accelerometer that can detect when the finger has touched something, and we can turn on the signal emission only then. That would easily bring battery life to several days or even a week, or allow us to use a much smaller battery.” The CMU research will be presented next week.
|
Announcing the Disrupt NY Hackathon judges and API workshops
|
Matt Burns
| 2,016
| 5
| 6
|
The TechCrunch Disrupt NY Hackathon kicks off tomorrow at the Brooklyn Cruise Terminal, and we’re honored to welcome the hackathon’s stellar judges. Nick Chirls is a partner at Notation Capital, a pre-seed fund based in Brooklyn that focuses on partnering with technical founders at the infancy of an idea. He previously led the seed investing business at betaworks. He likes Brooklyn-based pizza. Kathryn Finney is the founder and Managing Director of digitalundivided (DID), which invests in the success of Black and Latina women founders by providing them with the network, coaching, and funding to build, scale, and exit their high-growth companies. DID runs the BIG Innovation Center, home to the BIG Accelerator, a 16-week program for high potential startups led by Black and Latina Founders. She is also a General Partner in the Harriet Fund, the first pre-seed venture fund investing in high-potential Black and Latina women-led startups. One of the first social media “stars,” Kathryn sold her site, The Budget Fashionista, in 2014. She was the editor-at-large at BlogHer, a platform representing 40 million-plus women influencers. An honors graduate of Yale University and Rutgers University and an Eisenhower Fellow, Kathryn received the Champion of Change Award in 2013 from the White House for tech inclusion. She’s also listed in Marie Claire’s 10 Women to Watch in 2016, Entrepreneurs Magazine’s “Woman to Watch in 2016”, New York Business Journal’s Women of Influence Award, SXSW Black Innovator and more. She’s been inducted into Spelman College’s “Game Changers Academy.” On February 26, 2015 she was honored by Manhattan Borough President Gale Brewer with the “Kathryn Finney Appreciation Day.” Trained as an architect with a decade-plus of experience in planning and design, Jean founded Sweeten, the first-of-its-kind residential renovation marketplace, matching homeowners with local design and construction experts based on the budget, location, and style of each project. 
Sweeten has raised $4.3 million in funding, tripled in size in 2015, with more than $300 million in renovation deals posted. During her time as the Senior Manager of Global Architecture at Coach Inc., Jean built and managed web platforms for the design and construction of Coach stores internationally and received the company’s Chairman’s Award for her work. Jean graduated from The Cooper Union with a Bachelor of Architecture and was selected as one of 9 recipients of the 2011 Loeb Fellowship at Harvard University’s Graduate School of Design. Jean also co-founded and chairs the African American Student Union (AASU) at Harvard’s Graduate School of Design, dedicated to supporting African Americans in architecture, real estate, and urban planning. Charlie O’Donnell is the sole Partner and Founder at Brooklyn Bridge Ventures. The fund makes seed and pre-seed investments and was the first venture firm located in Brooklyn, where he was born and raised. Brooklyn Bridge invested in the first rounds of Canary, Orchard Platform, Tinybop, Hungryroot, Clubhouse, Ringly, and goTenna among others. He has also funded two companies, GroupMe and Docracy, out of the TechCrunch Disrupt Hackathon. Previously, Charlie worked on the investment teams of Union Square Ventures and First Round Capital. He bikes to work in Gowanus, has done six triathlons, four marathons, and runs the kayaking program in Brooklyn Bridge Park. The longest he has consecutively been outside of the five boroughs of New York City is three weeks. Saron Yitbarek is the founder of CodeNewbie, the most supportive community of programmers and people learning to code. She’s also host of the weekly CodeNewbie Podcast and a program manager at Microsoft for Tech Jobs Academy, a technical training program for talented New Yorkers ready to launch their tech career. You can follow her @saronyitbarek. Want to learn a bit more about the tools available at the hack? Take a deep dive into a few APIs in a classroom-style setting. 
Want your app to access user files, contacts, calendar or mail data? How about building a solution that extends Outlook, Word, Excel or PowerPoint with your own UX and code? Or connecting third-party apps and services like Trello, Twitter and GitHub into the conversations of Outlook users and groups? The Office developer platform can make this happen for you. Come learn more about Office 365 APIs, Add-ins and Connectors – we’ll show you how they work and provide pointers to help you get started quickly. Learn more @ dev.office.com, dev.outlook.com. Twilio makes it easy to connect the people you care about in the languages and frameworks you already know. Come learn how to quickly build apps with Twilio SMS, Voice, and Video.
12:30pm – Registration opens (come fed or bring a brown bag lunch, beverages served)
1:30pm – Hacking Kickoff and Opening Announcements
2:00pm – API Workshop: Microsoft
2:30pm – API Workshop: Twilio
7:00pm – Dinner
Midnight – Pizza and beer, courtesy of
7:00am – Breakfast served
9:30am – Hacking concludes and hacks submitted to wiki
10:00am – General public welcome to enter to attend hackathon presentations
11:00am – Hackathon presentations begin
2:00pm* – TechCrunch and Sponsor awards presented
*Final awards may be held earlier or later depending on duration of hack presentations.
|
Crunch Report | Twitter Changes 140-Character Limit
|
Khaled "Tito" Hamze
| 2,016
| 5
| 24
|
Tito Hamze, Jason Kopek
Tito Hamze
Yashad Kulkarni
Joe Zolnoski
|
UberEATS launches in Singapore, its first Asian city
|
Catherine Shu
| 2,016
| 5
| 24
|
UberEATS, the ride-hailing app’s food delivery service, launched today in its first Asian market. Singaporeans can now download the standalone app and order food from about 100 restaurants. Deliveries are limited to Singapore’s Central Area, its main business and commerce hub, but the company said it will expand its service coverage and menu. Uber in Singapore last month. The app at the beginning of this year before expanding to . Singapore was also the first Asian market to . The city-state is tiny, with a population of just 5.4 million, but it was chosen by Uber because it is one of Asia’s top technology hubs and . The question for UberEATS, however, is whether or not curious users will want to stick with it after the novelty wears off. There are already fairly well-entrenched food delivery services in Singapore, like Rocket Internet-backed and , whose investors include Accel and DST Global. UberEATS’s advantages include the ability to promote itself to people who already have an Uber account and the fact that it relies on the same map-routing algorithms Uber uses to connect drivers and passengers as quickly as possible to avoid canceled rides (UberEATS Singapore promises delivery within 35 minutes). Its tech infrastructure also may help it achieve higher margins than other food delivery companies, which struggle to balance profit with the high cost of providing on-demand deliveries. Uber, which has , has said that it plans to become a , providing quick deliveries of food and packages instead of just rides. UberEATS is separate from ride-sharing (though drivers can do both) and one of the attractions for potential contractors is that delivering food allows them to find work when demand for rides is usually slow. TechCrunch has contacted Uber to ask if UberEATS operates the same way in Singapore.
|
India flight tests new reusable space plane
|
Emily Calandrelli
| 2,016
| 5
| 24
|
India has successfully completed its first small step toward joining the other players in the market of reusable spacecraft. On Monday, the Indian Space Research Organisation (India’s version of NASA) launched a 22-foot winged spacecraft to an altitude of 65 kilometers (about 40 miles) and navigated the vehicle back down into the Bay of Bengal, east of India. The entire mission lasted less than 13 minutes and didn’t travel high enough to reach space, but it was an important step for the Indian space agency on the path to making launches more affordable. [gallery size="tc-article-featured-image-wide" ids="1327438,1327439,1327440,1327441,1327442,1327443"] Their experimental vehicle, known as the Reusable Launch Vehicle-Technology Demonstrator (RLV-TD), reached Mach 5, survived high re-entry temperatures, and was used to test critical technologies such as autonomous navigation, guidance and control, and a reusable thermal protection system. Approved for development in 2012, the RLV-TD program has received the equivalent of $14 million in investment from ISRO, according to the BBC. RLV-TD will undergo four experimental flights, the first of which was completed on Monday: hypersonic flight experiment (HEX) followed by the landing experiment (LEX), return flight experiment (REX) and scramjet propulsion experiment (SPEX). ISRO is still many years away from a commercially available version of RLV-TD, but the fact that they’re joining the likes of Blue Origin, SpaceX, Virgin Galactic, and XCOR in an effort to develop a reusable vehicle is a sign that the entire industry is shifting away from traditional expendable designs. While the RLV-TD may look like a mini-Space Shuttle because of its winged-body design, its size alone is an indicator that the RLV-TD program has a long way to go. At just 22 feet in length, the RLV-TD pales in comparison to NASA’s massive, human-rated 122-foot Space Shuttle Orbiter, which brought astronauts into Low Earth Orbit for 30 years. 
Regardless, this week’s successful mission marks an exciting milestone for India, one of the few countries that allocates significant resources to space exploration activities. Compared to the annual budgets of other leading space-faring nations (NASA’s $18.5 billion, Europe’s $6 billion, and Russia’s $5 billion), India’s annual budget of $1.2 billion may not seem like a lot, but ISRO has been an important player in the launch industry for years. India’s workhorse launch vehicle, the Polar Satellite Launch Vehicle (PSLV), has been used to bring small satellites into orbit for over two decades. In fact, since its first successful launch in 1994, PSLV has launched satellites from 20 different countries, including the U.S. It may be a while before India has a commercially available reusable vehicle, but with the RLV-TD program underway, it seems the country is dedicated to bringing launch costs down and maintaining a competitive edge in the small satellite launch industry.
|
The reality of augmented and virtual reality venture capital
|
Tim Merel
| 2,016
| 5
| 24
|
AR/VR is the new hotness. VCs invested heavily in the 12 months to Q1 2016, with $1.2 billion of that in the first quarter of this year alone. There are four AR/VR unicorns already ( , , , ). Big numbers, very exciting. But as Partner Bubba Murarka says, “not all VCs have taken the red pill yet.” Accel principal Kobie Fuller sees AR/VR as “the next phase of computing, where it’s not a question of if, but when.” DCM General Partner Jason Krikorian believes AR/VR can enable “new experiences not possible before.” Managing Director Tim Chang describes AR/VR as spawning “the next evolution of human storytelling at a whole new level — this is what it’s like to be me.” Qualcomm Ventures Managing Director Jason Ball considers that “AR/VR is the new UX/UI for everything, but it will take time.” General Partner Bill Malloy says that “despite it not being fully understood yet, AR/VR could surpass TV and PC in the long term.” CEO Shintaro Yamakami thinks AR/VR could become “the bedrock of almost every industry,” while Managing Director Dovey Wan sees AR/VR as “increasing information density and decreasing distance.” So does this mean everyone’s on board? Chatting with VC and corporate friends on Sand Hill Road and in Hollywood, London, China, Japan and South Korea, a taxonomy emerged for AR/VR investors: Qualcomm’s Ball thinks this will change in the next two to three years, “The ‘Maybe’ crowd will become the ‘I Told You So’ gang, the ‘No’ folks will turn into the ‘Oops, I missed it again’ bunch.” Having backed two AR unicorns, Qualcomm has a distinctive point of view. VR will be big, AR will be bigger (and take longer). But as in all early-stage market growth, it’s curved, not straight. So while there is long-term potential for AR/VR, there could be just a few billion dollars of revenue this year, a progressive ramp in 2017 and a hoped-for inflection point in 2018. Comcast Ventures Managing Director Michael Yang doesn’t see uncertainty about market timing as a barrier. 
“The big guys like Google, Facebook, Microsoft, Samsung and Sony entering gives confidence about the market, and that talent will flow in. That makes people and deal flow possible.” But General Partner Marco DeMiroz thinks that “startups in this space need to raise 18 to 24 months of financial runway until the mass market arrives.” Where Managing Partner Jim Robinson wisely chose to not invest in 1990s VR, this time he’s in the market. “The technology is mature enough that it generates a genuine physiological response, and the middle layer of getting humans into VR is why we invested in . They’re creative and technical, not overhyped, and smart about managing their cash during these early years.” Investor D.A. Wallach jumped into AR six years ago with DAQRI, and into VR more recently with 8i. “Users will decide what content does and doesn’t work in the next few years, so fundamental technologies enabling compelling emotional experiences are really interesting at this stage of the market.” For those in the industry, AR/VR is all-encompassing. But VCs see things differently. Managing Director Jay Eum sees AR/VR as a vertical slice across larger horizontal sectors. “Dive into a specific sector and find a problem that can be solved efficiently or an experience that can be enhanced through AR/VR, then you’ve got something valuable. That’s why we invested in .” Comcast’s Yang thinks similarly. “ was compelling for us because they know how to operate in their industry beyond their VR chops. You need the knowledge, relationships and go-to-market to make it work.” Combined with market timing questions, Accel’s Fuller is looking for “practical business models now, that will also scale in the long term.” Managing Director Matt Turck invested in because it is “strategically positioned at the intersection of 3D production tools (Photoshop, Sketchup, TiltBrush, RealSense, Tango) and 3D consumption platforms (VR, AR, web, mobile, 3D printing). 
So regardless of which technology wins, they will be part of the ecosystem.” General Partner Freddie Martignetti comes back to “investing in outstanding teams, who’ll succeed regardless of market timing.” In other words, AR/VR startups need to be great at something industry-specific where they can make money for the next two years, as well as having the ambition for dominance as the market scales. Just being good at AR/VR without something broader to sustain you isn’t enough for VCs. Not all VCs are the same, and they don’t all think alike. You need to do your homework on them as individuals if you want a shot at investment. Accel’s Fuller sees opportunities in “improving existing user behavior, immersive experience sharing as the next evolution of media, and mass AR/VR user-generated content. These will allow AR/VR platforms to stand on their own with habitual usage and longevity.” partner James Wise is interested in “backend infrastructure to capture, process and create AR/VR, AR/VR apps broadly, and the middle layer that makes it all possible. AR/VR needs to become radically simpler, so everyone can use and create.” DCM’s Krikorian believes the market needs to enable users “to do stuff, not just watch stuff. It’s not only about lean-back entertainment, and we need to see real people having real interactions to build real relationships. For that to happen, the friction needs to be taken out of the system the same way we’ve seen in mobile.” CAA Head of Business Development Michael Yanover is interested in investing in “content platforms, not content pure plays, where CAA can add value and connectivity for consumer-driven businesses.” So where has the money been going? The leading AR/VR investment sectors in the 12 months leading to Q1 2016 were AR/VR hardware, video, solutions/services, games, advertising/marketing, consumer apps, distribution, tech and peripherals. While hardware took almost half of the total, the remaining sectors still raised . 
Early-stage investment follows a predictable series: Seed, Series A, B, C, etc. Some funds are series-agnostic, others are more specific (e.g. ). But early-stage markets are unusual, particularly from Series B onwards. AR/VR is no different. For the seed stage, the market is narrow but deep, with dedicated and focused AR/VR funds, incubators and angels who often work together. These include Colopl Next, The VR Fund, VR Capital and others, plus the Sand Hill Road VCs who are happy to start at the seed stage and follow on. For Series A, the market becomes broader and shallower. Some of the AR/VR seed investors have the financial capacity to follow on, but others don’t. Here, Digi-Capital tracks more than 200 global VCs and corporates who have invested in AR/VR, and there are many other VCs who are actively exploring Series A investment. For Series B onwards, the market changes dramatically. Series B usually requires traction in the form of revenue, so it becomes much harder for VC firms to invest in AR/VR because of the early stage of the market. This is where the VCs on Sand Hill Road are guiding portfolio companies by leveraging relationships with strategic corporate investors and international investors from China, Japan, South Korea and elsewhere. Twentieth Century Fox Home Entertainment President Mike Dunn makes a pretty clear case for why they invested in . “We know it’s going to be a big part of our industry’s future, and opportunities to invest in entrepreneurs of that quality don’t come along every day.” Another aspect for AR/VR entrepreneurs to consider is fund vintage. AITV’s Malloy puts it succinctly. “If your fund is in ‘harvest mode’ late in the lifecycle, AR/VR is hard to invest in because of time horizon. If you have a relatively new fund, then you have the flexibility to see what happens with market timing without getting squeezed.” AR/VR is cool, but that cool factor is beginning to wear a bit thin with VCs. Managing Director Jesse Devitt calls it like he sees it. 
“Stop showing me dragons. Sure, you can make a beautiful dragon in VR. And yes, it’s cool the first time you see it. By the twentieth dragon it gets pretty old. Show me something useful in AR/VR. That’s a lot cooler.” In other words, just because you can make something visually stunning in AR/VR doesn’t mean you’re going to raise money. If there isn’t something compelling from a user’s perspective (whether consumer or enterprise), and what you’re doing isn’t easier or better in AR/VR than on a smartphone, tablet or PC, why are you showing it to a VC?
|
E Ink brings rich color to ePaper, but not to e-readers
|
Devin Coldewey
| 2,016
| 5
| 24
|
E Ink, maker of the ePaper displays found in many e-readers (maddening to have three different e prefixes in one sentence, but it’s unavoidable), announced a brand new type of reflective display that can show a huge range of colors — but the tech is only going to be deployed as signage for now. Color reflective displays are nothing new, but none of the technologies touted over the years have been more than adequate. In person, color e-readers always seemed washed out, which is not good when your competition is glossy magazines and kids’ books. E Ink’s new Advanced Color ePaper (ACeP) produces 32,000 colors, and unlike some other electrophoretic displays, each pixel contains all the pigments necessary to make every color. That’s a major engineering challenge, much more so than a monochrome display. “Many materials and waveform inventions were required to independently control the position of the multiple color pigments,” read E Ink’s press release. Diagram showing how the tiny colored pigments are wrangled to produce various hues. That improves resolution, contrast and general display quality — but right now the only panels E Ink has made are 20 inches diagonally and 2500×1600 pixels. The colors are still muted, too, as you can see in . Great for signage in stores, but at 150 pixels per inch, it wouldn’t stand up to close inspection — say, as an e-reader. That said, early e-readers weren’t so hot resolution-wise or in terms of contrast, and they’ve come a great distance since. This is the first generation of the ACeP technology, and it’s the first color electrophoretic display with real promise. An E Ink representative indicated that it’s still in the R&D phase, and should be ready to manufacture within two years. ACeP and plenty of other fancy display solutions are currently being exhibited at in San Francisco.
|
Nanomaterials could double efficiency of solar cells by converting waste heat into usable energy
|
Devin Coldewey
| 2,016
| 5
| 24
|
An experimental solar cell created by MIT researchers could massively increase the amount of power generated by a given area of panels, while simultaneously reducing the amount of waste heat. Even better, it sounds super cool when scientists talk about it: “with our own unoptimized geometry, we in fact could break the Shockley-Queisser limit.” The Shockley-Queisser limit, which is definitely not made up, is the theoretical maximum efficiency of a solar cell, and it’s somewhere around 32 percent for the most common silicon-based ones. You can get around this by various tricks like stacking cells, but the better option, according to David Bierman, a doctoral student on the team (and who is quoted above), will be thermophotovoltaics — whereby sunlight is turned into heat and then re-emitted as light better suited for the cell to absorb. Sound weird? Here’s the thing. Solar cells work best with a certain wavelength of light — perhaps ultraviolet is too short, while infrared is too long, but let’s say 600nm (orange visible light) is perfect. Only some of the broad-spectrum radiation emitted by the sun is at or around 600nm, which limits the amount of energy the cell can pull out of that radiation — that’s one of the components of the Shockley-Queisser limit. What Bierman and the others on his team did was to add a step between the sun and the cell: a carefully engineered structure of carbon nanotubes. “The carbon nanotubes are virtually a perfect absorber over the entire color spectrum,” said Bierman. “All of the energy of the photons gets converted to heat.” The team’s thermophotovoltaic cell in action. Normally heat is undesirable in a solar cell, as it’s just waste energy that can interfere with normal operation. But in this case, the heat is not allowed to dissipate; instead, the carbon nanostructure converts the heat back into light — at the exact optimum wavelength of the photovoltaic cell. The result is a huge increase in efficiency, and that’s not the only benefit.
Heat, unlike light, is easy to store and move. If the day’s sunlight were entirely converted to heat and stored away, it could be converted to light on demand — like, say, at night. In other words, this technique essentially allows sunlight to be saved for later. Experimental results bore out the theory, and a prototype TPV cell performed as expected. But the tech still needs to make it out of the lab, and manufacturing the complex carbon nanomaterials in bulk is no simple task. So you won’t be using thermophotovoltaics next year or the year after — but the technique is a tremendously promising one and unlikely to be left on the shelf. The team’s research was .
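The wavelength-matching argument can be made concrete with a back-of-the-envelope calculation: treat the sun as a roughly 5778 K blackbody and ask how much of its power lands in a narrow band around the cell’s "perfect" 600 nm wavelength. This is an illustrative sketch, not the MIT team’s numbers — the 100 nm band and the integration limits are assumptions chosen for the example:

```python
import math

# Physical constants: Planck, speed of light, Boltzmann
H, C, K = 6.626e-34, 2.998e8, 1.381e-23
T_SUN = 5778.0  # approximate solar surface temperature in kelvin

def planck(lam: float) -> float:
    """Blackbody spectral radiance at wavelength lam (meters)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K * T_SUN))

def band_power(lo: float, hi: float, n: int = 2000) -> float:
    """Midpoint-rule integral of the Planck curve over [lo, hi]."""
    step = (hi - lo) / n
    return sum(planck(lo + (i + 0.5) * step) for i in range(n)) * step

total = band_power(200e-9, 4000e-9)  # nearly all of the solar spectrum
band = band_power(550e-9, 650e-9)    # 100 nm band around 600 nm
share = band / total
print(f"share of solar power near 600 nm: {share:.1%}")
```

Even a band sitting right near the spectrum’s peak captures only on the order of a tenth of the sun’s output, which is why funneling the whole spectrum through heat and re-emitting at the cell’s preferred wavelength is such an attractive trick.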
|
Running Through Walls: Dynamic Signal’s Russ Fradin on how good businesses constantly pivot
|
Brian Ascher
| 2,016
| 5
| 24
|
For the inaugural episode of Venrock’s new podcast, Running Through Walls, I spoke with Russ Fradin, CEO and founder of Dynamic Signal, about his long history as an entrepreneur. I have known Russ for years and was also an investor in Adify, where he was founder and CEO. Having founded three companies, Russ today also advises several other startups as a board member. Our conversation covered a variety of topics, including lessons learned from Dynamic Signal’s early pivot, Russ’s contrarian view on fundraising, and why he’s the only CEO I know who answers every email he receives. [soundcloud url=”https://api.soundcloud.com/tracks/265565439″ params=”color=00aabb&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false” width=”100%” height=”166″ iframe=”true” /] Takeaways from raising 25 venture rounds (3:11) Why it’s not that challenging to be accessible (7:28) How to build a good relationship with VCs (12:48) Why it’s stupid to be stealth in the enterprise world (13:39) The three things that matter the most about culture (16:43)
|
The business case for fighting climate change
|
Michael Gallant
| 2,016
| 5
| 24
|
Chilling news regarding continues to arrive on a regular basis, even as and further troubling global symptoms emerge. It’s a grand shame that global governments seem unable to mobilize to the degree necessary to keep Antarctica safely frigid and other related catastrophes at bay. It’s also a shame that those same governments don’t seem to realize quite how many potent, easily accessed and — interestingly enough — -friendly weapons in the battle they already possess, all too quietly, within their borders. From Tel Aviv to Palo Alto, New York to London, entrepreneurs have created in the last handful of years an eclectic arsenal of innovations that are already beginning to show their mettle in the battle against , as well as their potential to help the bottom lines of a wide swath of businesses. Many of these innovations currently fly inches, or miles, below the radar of public and governmental awareness. Yet the needle could well be nudged if such disparate technologies were properly shared, engaged, supported and leveraged on a global scale. Three years ago I was assigned to cover , an intriguing Utah-based company that fights and benefits businesses organically, in multiple senses of the word. EcoScraps works by collecting food waste from grocery stores and wholesale produce suppliers, processing it and cycling it back to market as fertilizer. In our interview, co-founder and CEO Daniel Blake commented that methane emissions — a prime concern addressed by President Obama and Canadian Prime Minister Trudeau in their — come heavily from rotting organic matter in landfills, which in turn can account for up to 8 percent of the greenhouse emissions that humans generate. Forty percent of all food grown in the United States gets thrown away, Blake said, so the potential to reduce that waste, and the greenhouse emissions that result, is tremendous.
Plus, Blake pointed out, grocery stores save money on their waste costs, “so everybody wins.” In light of recent predictions, his words take on new meaning. When creative, -friendly solutions to are successfully activated, every one of us indeed wins, even if the triumph is a small or incremental one. EcoScraps was far from the only such solution I discovered, hiding in plain sight, while on assignment. Another is , a pioneering Utah-based company that alchemizes plastic waste back into low-sulfur fuel. PK Clean’s operations are largely self-sustaining — the reactor needed to transmute the plastic refuse is fed by the plastic itself, eliminating the need to wastefully heat and cool the system with every batch — so energy and money are saved. The innovation reduces not only the petroleum-based content of landfills and oceans, but the costs and emissions associated with transporting and storing large amounts of such plastic waste. Businesses and the atmosphere benefit alike from the transaction. The exploding world of data-based innovation holds similar potential to benefit businesses and the climate battle; my recent work within the world of Industrial IoT technologies, and the startup ecosystem as a whole, has allowed me to see this first-hand. The Intergovernmental Panel on Climate Change estimated that, in 2010, 21 percent of global greenhouse gas emissions were created by industry, 14 percent by transportation, 6 percent by buildings and 25 percent by electricity and heat production. Data-driven technologies exist that can immediately help each of these segments become more efficient, profitable and clean, on a global scale. Innovations like these are really the tip of the proverbial (melting) iceberg. Whether high-tech or organic, digital or dirt-based, diverse and effective ways to battle while increasing profits are already here, already working. They just need to be amplified, furiously. Global leaders in tech, government, and beyond can help.
Seeking, promoting, supporting and engaging existing innovations, on a large scale, is the sort of . If a subset of political decision-makers must spend their time arguing and denying, blaming and stonewalling while Antarctica sweats, so be it. But in the meantime, technologies that can benefit both businesses and the battle are here now, working and waiting. For the sake of the rest of us, they must be activated, globally and with great force.
|
The intelligent app ecosystem (is more than just bots!)
|
S. Somasegar
| 2,016
| 5
| 24
|
Building an intelligent app is the process of using machine learning technology to create apps that use historical and real-time data to make predictions and decisions to deliver rich, adaptive, personalized experiences for users. We believe that every successful new application built today will be an intelligent application. The armies of chat bots and virtual assistants, the e-commerce sites that show the right recommendations at the right time and the latest dating apps are all built to learn and create continuously improving experiences. In addition, legacy applications are becoming more and more intelligent to compete and keep pace with this new wave of applications. Now is an exciting time to be investing in the broader intelligent app ecosystem because several important trends are coming together in application development. We have spent time thinking about the various ways intelligent apps emerge — and how they are built. This intelligent app stack illustrates the various layers of technology that are crucial to the creation of intelligent apps. As investors, we like to think about the market dynamics of major industry shifts, and the rise of intelligent apps will certainly create many new opportunities for startups and large technology companies alike. Here are some of our thoughts on the key implications for companies operating at various layers of the intelligent app stack:
At the application layer there will be two primary classes of applications: net-new apps that are enabled by application intelligence and existing apps that are improved by application intelligence. Net-new apps will need to solve the tough problem of determining how much end users will pay for “artificial intelligence” and how to ensure they capture a portion of the value delivered to users. More broadly, it will be interesting to see if our thesis that the value proposition of machine learning will primarily be a revenue generator comes true. Also, because of the importance of high-quality, relevant data for machine learning models, we think that industry-specific applications or applications for specialized uses will present the most immediate pockets of opportunity at the Finished Services or application layer. Today, we see the main categories of use-case-specific applications as autonomous systems, security and anomaly detection, sales and marketing optimization and personal assistants. We are also seeing a number of interesting vertically focused intelligent applications, especially serving the retail, healthcare, agriculture, financial services and biotech industries. The killer apps of the last generation were built by companies like Amazon for e-commerce, Google for search and advertising, Facebook for social, Uber for transportation and Netflix for entertainment. These companies have a significant head-start in machine learning and user data, but we believe there will be apps that are built from the ground up to be more intelligent that can win in these categories and new categories that are enabled by application intelligence.
As we think about how new intelligent applications will be developed, one significant approach will be the transformation of an “app” to a service or experience that can be delivered over any number of interfaces. For example, we will see companies like Uber build “services” that can be delivered via an app, via the web and/or via a voice interface. It will also be easier for companies to deliver their services across platforms as they design their apps using a microservices paradigm, where adding a new platform integration might be as simple as adding a new API layer that connects to all of the existing microservices for authentication, product catalog, inventory, recommendations and other functions. The proliferation of new platforms such as Slack, Facebook Messenger, Alexa and VR stores will also be beneficial for developers because platforms will become more open, add features that make developers’ lives easier and compete for attention with offerings such as investment funds. Finally, at the interface layer, we see the “natural interfaces” of text, speech and vision unlocking new categories such as conversational commerce and AR/VR. We are incredibly optimistic about the future of these interfaces, as these are the ways that humans interact with one another and with the world.
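The "one service, many interfaces" idea can be sketched roughly as follows. Everything here is a hypothetical stand-in, not any real company’s API: the microservice calls are plain functions, and each platform handler is just a thin adapter over the same shared services:

```python
# Stand-in for a shared authentication microservice.
def authenticate(user_id: str) -> bool:
    return user_id.startswith("user-")

# Stand-in for a shared recommendations microservice.
def recommend(user_id: str) -> list:
    return ["item-1", "item-2"]

# Each platform integration is only a thin adapter layer; the business
# logic lives once, in the microservices above.
def web_handler(user_id: str) -> dict:
    """Web/app interface: returns a structured JSON-style response."""
    if not authenticate(user_id):
        return {"status": 401}
    return {"status": 200, "items": recommend(user_id)}

def voice_handler(user_id: str) -> str:
    """Voice interface: same services, rendered as spoken text."""
    if not authenticate(user_id):
        return "Sorry, I couldn't verify your account."
    items = recommend(user_id)
    return f"I found {len(items)} suggestions for you."

print(web_handler("user-42"))
print(voice_handler("user-42"))
```

Adding a Slack or Alexa integration in this scheme means writing one more small handler, not re-implementing auth, catalog or recommendations.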
As companies adopt the microservices development paradigm, the ability to plug and play different machine learning models and services to deliver specific functionality becomes more and more interesting. The two categories of companies we see at this layer are the providers of raw machine intelligence and the providers of trained models or “Models as a Service.” In the first category, companies provide the “primitives” or core building blocks for developers to build intelligent apps, like algorithms and deployment processes. In the second category, we see intermediate services that allow companies to plug and play pre-trained models for tasks like image tagging, natural language processing or product recommendations. These two categories of companies provide a large portion of the value behind intelligent apps, but the key question for this layer will be how to ensure these building blocks can capture a portion of the value they are delivering to end users. IBM Watson’s approach to this is to provide developer access to its APIs for free but charge a 30 percent revenue share when the app is released to customers. Others are charging based on API calls, compute time or virtual machines. The key differentiators for companies in this layer will be the ability to provide a great user experience for developers and the accuracy and performance of machine learning algorithms and models. For complicated, but general problems like natural language understanding, it will likely be easier and more performant to use a pre-built model from a provider that specializes in generating the best data, models and processes. However, for specialized, business-specific problems, startups and enterprises will need to build their own models and data sets.
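One rough way to picture the plug-and-play model layer: application code depends on a common interface, so a hosted pre-trained model and a business-specific in-house model are interchangeable. Both "providers" below are toy stand-ins invented for illustration, not real services:

```python
from typing import Protocol

class SentimentModel(Protocol):
    """The common interface application code depends on."""
    def predict(self, text: str) -> str: ...

class HostedPretrainedModel:
    """Toy stand-in for a third-party Models-as-a-Service API."""
    def predict(self, text: str) -> str:
        return "positive" if "great" in text.lower() else "negative"

class CustomKeywordModel:
    """Toy stand-in for a model trained on your own domain data."""
    def __init__(self, positive_words: set):
        self.positive_words = positive_words

    def predict(self, text: str) -> str:
        words = set(text.lower().split())
        return "positive" if words & self.positive_words else "negative"

def review_triage(model: SentimentModel, reviews: list) -> list:
    # Application code never names a provider; swapping one for the
    # other requires no change here.
    return [model.predict(r) for r in reviews]

reviews = ["Great service!", "Slow and buggy."]
print(review_triage(HostedPretrainedModel(), reviews))
print(review_triage(CustomKeywordModel({"great", "fast"}), reviews))
```

This is the design choice the paragraph describes: general problems go to a specialist provider behind the interface, while specialized problems get a custom model slotted into the same place.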
Before data is ready to be fed into a machine intelligence workflow or model, it needs to be collected, aggregated, cleaned and prepped. Sources of data for consumer and enterprise apps include photos and video, websites and text, customer behavior data, IT operations data, IoT sensor data and data from the web. After applications are instrumented to collect the right pieces of raw data, the data needs to be transformed into a machine-ready format. For example, companies will need to take unstructured data like text documents and photos and transform it into structured data (think of rows and columns) that is ready for a machine to review. The important part of this step is realizing that the quality of a model is highly dependent on the quality of its input data. Creating bots or “artificial intelligences” without high-quality training data can lead to unintended consequences (see Microsoft’s Tay), and the creation of this training data often relies on semi-manual processes like crowdsourcing or finding historical data sets. The other area of this space to keep an eye on is the companies that have traditionally served as “dumb” pipes for data sources like clickstream data or application performance logs. Not only will they try to build predictive and adaptive features, they will also see competition from intelligent services that draw insights from the same data sources. This will be an area of innovation for finance, CRM, IT Ops, marketing, HR and other key business functions that have traditionally collected data without receiving immediate insights. For example, HR software will become better at providing feedback for interviewers and highlighting the best candidates for a position based on historical data from previous hires.
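The unstructured-to-structured step described above can be shown in miniature: raw text documents become fixed-width rows a model can consume. This is a toy bag-of-words featurization on made-up example documents; real pipelines use far richer transformations:

```python
# Two raw, unstructured "documents" (invented examples).
docs = [
    "shipment delayed again",
    "fast shipment great support",
]

# Build a fixed vocabulary (the "columns" of the structured table).
vocab = sorted({word for doc in docs for word in doc.split()})

# Each document becomes a row of word counts -- machine-ready data.
rows = [[doc.split().count(word) for word in vocab] for doc in docs]

print(vocab)
for row in rows:
    print(row)
```

The point of the exercise is the shape change, not the features: what a model sees is the rows-and-columns table, so the quality of everything downstream depends on how faithfully this step captures the raw input.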
The amount of data in the world is doubling every 18 months, and thanks to this explosion in big data, enterprises have invested heavily in storage and data analysis technologies. Projects like Hadoop and Spark have been some of the key enablers for the larger application intelligence ecosystem, and they will continue to play a key role in the intelligent app stack. Open source will remain an important feature for choosing an analytics infrastructure because customers want to see what is “under the hood” and avoid vendor lock-in when choosing where and how to store their data. Within the IaaS bucket, each of the major cloud providers will compete to run the workloads that power intelligent apps. Already we are seeing companies open source key areas of IP such as Google’s TensorFlow ML platform, in a bid to attract companies and developers to their platform. Google, in particular, will be an interesting company to watch as it gives users access to its machine learning models, trained on some of the world’s largest data sets, to grow their core IaaS business. Finally, hardware companies that specialize in storing and managing the massive amount of photos, videos, logs, transactions and IoT data will be critical to help businesses keep up with the new data generated by intelligent applications. We think there will be value captured at all layers of this stack, and there is the opportunity to build significant winner-take-all businesses as the machine learning flywheel takes off. In the world of intelligent applications, data will be king, and the services that can generate the highest-quality data will have an unfair advantage from their data flywheel — more data leading to better models, leading to a better user experience, leading to more users, leading to more data. Ten years from now, the vast majority of applications will be intelligent, and machine learning will be as important as the cloud has been for the last 10 years.
Companies that dive in now and embrace intelligent applications will have a significant competitive advantage in building the most compelling experiences and the most valuable businesses.
|
Uber and Toyota confirm strategic investment and auto leasing deal
|
Sarah Buhr
| 2,016
| 5
| 24
|
Rideshare wars just got even more interesting. Uber has confirmed a strategic investment and auto leasing deal from Toyota. “Toyota is a global leader in the automotive industry and Toyota vehicles are among the most popular cars on the Uber platform worldwide,” Uber said in a statement to TechCrunch. “We are proud to partner with Toyota in a variety of ways, including the expansion of our vehicle financing program.” Uber would not disclose terms of the deal or the financing program and there aren’t many details yet, but according to a statement from the car manufacturer, Toyota is interested in exploring the future of transportation with Uber and the companies have “entered into a memorandum of understanding (MOU) to explore collaboration, starting with trials, in the world of ridesharing in countries where ridesharing is expanding, taking various factors into account such as regulations, business conditions, and customer needs.” The Toyota leasing deal helps Uber expand its financing program for Uber drivers, Uber Xchange, but will also bolster Uber’s moves into a self-driving vehicle future. Though self-driving didn’t get a mention, a slew of major car manufacturers and tech companies have started investing in rideshare services and Toyota is a big player in self-driving innovation. The car manufacturer announced last November it was putting forth $1 billion for the creation of the Toyota Research Institute, which is a new company established to develop AI and robotics for self-driving capabilities. Note that GM invested in Uber rival Lyft earlier this year, with plans to work with the ridesharing company on self-driving innovation, as well. Also, Volkswagen announced an investment in the New York-based ridesharing startup Gett this morning and Apple, which is rumored to be building its own self-driving vehicles, added a $1 billion investment to the recently renamed Chinese rideshare service, Didi Chuxing, earlier in May.
|
ChargePoint raises $50 million to charge more cars
|
Kristen Hall-Geisler
| 2,016
| 5
| 24
|
ChargePoint announced this month that it had raised $50 million in its latest round of funding, thanks in large part to Linse Capital. That makes a total of $164 million from investors like BMW iVentures and Siemens, among others. ChargePoint already has the largest EV charging network in North America, with 28,000 stations installed since 2007. According to the , global sales of plug-in vehicles rose by about 80 percent in 2015, led by a surge in sales in China. (In the United States, EV sales actually dropped by about 3 percent.) In 2011, about 50,000 EVs were sold worldwide; in 2015, the number topped 565,000. Tenfold in four years isn’t too bad. Linse Capital noted in the announcement that the additional funds will allow ChargePoint to expand its network outside North America. “This funding will accelerate ChargePoint’s ability to capitalize on that position in the North American market and replicate it in other markets around the world,” said Michael Linse, managing director of Linse Capital. It’s worth noting, too, that the best-selling electric car in the U.S. (and the world) for 2015 was the Tesla Model S, which has a range of at least 240 miles as rated by the EPA. Number two was the Nissan Leaf, which has an EPA-rated 107-mile range. It’s also half the price of a Tesla Model S. It’ll be close to the price of a Tesla Model 3, if you can wait a couple of years for that car’s expected 215-mile range. Many of the electric cars available now have ranges in the Leaf’s neighborhood. ChargePoint’s mission is to ease the range anxiety these owners may feel, even though the average American drives 29.2 miles per day, according to a study. Its 28,000 charging points (thus the name) are installed in business parking lots, apartment garages and in homes with Level 2 charging; 310 of the stations have Level 3 Express DC fast charging. In the chicken-and-egg game of electric vehicle adoptions, ChargePoint is supplying many of the chargers that will make EVs more practical for more people in the future.
|
Microsoft tries its hand at a news bot with Rowe
|
Sarah Perez
| 2,016
| 5
| 24
|
Microsoft loves its bots. Now, the company has rolled out its own news-finding bot called “Rowe,” which lives inside the latest version of Microsoft’s Bing-powered personalized news reading app, News Pro. Rowe is an experiment with helping you keep up with the news that matches your current interests. You can ask the bot to show you news by typing in a topic, view today’s headlines, ask for other personalized suggestions or read the stories the bot has surfaced for you already. Rowe, however, seems more like an assistive search engine, rather than a true AI-like bot, as its “personalized” suggestions are not as good as its ability to return articles on a given subject. And even then, its results are a bit limited. For example, if you type in a popular, but broad, subject like “U.S. Elections,” the bot returns just three top stories, one of which currently appears to be more of an op-ed/thought piece rather than hard news. That’s not a great experience. Below its recommended stories, buttons appear that let you pull up more news articles on the subject, like those focused on “election predictions” or “polls,” in this case. Meanwhile, if you type a news topic that’s a bit more specific, the selection of stories may improve. For instance, asking Rowe about “Twitter 140” — a reference to recent news about Twitter’s 140-character limit — the bot returns popular stories from well-known sites like Yahoo, CNET and PCMag. Rowe has one other weird and not entirely practical trick, too — if you upload a picture of yourself, it will surface news articles where the person in the story looks like you. Why? Uh, because it can? (Oh, and prepare to be either very flattered or very insulted by its results.) The bot — initially spotted by the blog following Microsoft’s — is the latest development from Microsoft’s News Pro app. A sort of standard news-gathering app, News Pro is Microsoft’s own take on something like Apple News, or the third-party app Smart News, perhaps.
The app offers users a customized experience by connecting to your Facebook and LinkedIn accounts in order to better understand your interests. In practice, News Pro doesn’t do a great job at personalization yet, I’ve found. While it accurately suggests stories from areas like “computer hardware” and the “Internet” for me, it misses a number of possible suggestions that could be easily pulled from my ever-growing set of Facebook likes. [gallery ids="1327239,1327238,1327237,1327240"] Similarly, the bot feels rough around the edges, too. But this is not an “official” Microsoft product, we should point out — it’s an app from the company’s internal R&D incubator, Microsoft Garage. Microsoft is hardly the only company experimenting with new ways to deliver the news via bots, however. Other efforts in the space include those running on , like bots from CNN, The WSJ, Business Insider or even Telegram’s ; or independent efforts like , and many more. The updated app also introduces other features, including groups for discussing the news with others, for example. Rowe is available in the updated News Pro app.
|
Uber is testing a loyalty program that rewards riders with free trips
|
Fitz Tepper
| 2,016
| 5
| 24
|
Uber sent an email to LA riders yesterday announcing that it is running a temporary promotion that looks very much like a loyalty program. Each UberBLACK ride taken will give riders 200 points, and when they amass 3,000 total points they will be rewarded with a free $25 Uber ride. Essentially, the program is buy 15, get 1 free. While the company has tested “VIP” loyalty programs in New York and Dallas, they were structured to reward frequent riders with better cars and special events — neither were loyalty programs in the traditional “buy X get 1 free” sense. This is because Uber has never really had a reason to implement such a program — retention and ridership are at an all-time high, and the company really has had no reason to mess with its success. Let’s take a quick look at the economics, assuming your average UberBLACK ride is $40. Fifteen rides means you’re spending a total of $600, and you’re getting $25 in return — giving you about a 4 percent “rebate” on Uber rides. Not bad, especially considering the alternative is nothing at all. We reached out to Uber, who confirmed that the program is a test for the company, and is the first ever loyalty program on the West Coast. So why did Uber decide to finally implement a rewards program? Since it’s only for UberBLACK, it’s safe to assume that the company is trying to increase rides in that category, at least in Los Angeles. As riders have flocked to UberX for its low prices, UberBLACK drivers have been the ones most affected by the lost business. This is especially true in cities like Philadelphia, where UberBLACK drivers have actually against UberX. The promotion is only available through July, and riders have to opt-in by entering the code LOYALUBERBLACK. If it’s successful, it’s not hard to imagine Uber rolling out this program to other cities where UberBLACK ridership is suffering.
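The back-of-the-envelope economics can be written out directly. The 200-point and 3,000-point figures and the $25 reward are from the promotion; the $40 average fare is the article’s own illustrative assumption:

```python
# Economics of the UberBLACK promotion described above.
POINTS_PER_RIDE = 200     # points earned per UberBLACK ride
POINTS_FOR_REWARD = 3000  # points needed for the free-ride reward
REWARD_VALUE = 25.00      # dollar value of the reward
AVG_FARE = 40.00          # assumed average UberBLACK fare (article's example)

rides_needed = POINTS_FOR_REWARD // POINTS_PER_REWARD if False else POINTS_FOR_REWARD // POINTS_PER_RIDE
total_spent = rides_needed * AVG_FARE
rebate = REWARD_VALUE / total_spent

print(f"{rides_needed} rides, ${total_spent:.0f} spent, {rebate:.1%} rebate")
# -> 15 rides, $600 spent, 4.2% rebate
```

The exact rebate works out to 25/600, about 4.2 percent, which rounds to the "about 4 percent" quoted above; a pricier average fare would dilute it further.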
|
Reddit’s head of community leaves after nine months
|
Megan Rose Dickey
| 2,016
| 5
| 24
|
About nine months after taking on the role of head of community management at Reddit, Kristine Fasnacht has left the company. In a post titled, “Admin released from captivity, reintroduced to Reddit community,” Reddit CEO Steve Huffman wrote: I am sad to share that is leaving Reddit. Over the past five years, she has done an incredible amount of work for us and the Reddit community. She has been the face of our Community team at Reddit; helped us write many of our policies and was indispensable working through countless tricky situations; and she lead our efforts in support of Extra Life, raising hundreds of thousands of dollars for Children’s Miracle Network Hospitals. She has been a friend to me and many others here. While we are sad to see her go, we wish her the best going forward. Actually replacing Kristine is impossible, but next Monday, four (maybe five!) new members of our Community and Trust and Safety teams will be starting, which will give the team even more horsepower. Reddit has been in turmoil for the last year or so. Fasnacht came on as head of community in August, following the departure of her predecessor, who left last July. Before taking on that role, she was temporarily managing Reddit’s Ask Me Anything Q&A sessions following the firing of Reddit’s director of talent, Victoria Taylor. In a meeting with Huffman yesterday, he mentioned that half of the 11-year-old company’s staff, which now numbers about 130 employees, has started in the last six months. About a year ago, Reddit employed somewhere between 70 and 80 people, Huffman said. Within the last year, Reddit has created a Trust & Safety team, an anti-evil team for creating mandates around bullying, spam and account takeovers, and the community team has doubled. To be clear, my meeting with Huffman was not related to this piece of news. In response to an inquiry about Fasnacht’s departure, Reddit VP of Marketing Celestine Maddy confirmed that Fasnacht left the company about 20 days ago.
“Upon leaving, some members of the community team post to r/modsupport and answer questions in the comments about their departure,” Maddy told TechCrunch via email. “Kristine’s post is . Her user name is r/krispykrackers so you can identify her communication in the thread.” I have reached out to Fasnacht and will update this story if I hear back.
|
Ex-Facebook designers climb charts with adorable game Pinchworm
|
Josh Constine
| 2,016
| 5
| 24
|
Pinchworm’s developers Drew Hamlin and Joey Flynn (from left). Pinchworm taps into that same frustration, and how we’re compelled to exceed our past failures. There are only two types of obstacles, so it seems absurd when your mind can’t process whether to jump or dive fast enough. Every time you screw up in Pinchworm, you’re sure you could do better next time, and you keep playing. And since Pinchworm is the only game where you pinch and pull to play, it creates both word of mouth buzz as people ask “what the hell are you playing?” and a sense of camaraderie. Once you try it, you can tell someone else is playing Pinchworm even if you can’t see their screen. Unlike most mobile arcade games where there’s little to no story, Flynn and Hamlin worked hard to create emotional resonance with your character. But you only get to see what happens next if you score high enough to unlock the next chapter. Uhhh, sure. But the sassy taunts of the bird carrying away your significant worm lover do motivate you to hit the elusive 100 point mark and rescue them. While you polish your skills to get there, you’ll also be delighted by a variety of little design flourishes. You can earn or buy different themed worms that come with their own backgrounds and obstacles, from a glow-in-the-dark worm that inches in night-vision, to a hipster worm named “Indie” that has to dodge mustaches and vinyl records. Drag your finger around the title screen to make your character’s eyes move. And Pinchworm even pulls your location to show the right time of day and moon phase in the background. Flynn and Hamlin plan to keep supporting Pinchworm as they ponder other apps to make.
|
Gulülu turns drinking water into a game for kids
|
Haje Jan Kamps
| 2,016
| 5
| 24
|
Through the power of smart sensors and the magic of the cloud, parched children are on their way to becoming a thing of the past — at least, if the team has anything to say in the matter. The company’s bottle, launching on Kickstarter today, is the newest, most high-tech weapon in the battle to get kids to imbibe enough liquids throughout the day. The Gulülu pets that live in the water bottles. Cute, no? The idea is to turn the act of drinking water into an integrated game. The water bottle has a small screen built in, and additional sensors mean that shaking two bottles in close proximity to each other makes the pets “friends,” enabling them to interact. The bottles have rechargeable batteries and a wireless charging dock to keep the battery topped up. The company claims the Gulülu will stay juiced for four days before another recharge is required. The bottles use multiple embedded sensors to measure not just how much water is disappearing from the bottle throughout the day, but also to attempt to determine whether the kid is actually rehydrating, or trying to be cheeky, pouring the water out in the flower bed outside the school. The idea is that only actually drinking is rewarded, and that the digital pets thrive best if the bottle’s owners use them as intended. Part of the theory behind Gulülu is to integrate water drinking into everyday activities. Gulülu has prototypes ready to go, and is turning to Kickstarter to get the product through the final production phase, with early-bird pricing starting at $89, followed by regular-priced Gulülu bottles at $99 each. The company estimates a shipping date of September, and the recommended retail price will be $129 once the products start making an appearance on shelves.
It’s an interesting product, and it’s easy to imagine a world where every child in your vicinity is running around shaking their Gulülu bottles at each other and the product turns into a huge hit. For that to happen, however, two things need to click into alignment. Parents need to both feel that hydration is a big enough problem to solve and decide that a $129 water bottle is the right way of going about tackling the issue. My gut tells me that Gulülu is an expensive solution to a non-existent problem. Time will tell, and it’ll be interesting to keep an eye on the Kickstarter campaign either way.
|
Apple prepping Siri SDK and Echo-like home assistant gadget
|
Devin Coldewey
| 2,016
| 5
| 24
|
Apple is preparing an SDK that would allow developers much greater access to Siri — and the improved assistant will power a stationary hub device like Amazon’s Echo. A recent report tallies with things we’ve heard over the last few weeks; expect an announcement, if not the device itself, in June. Siri, sad to say, has not aged well. Features that were impressive four or five years ago have lost their luster, and rival services have leapfrogged the famous virtual assistant in countless ways. Siri’s creators, of course, moved on years ago to work on something better: Viv. Apple’s insistence on controlling the Siri ecosystem ensured a smooth launch and few surprises, but it has severely hampered her usefulness. Not everyone wants to use the services Apple has struck deals with, and of course if there’s a hot new app, chances are slim to none Siri will hook into it. A Siri SDK open to all developers (though likely with serious limitations, in true Apple style) would go far toward turning an adequate virtual assistant into a powerful and convenient one. And, as Apple works at shoehorning itself into households with HomeKit, a versatile voice-activated hub seems a natural addition. It seems highly likely that Apple will take a high-end approach to the space, rather than provide an inexpensive, minimal product. The new device will likely go for significantly more than the cost of an Echo — $300 would be my guess — and will improve on the aspects that are important to Apple users: design and service integration. Expect to be sold on the quality of the speaker and a design that will doubtless be introduced in a video replete with lingering slow pans. Then expect to hear how closely Apple has worked with [PowerPoint slide of Internet of Things partners], and of course how it works seamlessly with iTunes, email, iCloud storage and so on. 
They might even slip in a little sass regarding the motives of Amazon and Google: The one, you may hear, wants to sell you things, and the other wants to know your every move. Apple, of course, just wants to make your home a nice place to be.
|
Placemeter’s urban intelligence platform gets smarter
|
Frederic Lardinois
| 2,016
| 5
| 24
|
Placemeter helps businesses and municipalities measure what’s happening in front of their stores, on their streets and in their parks. In its first iteration, the service, which relies on real-time video feeds, was able to quantify the overall number of objects that it saw and distinguish between pedestrians and vehicles. Now, the service is getting significantly smarter. By default, Placemeter can now distinguish between five different objects: people, bicycles, motorcycles, cars and large vehicles (think trucks, delivery vans, etc.). Traditionally, the way cities measured traffic was by simply placing meters on bike paths or streets; everything that passed over them was counted as either a bike or a vehicle. Now, they will be able to get a far more granular and accurate count in real time. The Placemeter team piloted this project with the City of Paris. Specifically, the City of Paris wanted data for redesigning a public plaza. “Initially we were just counting unclassified objects, but we realized quickly that we wanted to understand more granularly what exactly was passing through the space so we could understand where a bicycle lane should go versus where pedestrian benches should go,” Placemeter CEO and founder Alex Winter told me. “Nothing outside of computer vision can give you this level of granularity.” Working with the City of Paris, Placemeter also adapted its model to count swimmers in a public pool. As Winter explained, the team decided to use a more traditional computer vision approach instead of using deep learning to classify these objects. “We need to be able to quickly learn categories without the constraint of training on millions of data points,” he told me. While the system currently recognizes five different object types, the team hopes to expand this to 15 or 20 over time. The team also noted that it has to tweak its algorithms for different locations, too. 
It had to specifically train its system to recognize certain scooter and car shapes in Japan, for example. While Placemeter offers its own sensor, the system can work with virtually any camera input, even if it’s relatively low-res. The advantage of using Placemeter’s own sensor is that the computation happens right in the camera and the actual image isn’t stored anywhere. In Paris, though, the team worked with Cisco cameras and the analysis happened in a local cloud in France. Placemeter offers three pricing plans. Users can pay $30 per month, per camera, for basic analytics and a single object class. For $60 per month they get two object classes (pedestrians and vehicles), and for full access to all five classes, they have to pay $90 per month. In the long run, the team plans to bring about a dozen classes to its $60 plan and 20 to the $90 one.
|
Southeast Asia financial comparison startup Jirnexu lands $3M to expand to digital banking services
|
Jon Russell
| 2,016
| 5
| 23
|
Malaysia-based financial comparison startup Saving Plus has closed a $3 million Series A round to move into new digital banking services. The company has also renamed itself Jirnexu. The company was founded in 2012 by its CEO, former Citi banker Yuen Tuck Siew, to provide better financial choices for consumers via banking comparison sites. Arriving home to Kuala Lumpur following more than a decade in the UK, Siew struggled to rebuild his personal finances in the way he’d done when he moved to the UK, where established comparison sites provide clarity and options for consumers. Jirnexu operates comparison sites in Malaysia and in Indonesia, but now it is branching out into services for banks with XpressApply, a platform that lets financial institutions tap the internet and digital media to reach consumers. More specifically, the service, a white-label version of which will launch in the second half of this year, is used to handle credit card, loan and other banking applications online. This new funding is led by DMP with participation from Celebes Capital, NTT DOCOMO Ventures, Nullabor, Tuas Capital Partners and Anfield Equities, and has been earmarked to develop XpressApply and other digital-first products for banking. Jirnexu is working with banks and financial organizations to offer XpressApply as an online solution that can reduce the application process to around 10 minutes. Beyond removing paperwork, it helps consumers get quicker and more accurate decisions and, for lenders, it allows for closer engagement with potential customers. Siew claimed that, by cutting out call centers, customer conversion rates can be as much as 200 percent higher, while complaint volumes from call centers are seven times higher than those from XpressApply. “We want to change the way banks and insurance companies behave [and] make sure that the consumer can get what they want online,” he added. 
Despite a push into digital services for banks, Jirnexu remains committed to the consumer-facing side of the business. The startup has raised $4.5 million to date, and Siew is working towards a Series B round later this year that would be used to expand its online comparison sites into more countries in Southeast Asia, and develop other bank-focused products. Already, he said, the two comparison sites reach “tens of millions of visitors”, generate 450,000 leads for banks and “tens of thousands of approved customers and accounts”. But, with the launch of XpressApply, Jirnexu is trying to bring a more Western and on-demand approach to personal finance in Southeast Asia. “Southeast Asia is riding a boom in internet and mobile offerings,” Siew said in a statement. “Whether you book a ride with Grab or shop online with Lazada, consumers are expecting the same anytime-anywhere access to all services including their personal finances. “The first financial services company who can meet the consumers’ rising expectations will be the winner. That’s why my vision is clear and simple — I want to build the Amazon of personal finance in Southeast Asia, a full stack technology driven platform that enables service and value leadership to the consumer.” As for that rebrand, Jirnexu means “prosper” in Maltese. Siew said the company came up with it with a little help from an unnamed investor, and that it was important to find a name that was both symbolic and not likely to be misconstrued in other languages. That can be challenging in Southeast Asia, where there is plenty of cultural and linguistic fragmentation across the region’s biggest six countries.
|
Twilio ramps up mobile play with programmable SIMs for IoT and handsets with T-Mobile
|
Ingrid Lunden
| 2,016
| 5
| 24
|
The service is getting formally announced today at the company’s developer conference, SIGNAL. But if you follow Twilio’s CEO on Twitter, you might have seen his hint of what was coming a couple of weeks ago: Pretty excited for SIGNAL… — Jeff Lawson (@jeffiel) In an interview, Lawson told me that the service will initially be offered only in partnership with T-Mobile, and only in the U.S., but if you consider that Twilio itself has a massive international footprint (as does T-Mobile), there is a lot of scope for expansion. T-Mobile is coming into the new service by way of its Un-carrier effort — its strategy to court developers and newer users with lower prices and more open infrastructure. Twilio’s business growth has been based on a double attraction. First, it gives developers easier access to certain kinds of services: they suddenly no longer need extensive telecoms engineering experience to program features like phone and texting services for their employees, or to use in websites or apps. (How easy? A former Twilio employee who worked in the marketing and PR department once developed a pretty awesome music service using the phone API.) Second, using these services is typically a lot less expensive than getting the same service from the telco directly. Twilio did this by buying services wholesale to then route through its API, relying on economies of scale to make up for the thin margins. This is more or less the same basis of the new SIM service, but Lawson believes there will be another reason why it will be attractive to developers, and that is user experience. For now, a lot of IoT services are based around getting a device hooked up to Wi-Fi, but generally, cellular networks are far more reliable and easier to hook up. This is something that companies like SigFox are also banking on, building out networks specifically dedicated to IoT. 
While IoT will definitely be one component of where Twilio sees this developing, what’s interesting is how it’s also considering the B2B2C proposition here. Up to now, there have been precious few successes in the MVNO space, and given that the only ones that have flown have ended up getting acquired by carriers, some might even argue that no MVNOs have really been a hit. It will be worth watching whether Twilio can help shift the economics of that with its new SIM service. For now, Lawson tells me that the most immediate users of its SIMs for phone-based services are likely to be enterprises, which might develop phones for their workers that are customized with a very limited set of calling options, or are simply easier to track in a bigger bring-your-own-device program, where workers provide their own handset or choose whichever handset they want to use, but get the SIM put into it. There are still details to be filled in with Twilio’s new service. Pricing — specifically how people will respond to it — is one of the biggest, but so is security and how its SIM will work with third-party programs to help track and lock down phones in an enterprise network. But adding one more service that both builds on the suite of API-based communications services that Twilio already offers — and expands them with what effectively becomes a new platform for the company — is a step the company needed and is right to take. (No comment from Lawson in our interview about IPOs or other liquidity events.)
|
King Bach launches Bachify, a photo editing app for the social media obsessed
|
Fitz Tepper
| 2,016
| 5
| 24
|
Andrew Bachelor (better known as King Bach on Vine, where he has more than 15 million followers) has just launched Bachify, his own iOS-based photo-editing app. The genesis of the app was an issue anyone who is a heavy social media user faces — there isn’t one app that provides a comprehensive photo-editing experience. Bach explained that he “always had to go to multiple editing apps to create the perfect picture.” So Bachify includes almost every editing tool under the sun, including stickers, Instagram-like filters and standard adjustments like enhance, blur and red-eye. It also offers more nuanced editing tools like a make-up brush, “pimple remover” and smoothing. The stickers include hundreds of decorations, like hairstyles, accessories and dozens of HD stickers of King Bach you can add to your image. The app also lets you create your own stickers, so you could even add yourself to a photo you weren’t in. Most of these are available with the basic version of the app, but more will be offered via in-app purchases. While this isn’t King Bach’s first app (he previously made a game called Bachy Birds), it’s part of his plan to use his talents and social media expertise to expand into the business world. But why a photo-editing app, when King Bach attributes most of his professional success to Vine, a video platform? He explained that photos are much more universal than videos. “While video editing tools are only used by a small subset of people, everyone would love to have the ability to edit photos.” Bach isn’t worried about entering a market as competitive as the photo space, where established players like Instagram and Snapseed reign supreme. He explained that before setting out to build Bachify he had tried all the alternatives, and just wasn’t able to find a high-quality all-in-one solution that met his needs. 
The app’s success will most likely depend on whether King Bach can recruit users from outside his existing sphere of influence — essentially users who use the app because it provides a solid editing experience, not because it was made by their favorite social media star. Bachify is available now and can be downloaded from the iOS App Store.
|
Crunch Report | Skip to Good Parts in Facebook Live
|
Khaled "Tito" Hamze
| 2,016
| 5
| 23
|
Tito Hamze, Jason Kopek
Tito Hamze
Yashad Kulkarni
Joe Zolnoski
|
Can startups disrupt the $20 billion cyber insurance market?
|
Mahendra Ramsinghani
| 2,016
| 5
| 23
|
Over the past few years, cyber insurance markets have been growing at between 25-50 percent CAGR each year. By one industry estimate, annual policy premiums are approaching $2.75 billion. The ecosystem of underwriters, intermediaries/brokers, analysts/management consultants and compilers of information is evolving rapidly, trying to make the most of this rising tide. As large underwriters try to grapple with cyber risk, newcomers aim to disrupt this ecosystem. And the battle has just begun. Enterprise risk management is a pressing issue at the C-suite and board levels. Risks of business disruption rank the highest on the charts. While natural catastrophes and political issues cause disruption, these are relatively better understood than cyber risk. According to one survey, cyber incidents are considered the No. 1 emerging risk for the long-term future. (Source: Allianz 2016 survey of top business perils; 800 risk managers from 40 countries). With the growth of intruders and malicious actors, corporate risk managers are being pushed to adopt a better risk posture. Purchasing cyber insurance often is the first step. When you have many eager buyers, a willing seller often emerges quickly to oblige such enthusiasm. According to AIG, underwriters collected $1.6 billion in premium income in 2015. Allianz projects premium income to grow to $20 billion by 2025. The average take-up rate for cyber insurance is 24 percent across U.S. businesses. Only ~40 percent of Fortune 500 companies have procured coverage against cyber incidents, and those with coverage typically purchase limits that do not cover the full extent of their exposure. There are more than 18,000 mid-market companies with revenues from $250 million upwards in professional services, retail and manufacturing verticals that will need coverage. It is evident that the forces of “pull” are driving rapid growth in cyber insurance markets, even when entrants are relatively unsophisticated. 
Some water-cooler conversations around cyber insurance include statements like “nobody knows anything — but wow — it’s already a billion-dollar market” and “this is the best thing for our industry since fire.” Enterprise customers are eager to buy coverage, but struggle with understanding their risk factors. In one survey, 48 percent confessed that a lack of understanding of the complexity of cyber risks prevented them from being better prepared against such risks. And as much as 46 percent did not have a concrete assessment of the costs of the risks involved. The key questions enterprises struggle with are: (a) What is at risk for our enterprise — is it business continuity? Will we be DDoS’ed? Or experience intellectual property theft? Do we hold consumer/financial/patient data?; (b) What is the probability of an event occurring?; and (c) What are the estimated damages? Like most nascent arenas, development of a common taxonomy and risk assessment framework is much needed. The U.S. government’s NIST Cybersecurity Framework is a starting point, but more needs to be done. Within such a framework, various tools, technologies and practices can be optimized to measure and assess risk. The first wave of innovation is to offer risk metrics and aspire to become the “FICO score” provider for cyber risk. Startups have leaped in to address risk metrics, and Governance Risk and Compliance (GRC) vendors have pivoted into the space to offer vendor risk-management tools. The gold rush has just begun. Despite strong demand, the challenge for insurers is how to craft their offerings and how much to offer. Forecasting loss ratios and product profitability in such a nascent market is a challenge, not to mention allocating appropriate risk capital for long-tail catastrophe events. Actuarial tables based on 100 years of historic data can be used to build premium models, and predict earthquake and flood risk. But do the likes of AIG and Allianz have such data for cyber risk? Not quite. Symantec has leaped into this space and aims to be a formidable player. 
“We track 800K security events every second,” says Roxane Divol, SVP/GM, Trust Services Website Security, and executive sponsor for cyber insurance at Symantec. The company has hired actuaries who are blending historical and real-time data and creating models to address this specific challenge. The timing, scale and nature of cyber risk is uncertain and dynamic. How will this impact premiums as risks fall or evolve over time? And in many cases, we don’t even know what’s going on — take a look at the patterns of incidents for 2016. “Miscellaneous Errors” and “Everything Else” make up almost 30 percent of the incidents. (The dynamic nature of threats over time.) At the macro level, how do providers estimate their “probable maximum loss” (PML) from cyber incidents? And when several parties get impacted, how is first-party/third-party liability assessed? If I unknowingly forward a malicious file to another party, am I at risk? The timing of claims can also be a pain point for the insured. As one expert bemoaned, we know there is fire when we see smoke. What about cyber? Intrusions can go undetected for 300 days or more. So how does an enterprise risk officer apply this logic to selecting the right coverage, negotiating premium amounts and exclusions? What claims can be denied? How does a nation-state attack (North Korea/Sony) come into play? According to a November 2014 survey, more than 50 percent of underwriters do not have dedicated people for cyber insurance. Even the intermediaries/brokers struggle. In a market watch survey, the Council of Insurance Agents & Brokers found that 71 percent of brokers believed there was little to no clarity about what is covered. Which means the brokers are more or less operating in the dark. The Council further reports, “Much rests with the individual broker’s ability to grasp exposures and coverage nuances and be able to intelligently discuss these with individual clients whose interest levels vary greatly. 
The two major points of contention are lack of a standard terminology and difficulty in spotting exclusions.” If we assume our CGL (Commercial General Liability) or D&O (Directors and Officers) policy coverage is sufficient, we may be in for a rude shock. When DSW, a shoe retailer, got hacked, AIG attempted to deny coverage and argued the loss was excluded. The court disagreed and DSW was entitled to coverage, but not without a legal battle. In summary, enterprise America will soon need a simpler way to identify the nuances of: (a) what does my policy cover, how relevant is it to my business risk and what are its exclusions; (b) which underwriter is the most sophisticated; and (c) how do I identify ways to reduce premium costs? Over time, an online marketplace may evolve. Underwriters do not put much value on usage of security products/tools. A Hanover survey from November 2014 shows an interesting pattern — the most important information is risk management philosophy, closely followed by the nature of data stored. The weight given to an “Updated Network Security/Firewall” seems somewhat laughable and, unfortunately, “Encryption” has very little importance. This needs to change. As companies become better informed about the importance of various security technologies, the criteria for underwriting could become more relevant. Silicon Valley can create not one but many disruptors in this space. Yet Silicon Valley should know that technology alone cannot solve all problems. People, practices and policies matter. It may be a while before we realize that Utopian dream of security working silently, automagically to protect us despite our follies, foibles and idiosyncrasies. When that happens, we will not need cyber insurance. For now, this is a market waiting to be disrupted by some 19-year-old wunderkind.
|
Utah representatives want to install porn blockers on all cell phones
|
Sarah Buhr
| 2,016
| 5
| 23
|
Utah state Senator Todd Weiler has proposed a bill to rid the state of porn by adding Internet filters and anti-porn software to all cell phones and requiring citizens to opt in before viewing porn online. It’s to save the children, he says. Weiler successfully pushed a resolution through the state Senate earlier this year declaring porn a “public health crisis.” He now hopes to take his movement a step further by making it harder for Utah citizens to have access to digital porn. “A cell phone is basically a vending machine for pornography,” Weiler told TechCrunch, using the example of cigarettes sold in vending machines and easily accessed by children decades ago. The senator says England was successful in blocking porn on the Internet. Prime Minister David Cameron pushed legislation through in 2013 requiring U.K. Internet service providers to give citizens the option to filter out porn. However, it looks like England’s Internet porn laws have had unintended consequences, with some programs blocking rape crisis centers, sex-ed sites for children and sites actually offering help to people with a porn addiction. Even if the bill passes, putting it into action also seems highly impractical — it would require major ISPs and cell phone makers to add special porn-filtering software just for Utah citizens. Plus, there are already systems available to parents, such as Google Safe Search and Microsoft Family Safety, which allow parents to set up permissions for their kids on websites, games, apps and movies. Pete Ashdown, the founder of Xmission, a local ISP in the state, called the proposal “unrealistic,” comparing it to censorship in China. “The Chinese government has poured hundreds of millions of dollars into censorship and has failed at restricting what its people can see. I don’t see Utah doing any better,” Ashdown said. Weiler says he doesn’t know how it would work, but just wants to put the idea out there; his main concern is kids looking at porn. 
“The average age of first exposure to hard-core pornography for boys is eleven years old,” he said. “I’m not talking about seeing a naked woman. I’m talking about three men gang-raping a woman and pulling her hair and spitting on her face. I don’t think that’s the type of sex ed we want our kids to have.”
The state also seems hyper-focused on pornography, compared to other states, with advertisements and groups taking up arms against porn. Billboards offering help and counseling to those with a “porn addiction” line the major metro areas along the freeway from Ogden to Provo. One such group contends pornography breaks up marriages and the family — something central to the tenets of the state’s main religion, Mormonism. The group has more than a million Facebook fans and also claims porn resembles a drug addiction in the brain, leaves you lonely and can ruin your sex life. The American Psychiatric Association does not recognize porn as an addictive disorder, and efforts to recognize “sex and Internet addiction” as a thing have been rejected. That’s not to say viewing porn doesn’t affect a marriage or relationship, and parents do have the hard job of talking to their kids about, and protecting them from accessing, a plethora of potentially confusing and detrimental content online. But is this a real issue in the highly religious and mostly conservative state? One bit of research claimed Utah has the highest rate of paid porn subscriptions in the country. Another study, from Pornhub, put Utah near the bottom of states for looking up porn. It seems requesting all ISPs and cell phone makers to create filters to keep porn away from Utah citizens is little more than good PR for conservatives in the state. Kids will find things you don’t want them to — both online and off. Filters are an imperfect solution. Measures to censor don’t work in China and they haven’t worked in the U.K. The legislature would serve citizens better by squashing this preoccupation with pornography and instead putting a real effort into educating kids — and possibly the adults in Utah — about what is healthy sexual behavior. Or focusing on a real public health crisis in the state.
|
Non-profit RideAustin looks to fill void left by Uber and Lyft
|
Devin Coldewey
| 2,016
| 5
| 23
|
Charity or opportunism? It’s so hard to tell, sometimes. In RideAustin’s case, it looks to be a bit of both. It’s a non-profit car-for-hire service that’s happy to abide by the city’s rules — and adds a layer of budget- and conscience-friendly features that may help it stay differentiated (and alive). With Uber and Lyft having left town after the city restricted rideshare companies, the stage is set for another company to swoop in and provide the much-needed service. RideAustin, launched today at an event at the city’s famous Alamo Drafthouse, is positioning itself to be that white knight. RideAustin’s not-for-profit status means that, at least in theory, costs and overhead can be lower, since there’s no need to maintain profit margins. A poorly managed non-profit, however, can still suck up money like anything else, so the proof will be in the pudding. The company also allows riders to round up their fares to the next dollar, donating the remainder to a local charity — a feature that really should be more common just about everywhere. It doesn’t look like you can tip in the app, however, so keep that in mind when you ride. Lastly, surge pricing will be — get this — optional. Instead of paying the premium, you can opt to wait — you’ll remain in drivers’ queues, but people paying surge price will get picked up first. We’re not really clear on the details here, or on how the feature can avoid being gamed. For instance, what if everyone just opts out? Other questions occur to one as well. How will they get enough drivers to sign up? What’s pricing going to look like? Did they really upload a picture of the app with no drivers in it to the App Store? I’ve sent over all of these questions — well, most — and will update this post when they get back to me. Naturally, RideAustin will comply with the fingerprinting and background checks required by the city, along with the rest of the new regulations. 
The startup emerged in the last two weeks, its backers perhaps sensing this might be the only time a small player can realistically expect to gain serious mindshare and market share. It’s not the only one, though: established but smaller apps are in the mix, and there’s another homegrown contender in Warp, which is also taking the charitable route. Warp plans to donate 25 percent of its profits to local non-profits (though presumably not RideAustin), and if someone drives for more than 40 hours a week, the company won’t take a commission. Full-time drivers will appreciate that. Warp, however, is a purely hypothetical service at this point, having raised only a fraction of the $40,000 it is seeking. RideAustin, on the other hand, has its app available and plans to start operations in June. Where’d it get the money to do that? The company says it is running on “private donations from locals,” and that companies have pre-paid for rides to grease the wheels, so to speak. (I asked for more info on this as well.) It’s wise to strike while the iron’s hot, but the sword of Damocles is already forged and hanging over their heads by a thread that Uber or Lyft could cut at any time. That’s not likely to happen, however: the companies have time and cash on their side, and probably feel perfectly at ease watching the small fry fight over table scraps. Here’s hoping they’re in for a surprise, though: shake-ups like this are good for pretty much everybody. Some details from RideAustin — rides will start at a modest $1.50 meter drop and $0.25 per minute, with service expanding as long as users can expect a good experience. The surge pricing thing they’re “working on.” When asked about funding, a representative wrote that “the Austin tech community has put in ~$4m so far in technology and in kind.”
|
Netflix touts Disney partnership to remind U.S. subscribers it still has movies
|
Lora Kolodny
| 2,016
| 5
| 23
|
In the midst of its quest to become a global entertainment service, Netflix has been drawing criticism from core U.S. customers who say its film and TV library — beyond original series and films — isn’t robust enough to whet their viewing appetites. Today the company published a video blog post, seemingly tailored for these restless U.S. customers, touting a pay TV partnership with Disney, along with forthcoming titles that it will stream first or exclusively throughout the summer. The partnership with Disney allows Netflix, as of September, to offer Disney, Marvel, Pixar and Lucasfilm movies during the same window in which cable networks like HBO would be allowed to air them, but after Blu-ray and DVD releases. In its video blog post, attributed to Netflix Chief Content Officer Ted Sarandos, the company presented a montage of customers’ complaint-tweets begging Netflix for more “scary, Korean, Mickey Mouse, gay, Nicholas Sparks” and other movies, as well as new TV shows. Sarandos and Netflix then highlighted newer and classic films that will be available in its catalog this summer, including: The Big Short, Hotel Transylvania 2, Spotlight, Goosebumps, the Back to the Future trilogy, the Lethal Weapon franchise, Sixteen Candles and The Wedding Planner. Netflix also reminded users of a spate of forthcoming originals that had already been announced, including a Brad Pitt starrer and a Christopher Guest-directed comedy. The company isn’t just appealing to its existing customer base ahead of the Memorial Day weekend, when viewership on Netflix spikes. It is also telling a story of an improved content offering to investors. The strategy seems to be working for now. UBS and RBC Capital issued positive buy or hold ratings on Netflix today, TheStreet.com reported. Netflix shares rose by 2.59 percent from the previous day’s close of $92.89 to $94.89. Around the same day last year (May 22, 2015), Netflix shares closed at $88.84.
|
Google and Oracle present closing arguments in battle over Java
|
Kate Conger
| 2,016
| 5
| 23
|
Attorneys for Oracle and Google presented their closing arguments today in a lawsuit over Google’s use of Java APIs owned by Oracle in Android. Oracle accused Google of stealing a collection of APIs, while Google suggested that Android transformed the smartphone market and Oracle sued out of desperation when its own smartphone attempts failed to launch. The case is expected to have sprawling impacts on the software industry. If the jury finds that Google did indeed steal code from Oracle, it could disturb the way engineers at small startups build their products and expose them to litigation from major companies whose programming languages they use. Before sending the jurors home last week, presiding Judge William Alsup joked that they should not look up what an API is online over the weekend. It was a lighthearted instruction meant to caution jurors against doing their own research in the case, but it struck at a fear that’s probably plaguing both legal teams — what if the jury still doesn’t understand the technology at the heart of the case? At issue in Oracle’s lawsuit is whether or not Google’s implementation of 37 Java APIs in Android was fair use. Google has argued that Sun Microsystems, which created Java, always intended for its programming language and accompanying APIs to be used freely. Oracle purchased Sun in 2010 and claimed that Sun executives believed Google had infringed their intellectual property and simply hadn’t brought legal action. An appeals court has already decided that the Java APIs in question are copyrightable. This case, which has stretched over two weeks in a district court in San Francisco, aims to determine whether Google’s implementation of the APIs can be considered fair use. Beginning this afternoon, the jury will consider several factors — most importantly, whether Google transformed Oracle’s code when it built Android, and whether the introduction of Android harmed Oracle’s business.
Before the two tech titans can clearly argue whether Google’s use of the APIs was fair, they need to agree on how to explain APIs to their lay audience in the jury box — and they haven’t done that. Even as Oracle and Google’s legal teams laid out their final arguments today, they bickered over how best to describe an API. Google’s witnesses and lawyers offered a litany of explanations for APIs. Google attorneys revived an analogy from the first round of Oracle v. Google, in which they compared the packages, classes and methods contained within the Java API library to cabinets, drawers and individual manila files. Other witnesses for Google entertained their own comparisons: Jonathan Schwartz, the former CEO of Sun, explained APIs by comparing them to hamburgers. Many restaurants have the word “hamburger” on their menu, he said, but the recipes — in the world of APIs, the implementations — are unique. Other witnesses sought to compare APIs to ubiquitous items like wall outlets and the gas pedals of cars. No matter the comparison, the point was the same: Google never expected that its use of something so common would become so contested. In a bid to portray APIs as a creative endeavor worthy of strong copyright protection, Oracle’s lead attorney, Peter Bicks, compared them to Harry Potter novels, saying the packages, classes and methods could be understood as the series, books and chapters. “Why are we looking at Harry Potter?” Google’s lawyer Robert Van Nest fired back during his closing argument. “This isn’t about Harry Potter. This is not a novel; it’s not a book. They want to talk about Harry Potter rather than what the labels do.” It’s not clear whether the jumble of popular novels and lunch items clarified APIs for the jurors or merely confused them.
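For readers who want the hamburger analogy in concrete terms, here is an illustrative sketch — not code from Java’s actual class library, and all names are invented — showing how a Java interface plays the role of the menu item (the API declaration) while each class supplies its own recipe (the implementation):

```java
// The "menu item": only the name and signature, no recipe.
interface Burger {
    String cook();
}

// One restaurant's "recipe" behind the shared name.
class DinerBurger implements Burger {
    public String cook() { return "griddled patty, plain bun"; }
}

// A different recipe, same declared interface.
class BistroBurger implements Burger {
    public String cook() { return "flame-grilled patty, brioche bun"; }
}

public class ApiAnalogy {
    public static void main(String[] args) {
        Burger a = new DinerBurger();
        Burger b = new BistroBurger();
        // Same API, different implementations -- the distinction
        // both sides spent the trial trying to explain to the jury.
        System.out.println(a.cook());
        System.out.println(b.cook());
    }
}
```

The dispute, roughly, is over copying the `Burger`-style declarations, not the recipes behind them.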
But it’s obvious that everyone else in the courtroom, from the attorneys to the judge, is concerned that the jurors won’t understand what APIs are or how they work — in a rare moment of agreement, Oracle and Google attorneys allowed the jurors to take their notebooks home over the weekend so they could study up. In his closing remarks, Bicks argued that Java formed the foundation of the smartphone market before the introduction of Android. Google engineers faced immense pressure to rush Android to market, in Bicks’ telling, and they took shortcuts to get there, which led to them ripping off the 37 Java APIs. “This is what this case is about: a company that believes it is immune to copyright laws,” Bicks said of Google, adding, “You don’t take people’s property without permission and use it for your own benefit.” Bicks staked his case on several embarrassing internal emails between top Google employees. He revisited one 2010 exchange that Oracle has often referenced as a smoking gun, in which Google engineer Tim Lindholm told Android team leader Andy Rubin that the alternatives to Java “all suck” and noted, “We conclude that we need to negotiate a license for Java.” Another email Rubin received from a team member fretted that Android hadn’t created a strong enough competitor to Java’s class libraries. “Ours are half-ass at best,” Google engineer Chris Desalvo wrote. “We need another half of an ass.” Bicks argued that the internal messages show Google didn’t believe that its use of the Java APIs was fair or legal, but that the company’s engineers moved forward anyway out of sheer desperation. In doing so, Bicks said Google devastated Oracle’s market. “Java was there first,” he said over and over, emphasizing the use of Java in feature phone operating systems like SavaJe and Danger and claiming that, prior to the introduction of Android to the market in 2008, almost all smartphones were running some form of Java. 
(The iPhone, which runs on Objective-C and was introduced in 2007, is a notable exception.) Not only had Java cornered the market, Bicks claimed, Android wasn’t as radically different as Google claimed. He presented a side-by-side comparison of the HTC Touch Pro, which ran Java, and the HTC Dream, which ran Android, as proof — and there’s no denying that the two phones look remarkably similar. Bicks said that once Google offered Android as a free and open source operating system, Oracle’s options for licensing Java were slashed. Their market crumbled, Bicks said, citing testimony from Oracle co-CEO Safra Catz in which she claimed she gave Amazon a discount to license Java in order to prevent the retailer from building its Paperwhite reader on Android. As he rolled through the factors of fair use, Bicks kept returning to a graphic showing the scales of justice. As he discussed each measure, it slowly descended into Oracle’s side of the scale, tipping justice ever further in Oracle’s favor. Near the end of his presentation, Bicks showed a slide of the form the jury will use to indicate whether it has ruled in favor of Oracle or Google, with a bright red X marking Oracle as the victor. “It takes somebody with strength and courage to stand up to somebody like Google, and that’s what Oracle has done,” Bicks said. During his closing argument, Google’s Van Nest characterized Oracle as a sore loser in the battle for corporate dominance. Android took over the smartphone market because it was a superior product to Java phones, not because it used the 37 Java APIs in question, he said. “Android is exactly the kind of thing that the fair use doctrine was intended to protect,” Van Nest told the jury.
He pointed out that Android transformed Java SE for use in smartphones when it had traditionally been used only in desktop computers and servers, and noted that, although Oracle made several attempts of its own to develop a smartphone with Java SE, they all failed. (It’s worth mentioning here that I worked briefly as a contractor with Google prior to joining TechCrunch, although my work was not related to Android and I had no contact with the Android team.) Van Nest claimed Oracle was preoccupied with the so-called feature phone market while Google was leaping ahead to the smartphone era, creating a product Oracle couldn’t have imagined or built on its own. Android changed everything, Van Nest argued. However, he claimed that Android’s dominance in the smartphone market had a positive effect on Oracle’s business by keeping Java relevant to the modern developer community. “The whole market has changed and you haven’t changed with it,” Van Nest said. “Android is the number one thing keeping Java out there, doing as well as it is.” Sun and Google executives both understood that Google’s implementation of Java in Android constituted fair use, years before Oracle finalized its purchase of Sun in 2010, according to Van Nest. He claimed that, even after Oracle took over Sun, it did not target Google immediately and in fact welcomed Android as a beneficial addition to the industry. The Java APIs were always intended to be used freely by anyone, Van Nest insisted, because doing so would promote the growth and popularity of Java. “Oracle had no investment, none of the risk. Now they want all the credit and a whole lot of money. That’s not fair,” Van Nest said. Van Nest also emphasized that Android engineers had only reimplemented a sliver of Java’s code rather than copying from it liberally. They took very little and radically altered what they did take. 
Despite the internal Google emails harped on by Oracle, Van Nest said that the company never imagined it was infringing on Oracle’s intellectual property — and adamantly denied any infringement when Oracle finally brought it up in the summer of 2010. “We will not pay for code that we are not using, or license IP that we strongly believe that we are not violating, and that you refuse to enumerate,” a former Google computer scientist, Alan Eustace, wrote in a June 2010 email to Catz. Oracle sued two months later. In closing, Van Nest attempted to appeal to the jury’s Bay Area roots by highlighting the tech industry’s history in the area. “We are number one in the world on innovation,” Van Nest said in reference to Northern California. Android, he added, “is the kind of innovation that comes along once in a lifetime.” The jurors will consider the case this week. Whatever their verdict, the case will probably be appealed — with $9 billion on the line, neither side is likely to go down without a fight. However, Oracle declined to comment when asked if it would appeal. Google did not return a request for comment.
|
Facebook ditches Bing, 800M users now see its own AI text translations
|
Josh Constine
| 2,016
| 5
| 23
|
Machine learning is accomplishing Facebook’s mission of connecting the world across language barriers. Facebook is now serving 2 billion text translations per day. Facebook can translate across 40 languages in 1,800 directions, like French to English. And 800 million users, almost half of all Facebook users, see translations each month. That’s all based on Facebook’s own machine learning translation system. In 2011, it started working with Microsoft’s Bing to power translations, but has since been working to transition to its own system. In December 2015, Facebook finally completed the shift, and now exclusively uses its own translation tech. Tap the “See Translation” button on Facebook posts, and machine learning AI will instantly show you the foreign text rendered in your language. Alan Packer, Facebook’s director of engineering for language technology, revealed this progress today at a conference in San Francisco. The conference has a big focus on artificial intelligence, machine learning and other cutting-edge ways to parse data. [Update: After his talk, I sat down with Packer, who told me about Facebook ditching Bing-powered translation. This article has been updated to add info from him.] Earlier, Pinterest’s head of product Jack Chou revealed that just six months after launching its visual search feature, Pinterest sees 130 million visual searches every month. The product was built by a small team of four, and allows people to search using a source image instead of just text. Pinterest also now has 50 million buyable pins from 20 million merchants. Facebook’s ability to not only translate but understand the content of text and images could lead to big advances in the relevancy of the News Feed. Packer explained that if Facebook can understand a post is asking for recommendations of hotels in Paris, it could surface that to friends it knows recently visited Paris, suggest a particular friend to ask or recommend making a related search for public posts of recommendations.
Facebook was an early pioneer of online translation, building a crowdsourcing tool to get users around the world to translate its interface’s text into their local tongues. In 2011, Facebook turned to Bing to translate users’ posts and comments in the News Feed. Last year, it acquired a startup using natural language understanding in text and voice to power new user interfaces. Google and Microsoft/Skype have also been building translation tools to unite the world across borders. Facebook initially turned to Bing because “we didn’t have our own technology but saw that there was value in it. We did a deal, turned it on, and got a lot of usage,” Packer tells me. The problem was that Bing was built to translate more properly written website text, not the way humans talk to each other. Packer says Bing “didn’t do well on slang, idioms, and metaphors. We really needed to train on our own data.” So Facebook looked at the languages most in need of translation, and got cranking on building a version of the tech that did better than Microsoft’s. “We did our own internal bake-off. When we could show it was better than Bing [for a specific language to language translation], we would turn Bing off and replace it with our own service.” Now, the translation is fully rolled out for 1,800 different translation permutations. When Facebook is confident its translation is perfect, it will automatically show the translation by default with an option to “See Original,” and it only shows the opt-in “See Translation” button when it thinks the translation might have errors. Packer tells me other Facebook teams, from anti-spam and policy enforcement to acquisitions like Instagram, are now considering how they could integrate translation. The motive is obvious, socially conscious and lucrative. Facebook’s mission is to make the world more open and connected.
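Facebook hasn’t published the mechanics of that confidence gating, but the display policy Packer describes can be sketched roughly like this — the class name, method, threshold and strings below are all invented for illustration, not Facebook’s real code:

```java
// Hypothetical sketch of confidence-gated translation display:
// show the finished translation by default when the system is
// confident, otherwise keep the original and offer an opt-in button.
public class TranslationDisplay {
    // Assumed cutoff for illustration; Facebook's real threshold is unknown.
    static final double CONFIDENT = 0.95;

    static String render(String original, String translated, double confidence) {
        if (confidence >= CONFIDENT) {
            // High confidence: translated text shown, original one tap away.
            return translated + " [See Original]";
        }
        // Lower confidence: original shown, translation is opt-in.
        return original + " [See Translation]";
    }

    public static void main(String[] args) {
        System.out.println(render("Bonjour à tous", "Hello everyone", 0.98));
        System.out.println(render("Bonjour à tous", "Hello everyone", 0.60));
    }
}
```

The point of the sketch is just the asymmetry: a confident system translates by default, a less confident one makes translation a user choice.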
I asked Packer how translation plays into that, and he said, “The mission of the translation team is removing language as a barrier to making the world more open and connected.” While he didn’t have concrete numbers, Packer says that access to the translation product leads users to “have more friends, more friends of friends, and get exposed to more concepts and cultures.” And the company knows it’s grown important to users, because “when we turned it off for some people, they went nuts!” The more people across the world that Facebook users can connect with, the longer they’ll spend on the social network, and the more revenue-earning ads they’ll see. We’re rapidly approaching an era of the AI haves and have-nots. Tech companies that lack the engineering prowess to parse the meaning of their content or information won’t be able to deliver it to users as effectively. Companies like Google, Facebook and Microsoft could flourish while others more strapped for cash and resources to invest in research stumble.
|
Snapchat is raising more money around $20 billion valuation
|
Ingrid Lunden
| 2,016
| 5
| 23
|
Snapchat may have first made its name in the crowded world of mobile apps with an ephemeral messaging service, but the startup and its wildly popular app are not disappearing anytime soon. TechCrunch has learned from multiple sources that Snapchat is raising yet more financing at around a $20 billion valuation. Sources with knowledge of the deal say the social media giant is in the process of a round of about $200 million. This new financing, we understand, is a follow-on to the company’s Series F. Snapchat’s valuation in that round was said to be flat on the year before. However, filings from earlier this month and embedded below, uncovered for us by market analysts at VC Experts, show that the Series F was expanded. Based on a share price of $30.72 per share — which VC Experts tells us was the value disclosed in an earlier Fidelity Fund filing related to its Snapchat investment — and assuming all of the authorized shares are issued, the more recent valuation could be as high as $22.7 billion. Authorized shares do not always all convert to issued shares, but this gives us a range that fits in with what we’ve heard about the $20 billion valuation. Expanding the Series F with a Series FP, as it’s described in the document below, would also fit in with a description we’ve heard more than once about Snapchat’s fundraising: The startup is “always raising” on a “rolling” basis, partly because investors are so interested. “They get offers all the time,” one investor close to the company said. “And once you start to grow on this path, many people come to give you money. You don’t know how to value the company, so the best way to do that is to do some kind of rolling funding. When you have a hot company and many people are approaching you, you do a market of discovery.” That may be different from other startups, but in a way it reflects Snapchat’s own fast growth and its taste for trying out new things like QR codes to connect to accounts and content, their crazy face-changing filters and more.
Besides Fidelity, other existing investors in the company include Alibaba, which led its Series E; Benchmark (Series A lead); Coatue Management (Series C lead); General Catalyst; IVP (Series B lead); Saudi investment group Kingdom Holding Company; KPCB (Series D lead); Lightspeed (Snapchat’s earliest and most constant investor); SV Angel; WeChat owner Tencent and Yahoo. We hear that many existing investors are looking to participate in this new round, including Spark Capital. Snapchat declined to comment on the newer fundraising. Sources close to the company confirmed that a previous round had already closed earlier this year at a valuation different from the $20 billion we’ve been hearing. Snapchat, based out of Los Angeles, has had a rapid rise since launching five years ago. The startup says it has a large and highly engaged user base for its photo and messaging service. While there is no question that Snapchat is gaining traction, particularly with the millennial demographic, the hefty valuation can be risky. The greater the valuation, the fewer the companies that could afford to acquire Snapchat. It also puts pressure on an eventual IPO, with the expectation that the company’s market cap will be higher. Tumultuous tech stocks and other unknown factors resulted in Fidelity marking down the value of its Snapchat shares. But that didn’t prevent the investor from buying more shares this spring. Those who are bullish on the company believe the high valuation will bear out. The company, said a source, is thinking “two or three generations out” in terms of its growth and what it will tackle next, whether that is more international markets, a wider range of demographics, new kinds of advertising or other paid services or new products altogether. One source claimed that hardware is in the company’s sights, which is not the first time this has come up. It is also possible that it will continue to make acquisitions to fuel growth; it has reportedly made at least one acquisition for $100 million in recent months. The new round would bring Snapchat’s total funding to over $1.5 billion. [scribd id=313598967 key=key-Lf9uZbAiHXGMcUeFdQJZ mode=scroll]
|
Facebook denies bias in Trending Topics, but vows changes anyway
|
Devin Coldewey
| 2,016
| 5
| 23
|
Facebook responded today to official queries regarding its Trending Topics feature, specifically allegations made over the last few weeks that the team responsible for it was deliberately suppressing conservative views and arbitrarily elevating stories with little oversight. In a statement issued publicly and in a letter sent directly to Senator John Thune (R-SD), Facebook denied the allegations, but nevertheless announced a number of changes to internal processes that should help appease critics. Our investigation has revealed no evidence of systematic political bias in the selection or prominence of stories included in the Trending Topics feature. In fact, our analysis indicated that the rates of approval of conservative and liberal topics are virtually identical in Trending Topics. At the same time, as you would expect with an inquiry of this nature, our investigation could not exclude the possibility of isolated improper actions or unintentional bias in the implementation of our guidelines or policies. Specifically, Facebook looked at how and when stories were “boosted,” “blacklisted,” or subjected to “injection” or correction — and found that rates “have been virtually identical for liberal and conservative topics.” Part of the problem, as described by the anonymous ex-Trending Topics curators whose testimony informed the accusations, was a lack of oversight — and plain bad management. Among the changes to the program are “additional controls and oversight around the review team” — and presumably further oversight of the oversight team itself, since watching the watchers is highly important in a situation like this, with an editorial team encapsulated within a decidedly non-editorial environment.
In addition to the extra red tape, the process of finding and selecting Trending Topics items is getting a makeover. If you were curious about the exact process by which a story goes from hashtag or local news to Trending Topic, Facebook has now laid it out; the account contains lots of previously unknown details, though many will now be obsolete. Specific allegations of bias — for instance, that stories about Glenn Beck (who commented on his own meeting with Facebook on this topic) were suppressed — are also addressed. Senator Thune weighed in today as well, praising Facebook’s handling of the issue but at the same time getting a couple jabs in. The seriousness with which Facebook has treated these allegations and its desire to serve as an open platform for all viewpoints is evident and encouraging and I look forward to the company’s actions meeting its public rhetoric. Facebook’s description of the methodology it uses for determining the trending content it highlights for users is far different from and more detailed than what it offered prior to our questions. In other words: why did it take a major potential scandal for Facebook to offer details on how this high-visibility feature worked? A FAQ on the topic might have avoided the controversy altogether. No further action appears to be necessary, Sen. Thune concluded: “While the [Senate Commerce] committee remains open to new information on this matter, transparency – not regulation – remains the goal.”
|
Tim Cook admits ‘prices are high’ for iPhones in India
|
Devin Coldewey
| 2,016
| 5
| 23
|
Apple CEO Tim Cook made an uncharacteristic admission today in an interview — that iPhones are too expensive. He immediately qualified it, and the context really is important, but it’s just one of those things you don’t hear very often. A recent analysis found that India is among the most expensive places in the world to buy an iPhone, with prices averaging 31 percent higher than the U.S. — only Sweden, Indonesia and Brazil have it worse. After some opening softballs (or rather, cricket balls), Cook faced some serious grilling on this topic from NDTV’s Vikram Chandra. “You’ve got an iPhone here which is more expensive than it is in the U.S., with less functionality than it would have in the U.S., and in a country where purchasing power is a fraction of what it is in the U.S.,” Chandra said. Cook acknowledged the truth of the statements, but caught himself before going too far. “The duties and the taxes and the compounding of those takes the price and it makes it very high. Our profitability is less in India, it’s materially less — but still I recognize that prices are high.” “We want to do things that lower that over time, to the degree that we can,” he said. “I want the consumer in India to be able to buy at a price that looks like the U.S. price.” He didn’t detail much in the way of concrete steps to this end, however, saying the company was “looking at India holistically” before getting into retail or fiddling with carrier relationships. And it’s worth noting that he did not say the iPhone was overpriced, merely too expensive. They’re different things, not that it matters to someone who has to pay $850 for a base-level iPhone 6. “What we see here is talent,” he said, in response to questions on how Apple would be investing in its Indian presence. “That means on iOS. We’re also using a lot of skills in India for maps… [it] will be several hundred million dollars worth of work.” But the iPhone wouldn’t be getting a special, localized version at lower cost or with highly customized services.
“We want to bring Apple Pay to India. We want to bring every service that we do to India — every one,” he said. “And if there’s something unique that’s needed, we also want to do that.” But not more than that: “I don’t believe personally in trying to be something you’re not. We are what we are. We’re a California company.”
|
How to get a show on TechCrunch
|
Khaled "Tito" Hamze
| 2,016
| 5
| 23
|
I’m pretty excited and happy to announce that today I’m taking over Crunch Report. I edited over 150 episodes of Crunch Report back when Sarah Lane hosted the show and I learned a lot: how to be super efficient at editing, learning what I personally liked and disliked about Crunch Report and basically eating, breathing and living this show. Since Sarah left in March, there has been an awesome rotation of TC writers hosting the show, plus myself (not a writer). We were all temporarily filling in until there was someone permanent to fill the spot. Well, after a couple of months, I’m pretty pumped to let you know that I’m the one who’s going to be raging on Crunch Report. Lucky you. So, on top of editing, sourcing material, shooting the show and distributing it to all the different platforms TechCrunch is on, I will also be writing and hosting it. It’s a pretty big task, but it is one that I know I am prepared for and one that I know, no matter what, I’m going to have a fun time doing. So, instead of just telling you that I’m the new host, I thought I’d tell you my journey to this point and maybe it can help with something you want to do, like start a show on TechCrunch. A good starting point is to be entirely obsessed with TechCrunch, like stalker-status obsessed. Like, you remember that girl when you were in high school you thought about every day? Yeah? Well, TechCrunch should get you that excited and not just in the pantaloons, you dirty dog. I have been a fan of TechCrunch since around 2010. I read it all the time. It was my go-to for technology news. Of course, there were other publications, but TC was always the first one on my list to get updates. I consumed everything TC covered. Then, one day in 2012, there was a VC apprentice competition for Disrupt. I sent in some basic information and a one-minute video pitch to apply. I received a free ticket to experience Disrupt and, most importantly, meet some TechCrunch people. The on-stage interviews captivated me.
Seeing the many talented TC writers in person, in their TC track jackets with their Twitter handles on their backs, was like spotting celebrities. I was like a freaking kid in a freaking candy store! That experience was a catalyst to get me involved in the startup community. I learned about lean startup methodology and actually built some of my own businesses. To this day, I read TechCrunch almost every day by choice — and, as a bonus, I get paid for it. Create things. Be it writing, videos, photography, Snapchats or sandwiches. Create as many things as you can, as often as you can, and aim to try to make whatever you are working on better than the one before it. Constantly be creating. Try to one-up yourself. Allow other creators to influence you. Learn from them and then apply it. Look at ideas you can “exploit.” By that I mean that if you see something that’s working, take advantage of it and optimize it as best you can. Sometimes you really must create opportunities for yourself. During the VC apprentice competition, I met the COO of TechCrunch, Ned. I stayed in touch with him even if it was just to say, “Hi. How’s it going?”, to show him the latest video I created or to ask him if I could get a hook-up on a Disrupt ticket. I tried to remain on his radar and be available for anything TechCrunch might need. The first opportunity I had was freelance video work where I met the TC video team. At that time the team consisted of John Murillo and a couple of others. I tried to be as helpful as I possibly could be. Then, after that I remained in touch. My big “foot in the door” came when they hired a new executive producer, and good ol’ Ned, bless his soul, put my name in as a recommendation for a shooter/editor position. From then on it has been a montage of shoots, edits and being creative, which for me is the most important part. The first time I edited Crunch Report, I was sweating profusely.
My signature sweaty pits were on point that day and I think I was having an internal anxiety attack. It is not the easiest of edits. You have to source stories as they come in from 11 a.m. – 2 p.m., shoot it at 2 p.m., then have it completely finished, turned around and edited by 3:30 p.m. That allows enough time to export and upload to all platforms ready to be viewed by the TC audience at 4 p.m. It’s one mofo of an edit. The first show of which I had absolute ownership was one I pitched myself, and I pulled out all the stops. I put in loads of planning and pre-production on my own time and I even traded a favor with a friend so he would create graphics for me. John Murillo and Joe Zolnoski were kind enough to lend me some of their time. We did the shoot early in the morning before other shoots and sent the link to the higher-ups. They liked it and started airing it. It wasn’t something that was expected of me. I simply had the chance to make something I really wanted to do and I just did it. I didn’t talk about it (or even ask permission). I just went and made it happen. The culture at TC is very much a “get shit done” culture, which I very much thrive in. As for Crunch Report, I wasn’t even in the lineup to host, but I really wanted to do it because I knew the “backend” (workflow, editing) so well that I really wanted to have fun with the “front end” (shooting, hosting). During a team meeting, the editor-in-chief of TechCrunch walked by, and here was my chance. I blurted out in the middle of a video team meeting, “Can I host a Crunch Report?” I wasn’t sure what the outcome would be, but he replied, “Yeah, go for it.” I had all I needed to get started and I shot it that following Thursday. It was very trying the first time. It reminded me of the first time I edited Crunch Report: a little anxiety and a LOT of sweaty pits. However, I had a blast with it and the response was overwhelmingly positive, so here we are now. This is that first episode.
Taking on Crunch Report means being 100 percent responsible for it and not working on any other shows, so I wasn’t sure exactly how I felt about it at first. However, after talking to many people inside and outside of TechCrunch, I am extremely excited about what the future of Crunch Report holds. I vow to do my best to get you some tech news that is fun, entertaining and educational. I’ll never be able to fill Sarah Lane’s shoes, but now it’s my turn and I plan on delivering one hell of a show. Just a quick shout out to the people on the video team past and present who were/are behind the scenes on this show: John Murillo, Yashad Kulkarni, Felicia Williams and everyone else. <3 all. Also, to the rest of the people inside and outside of TechCrunch who have been supportive, it is all deeply appreciated.
|
Ex-Twitter VP Brian “Skip” Schipper lands at Yext
|
Katie Roof
| 2,016
| 5
| 23
|
Schipper will serve as Chief People Officer, overseeing global HR for New York-based Yext, and will lead the expansion of the company’s global workforce as it moves toward an IPO. He previously held the same title at Groupon. “Yext is just getting started,” said Schipper. He hopes “to help build on a terrific reputation that Yext already has as a great company to work for.” Yext powers the search data for more than 600,000 locations worldwide. If someone Googles a brand, Yext provides the information on its nearby stores that shows up in search results. Clients include Sephora, FedEx and Sunglass Hut. Yext saw its workforce grow by 38 percent last year, and currently has more than 500 employees. The company is headquartered at 1 Madison Avenue near New York’s Madison Square Park.
|
Lyft will now let you schedule trips ahead of time
|
Sarah Buhr
| 2,016
| 5
| 23
|
Ridesharing platform heard the people and the people wanted to schedule rides ahead of time. Starting today, Lyft is adding a new feature to let you schedule in advance that ride to the airport or crucial business meeting for which you need to be on time. , a ridesharing service in New York, has offered this feature for more than a year and we’ve heard rumors has toyed with the idea of scheduling rides ahead of time, but so far does not offer that option. It’s unclear how the service will be affected by surge pricing during high-traffic times. It’s also unclear how popular the feature will be with Lyft riders. We asked how much of a demand this was on Lyft’s end and a spokesperson simply added: “Passengers have asked us for this feature, so we want to roll this out to give additional peace of mind to our users when they have specific trips coming up — like to the airport. People like the security of knowing they already have a ride scheduled.” I know I’ve personally wanted this option for some time and it would be nice to know a ride is for sure coming at the time I need to be picked up. The option works by selecting a pickup location, tapping a clock icon on the right of the Lyft app, then selecting the time you want to be picked up. You can update or cancel a scheduled ride up to 30 minutes before pickup without getting charged. The scheduling feature is only available in San Francisco for now. Lyft will test how it goes and start rolling the feature out from there. Is this a foreshadowing of our driverless car, chatbot, smart home future? Imagine asking Alexa to schedule you a self-driving car during pre-assessed low-traffic intervals to get you wherever you want to go on time.
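The booking flow above boils down to two rules: pick a future pickup time, and update or cancel free of charge until 30 minutes before it. Here is a small sketch of that cancellation rule; the function and variable names are my own, not Lyft's actual API.

```python
from datetime import datetime, timedelta

FREE_CANCEL_WINDOW = timedelta(minutes=30)

def can_change_free(pickup_at: datetime, now: datetime) -> bool:
    """A scheduled ride can be updated or canceled without charge
    up to 30 minutes before the pickup time."""
    return now <= pickup_at - FREE_CANCEL_WINDOW

pickup = datetime(2016, 5, 24, 9, 0)  # a ride scheduled for 9:00 a.m.
print(can_change_free(pickup, datetime(2016, 5, 24, 8, 15)))  # True: 45 minutes out
print(can_change_free(pickup, datetime(2016, 5, 24, 8, 45)))  # False: only 15 minutes out
```

The boundary choice (exactly 30 minutes out still counts as free) is an assumption; Lyft's announcement doesn't specify it.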
|
We need more driverless car accidents
|
Shawn DuBravac
| 2,016
| 5
| 23
|
A driverless car was involved in a traffic accident on a California city street earlier this year. No one was hurt in the small fender bender, but the accident does signal we are making incredible leaps forward on the road toward driverless cars. It may sound counterintuitive, but this crash shows just how far autonomous technology has come in such a short time. This wasn’t really an accident in the traditional sense — intentional software changes implemented just weeks earlier were likely a contributing factor. With any luck, we will continue to accelerate real-world experimentation and the possibility of more accidents to come. The incident was triggered by a subtle software update, implemented a few weeks prior in all of Google’s autonomous cars, that enabled them to “hug the rightmost side of the lane,” a common social norm that allows other drivers to pass on the left. According to the , the Google autonomous vehicle was shifting within its lane to bypass an obstacle in its path when it made contact with a bus approaching from behind. The car was traveling slower than two miles per hour at the time of impact — the bus, about 15 mph. The fact that Google was testing this new behavior to hug the right side of the lane shows that the technology has developed beyond simply following the rules of the road to actually driving in a more “human-like” way — in line with the social elements of driving. Accidents like this are vital learning exercises. Google’s driverless vehicles cover more than , in addition to the three million miles of computer-simulated driving taking place daily. But these real-life tests are crucial. Driverless cars won’t just change who or what is behind the wheel — they hold the potential to change where we drive, which in turn can change our commutes, vacation plans and how we connect with family and friends. 
Driverless cars are positioned to forever change the world in which we live… but before they can do that, they must fundamentally and fully understand that world. And the only way to fully understand our world is to explore it by taking risks. The history of innovation is built upon pushing the frontier of what has been done before. From Lindbergh’s nonstop solo flight across the Atlantic in 1927 to Chuck Yeager breaking the sound barrier in 1947, we have long been demystifying the unknown in the name of progress. While shifting within the same lane may seem like a minute detail, autonomous car developers, like Lindbergh and Yeager before them, are fundamentally changing transportation. Subtle changes in technology are what will propel us forward. From air bags to automatic windshield wipers, on-board diagnostics to collision avoidance systems, our cars look much different than they did 30 years ago, thanks to incremental innovation over a long period of time. Innovation builds exponentially, but requires risk. The first seven U.S. astronauts were military test pilots. When they were selected in 1959, no one had come close to leaving the earth’s atmosphere. And at the time, no one knew whether any of these seven men would be successful. These innovators pushed the frontier of what we knew to be possible. Back then we were pushing for discovery and exploration, but we were also in a space race with our Cold War rival, the Soviet Union. Countless Americans risked their lives, and some of the very first Americans in space made the ultimate sacrifice, to win that race. Make no mistake: Today, we find ourselves in an even more important race. More than people are killed annually in traffic deaths globally, and almost all of these deaths are caused by human error. Autonomous vehicles hold the greatest promise of eradicating one of the most deadly forces on earth, but first we must push the limits. 
Autonomous vehicles need to learn to be aggressive to the right degree and in the right ways. Much of this can be determined within computer simulations, but some of it must be determined on open roads, where obstacles are dynamic and complex. There will be accidents along the way. And from these incidents, we will gain treasure troves of intelligence that will push us further along the innovation curve. Hopefully, we haven’t seen our last autonomous-vehicle accident; hopefully everyone will see them for what they are if/when they occur: invaluable steps pushing us across uncrossed frontiers.
|
Zmodo’s Pivot gives you 24/7 monitoring on the cheap
|
John Biggs
| 2,016
| 5
| 23
|
We’ve come a long way from the days of pale beige IP webcams with wonky interfaces and limited recording times. Today’s webcams, like the , are motion-sensing, low-light-seeing, and notification-rich cylinders that hide out in secret places and store video for days at a time. In essence webcams have morphed from wonky toys to actual tools and, in the process, have gone down in price. systems including the Pivot, the Greet, and the Torch Pro. The Pivot is their home security product that works much like Canary or Dropcam. It’s basically a black cylinder with a low-light camera built-in and a pivoting head that lets you move it from side to side and even pan to look at a door or window when you connect special sensors. The system can record footage to the 16GB internal memory and you can access the video from anywhere in the world, which means you don’t have to spend anything on monthly video hosting fees. Best of all, at $149, this is one of the cheapest yet most feature-rich webcams I’ve seen. It’s a clever solution at the right price. [gallery ids="1326395,1326396,1326397"] The system is straightforward. You pull it out of the box, connect it to your WiFi via the included app, and watch your house. It includes two door or window sensors that will make the Pivot turn towards whatever just opened, which means you can have it trained on a central location until someone comes in and then have it follow that person as they enter the house. It’s a lot of fun in theory but it presumes the sort of wide vantage point available to folks with bigger houses and, presumably, front and back doors. The quality of the video, as shown above, is solid. There is an obvious fisheye effect thanks to the wide-angle lens but that’s to be expected. Further, the night vision mode works well but can often get confused if you’re viewing a room with one light in it. 
The system defaults to low-light mode when there are no lights shining directly in the frame, so on some evenings half of the room is black and white and the other half color. Neither of these is a showstopper. The system notifies you when it senses a door open or movement along its field of view. You can change notification settings in the app. The best thing, however, is that the Pivot stores video right on the camera. This means that you don’t have to trust a third party with your video and you have complete control over it. The company is working on some cloud solutions but the built-in memory is great. The system also includes a Bluetooth connection so you can turn it into a Bluetooth speaker, as well as temperature and humidity readings. I’ve used a number of IP webcams over the years and they’ve improved immensely since we first installed a pan-and-tilt model that scared our dog when we used to activate it remotely. The Pivot is silent, usable, and offers an interesting take on video surveillance that doesn’t depend much on the cloud. While it’s not as richly featured as Dropcam it is a nice, simple solution and the door sensors are a clever addition to the standard webcam model. It’s not absolutely perfect but it’s good enough for use in situations where you want to keep an eye on a pet while you’re home or want a notification when someone comes through the door or window. Best of all, and I’m just spitballing here, the Pivot looks just like the Amazon Echo so you can put the Pivot and Alexa together and have them mate, creating a family of lipstick-tube-sized webcam AIs that can populate the world. That, friends, is a dream I can get behind. [gallery ids="1325980,1325983,1325982"]
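The sensor-driven panning described above is, at heart, simple event handling: each door or window sensor maps to a preset camera angle, and an "open" event swings the camera there. A rough sketch of the idea follows; the names and angles are illustrative, not Zmodo's actual firmware.

```python
# Illustrative sketch of sensor-triggered panning; not Zmodo's actual firmware.
SENSOR_ANGLES = {
    "front_door": 40,    # pan angle (degrees) that frames the front door
    "back_window": 310,  # pan angle that frames the back window
}

class PivotCamera:
    def __init__(self, home_angle=0):
        self.home_angle = home_angle  # default "central location" to watch
        self.angle = home_angle

    def on_sensor_open(self, sensor_id):
        """Pan toward whichever door/window sensor just reported 'open'."""
        self.angle = SENSOR_ANGLES.get(sensor_id, self.home_angle)
        return self.angle

cam = PivotCamera()
cam.on_sensor_open("front_door")
print(cam.angle)  # 40
```

Keeping the camera trained on a home angle until a sensor fires is exactly the "watch one spot, then follow whoever came in" behavior the review describes.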
|
Facebook enables Continuous Live Video to power puppycams and more
|
Josh Constine
| 2,016
| 5
| 23
|
Live mobile video is evolving beyond selfie-stream rants and citizen journalism. Facebook will now allow non-stop, long-form broadcasting as long as the creators don’t mind that they won’t be able to permanently save and share the video. The new Continuous Live Video API enables persistent streams like nature feeds, 24-hour windows into major landmarks or cameras trained on a pit full of puppies, Facebook revealed to me. This is just one way Facebook has attracted broadcasters to its . It had 12 partners when it launched at F8 in April, but has grown to more than 100 today. Instead of just streaming from their phones, the API lets more professional broadcasters use their own high-grade cameras, mixing boards and effects suites, plus control who sees their Live videos. The Continuous Live Video launch represents a breakthrough for Facebook’s engineering team. Previously, Live streams could only be up to 90 minutes long. That means you couldn’t broadcast a whole conference, sporting event or party, let alone leave the camera running day and night. But now Facebook has figured it out. The only trade-off is that unlike normal Facebook Live streams, there’s no option to let people replay the stream later or rewind to earlier. That relieves Facebook of the server costs of hosting insanely long videos. Facebook’s head of video Fidji Simo tells me, “We’ve already seen some interesting use cases — for example, it was used by to power nature cameras — and we’re looking forward to seeing what Live API developers come up with in the future. We expect developers and publishers to get creative with this new capability.” Another new feature luring broadcasters to the Live API platform is geogating, which lets creators “access the same control and customization options we offer for regular videos,” Simo tells me. Geogating lets publishers make a video visible only to people in a particular location if that’s where it’s most relevant or they only have limited broadcast rights. 
They can also set a video to expire after a certain time if they want to achieve added urgency or if it only makes sense soon after an event. There’s also age-gating so only users over a certain age can see a video, which could be important to brands in restricted industries like alcohol that might want to use Live for marketing. Other options like on-screen graphics and multi-camera broadcasts that are now supported in the Live API have pulled in additional partners to Facebook’s platform. is answering questions about the NFL Draft, gave viewers Live looks at celebrities on the red carpet at the movie premiere of “Alice Through the Looking Glass,” and brought viewers inside the White House Correspondents’ Dinner. Facebook is racing to make Live the most robust real-time video platform available. Today we wrote about how Facebook is starting to show so you can see when the most exciting moments were and skip there. Facebook foresees Live as a huge part of the future of video, but it’s still trying to establish itself as the leader in the space after launching several months after Twitter’s Periscope. With a clear imperative from CEO Mark Zuckerberg and enormous resources, Facebook is quickly advancing the fledgling medium.
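To make the gating options concrete, here is a sketch of how a broadcaster's request payload for a geo-gated, age-gated, expiring stream might be assembled. The field names below are assumptions for illustration only, not Facebook's documented Live API surface.

```python
import json

def gated_stream_payload(title, countries=None, age_min=None, expire_minutes=None):
    """Build a request payload for a restricted live stream.
    All field names here are hypothetical, for illustration only."""
    payload = {"title": title}
    targeting = {}
    if countries:        # geogating: only viewers in these countries see the stream
        targeting["geo_locations"] = {"countries": countries}
    if age_min:          # age-gating: e.g. alcohol brands restricting to 21+
        targeting["age_min"] = age_min
    if targeting:
        payload["targeting"] = targeting
    if expire_minutes:   # expiry: pull the video soon after the event ends
        payload["expire_after_minutes"] = expire_minutes
    return json.dumps(payload)

print(gated_stream_payload("Red carpet cam", countries=["US"], age_min=21))
```

The point is the shape of the controls, not the exact API: visibility restrictions travel with the broadcast request rather than being applied per viewer afterward.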
|
Amazon no longer offers price match refunds on anything but TVs
|
Sarah Perez
| 2,016
| 5
| 23
|
has quietly ended its price protection policy on all products except for televisions. The change to the company’s policy comes at a time when a handful of startups have launched to help consumers automate the process of requesting refunds when prices change on online sites, including Amazon and dozens of other e-commerce stores. For example, newcomer recently debuted a mobile app that helps consumers get their money back on purchases after price drops. Earny co-founder Oded Vakrat says that, so far, around 50 percent of the refund requests the app handled were for Amazon purchases. Earny also competes with , which offers a similar service both online and on mobile. Meanwhile, older sites like allow consumers to track Amazon price drops and receive alerts. Prior to this policy change, Amazon’s price protection policy was already one of the least friendly to consumers, as it provided only seven days of price matching on price drops. That means if you purchased an item from Amazon that the company later marked down, you could request a refund. However, unlike many stores, Amazon only matched its own prices for items, not competitors’ pricing — with the exception of TVs and cell phones. In comparison, other stores have more pro-consumer policies, including Best Buy, which provides price matching during its return and exchange period, and Walmart, which offers 90 days of protection, for example. Vakrat says his company noticed Amazon’s policy change a couple of days after the startup’s launch in early May — around May 7th or 8th, 2016. Initially, Amazon agents honored price-matching requests as usual, saying that if an item is shipped and sold by Amazon, the company has a seven-day price match from the time of delivery. 
For example, see the screenshot of the email below: But when the new policy went into effect, customer service agents instead said that “with the exception of TVs, Amazon.com doesn’t offer post-purchase adjustments.” also noticed the policy change around the same time, with some of them even being told by agents that Amazon never offered its prior price protection policy — that the refunds it issued in the past were an “exception.” Amazon’s website also now reflects the new policy, saying that: “Amazon.com consistently works toward maintaining competitive prices on everything we carry and will match the price of other retailers for some items. Amazon.com will price match eligible purchases of televisions with select other retailers. For all other items, Amazon.com doesn’t offer price matching.” The site then links to a page that explains how it will price match for TV purchases. As for how this change will impact startups like Earny and Paribus? Vakrat optimistically referred to this blow as a “great opportunity” to show why consumers need startups like Earny to have their back. Amazon insists that its policy has not changed — it says that its prices are dynamic and that its customer service agents have made exceptions in the past, but that wasn’t the rule. In addition, Amazon wants to caution its customers that sharing their credentials with third parties puts their accounts at risk. Amazon offered the following statement on the matter: Our customers expect to come to Amazon and find the lowest prices and we are obsessed with maintaining that customer trust. We work hard to find the best prices out there and match them for all customers every day. Further, we take customer security very seriously and want to remind them not to share their Amazon account credentials with anyone.
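The mechanics that apps like Earny automated are simple to state: under the old policy, an item shipped and sold by Amazon qualified for a refund if its price dropped within seven days of delivery; under the new one, only TVs get post-purchase adjustments. A sketch of both rules as a third-party app might encode them; this is my own formulation, not Amazon's or Earny's code.

```python
from datetime import date, timedelta

def old_policy_refund(delivered_on, checked_on, paid_price, current_price):
    """Former policy: a seven-day, Amazon-prices-only match from delivery.
    Returns the refundable amount (0.0 if not eligible)."""
    in_window = checked_on - delivered_on <= timedelta(days=7)
    drop = round(paid_price - current_price, 2)
    return drop if in_window and drop > 0 else 0.0

def new_policy_allows_adjustment(category):
    """New policy: no post-purchase adjustments except televisions."""
    return category == "tv"

print(old_policy_refund(date(2016, 5, 1), date(2016, 5, 6), 49.99, 39.99))  # 10.0
print(new_policy_allows_adjustment("headphones"))  # False
```

Automating the old rule just meant polling an item's current price each day inside that window and filing a request on any positive drop, which is why the policy change cuts these startups off at the source.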
|
Spice up your Snapchats with Stickers, not just emojis
|
Fitz Tepper
| 2,016
| 5
| 23
|
Snapchat just rolled out an update that lets users decorate their Snaps with the 200+ stickers they released. The new feature works exactly like emojis — they even live inside the same button on the camera screen. Just like the emoji feature you can drag, resize and reposition any sticker on your photos or videos. The stickers even work with Snapchat’s . While not technically a new feature, the update will certainly help users better communicate their thoughts and emotions via Snaps. Unlike emojis, the stickers aren’t sorted into different categories — they all live inside one long scroll list. And because this list doesn’t have the same sticker search capabilities that Chat has, it may be hard to find the exact sticker you want to use. Another potential issue is that some of the stickers are better suited to being sent in chat instead of adorning a photo or video. While most users put emojis on their content as a decoration (like adding a hat or sunglasses to a selfie), these stickers will probably be used in a different way. Notably, this isn’t the only news to come out of Snapchat HQ today. This morning the company replaced all of its ultra-popular Lenses with nine different X-Men themed sponsored Lenses. Normally Snapchat adds sponsored Lenses to the front of the lineup but leaves the non-sponsored ones (which are normally more popular with users). This is the first time it has ever totally removed the regular ones. While the X-Men ones are certainly well-made, they hamper the creativity of users who rely on the rest to express themselves to friends on the platform. Removing the beloved Lenses was an irresponsible decision for a company that is trying to prove to users it won’t sacrifice its core experience for the benefit of corporate sponsors. And while this move will only last 24 hours, it says exactly the opposite — that short-term financial gains are more important than keeping users happy. 
The new sticker feature is live now with updates available in the iOS App Store and Google Play Store, and the X-Men Lenses are already live in the app, if you feel like checking them out.
|
Chat app Line begins offering delivery on-demand services
|
Jon Russell
| 2,016
| 5
| 15
|
The future of mobile messaging is pointing towards services, but don’t assume that only applies to digital offerings like money transfers and shopping. Japan’s Line, one of the world’s most used chat apps, has just made a move into on-demand services. The pilot is initially in Thailand — one of Line’s strongest markets and a previous test bed for and streaming services — where its 30 million-plus users can make use of courier, food and grocery delivery services. Line announced plans for the service earlier this year, and this is it. The Line Man service is powered by Lalamove, a logistics startup that will provide motorbike delivery staff wearing green jackets. Line said that Thailand is the first market for the service, but it could expand to other parts of the world if it proves successful. While Line has over 218 million active users according to its most recent data, 69 percent of those are based in four countries: Japan, Thailand, Taiwan and Indonesia. The maturing of messaging markets means most countries already have their top apps that people use, and thus growing a userbase in a market where your chat app is not top dog becomes hugely challenging. Line’s prospects of growth in other parts of Asia — the continent where it is focusing its resources — are tough, but having a strong following in those four aforementioned markets does mean that services like Line Man have a decent shot at gaining traction where Line is popular. Beyond existing courier services, such as FoodPanda, Line is likely to be challenged by ride on-demand services: in Indonesia, for example, another on-demand startup, Go-Jek, already plays in that space. Uber, too, has moved into deliveries, but it has yet to expand those services to Asia — that could change in the future, though. Uber Moto is currently in India, Thailand and Indonesia, but is likely to expand to other parts of Asia over time. As if that wasn’t enough crossover, Line has operated its own taxi-hailing service in Japan since 2015, although it remains to be seen whether that initiative will expand overseas. 
Line has been linked with a public listing for the last two years. The Japanese company reportedly canceled dual U.S.-Japan listings in 2014 and 2015, and already this year. Showing growth is the name of the game for going public, and this move into services is part of the company’s efforts to be “more than just a messaging app” for its users.
|
Penny raises $1.2M in seed funding for its personal finance bot
|
Fitz Tepper
| 2,016
| 5
| 23
|
Penny, a personal finance bot, has raised $1.2 million in seed funding. As a refresher, the app provides a chat-based interface that offers advice tailored to your personal finances. This advice includes things like how much you spend on food each week, how this month’s spending compares to last month’s and even income graphs. One unique thing about Penny is that the app only lets you send pre-populated messages, not natural language requests. While we originally touted this as one of Penny’s downsides, the recent explosion of chat-based bots (most of which are terrible at parsing a user’s natural language requests) now helps explain why pre-populated messages may actually be beneficial to the user. The startup explains that it implemented this feature so the user isn’t forced “to figure out not only what they want to ask, but also how to ask it in a way the bot understands.” Essentially, the pre-populated options make it easier for a user to just talk to the bot, and not worry about what to say and how to say it. Since we looked at Penny last year, the startup has also rolled out new features to help people change their financial behavior. For example, the app now will help users decide if their gym membership is worth it, or compare their Amazon spending with the average person’s. As we enter the age of bots, it’s becoming clear that a messaging interface just isn’t the best way to do everything. Shopping is an example — many startups have pushed shopping bots, but until the technology progresses, they offer a clunkier experience than shopping through a traditional e-commerce site. On the other hand, finance could be an area where bots shine. No one likes to log into their bank’s account portal, and even when they do it’s difficult to parse and understand the itemized statements offered each month. A bot can interpret this information and simplify it for the user, which hopefully can lead to the average consumer being better at understanding their financial situation. 
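The pre-populated-message trade-off described above means the bot only ever receives intents it offered itself, so no natural-language parsing is needed at all. A toy sketch of that pattern follows; the prompts and replies are invented for illustration, not Penny's actual content.

```python
# Quick-reply bot: the user taps one of the offered prompts, so every
# input is a known intent and no language parsing is required.
REPLIES = {
    "How much did I spend on food this week?": "You spent $82 on food this week.",
    "How does this month compare to last?": "You've spent 12% less than last month.",
    "Show my income graph.": "[income graph]",
}

def prompt_options():
    """The pre-populated choices shown to the user."""
    return list(REPLIES)

def respond(choice):
    """Lookup only; unknown input just re-offers the prompts."""
    return REPLIES.get(choice, "Tap one of the suggested questions below.")

print(respond("How much did I spend on food this week?"))
```

Because `respond` is a dictionary lookup rather than an intent classifier, the bot can never misunderstand a request, which is exactly the reliability argument the startup makes for the design.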
Penny currently only lives in its own iOS- and Android-based mobile app. However, Mitchell Lee, co-founder of Penny, noted that they would definitely consider putting their bot on a platform like Facebook Messenger, but not yet — mainly because the startup can’t yet see a clear value proposition of offering Penny on other platforms. This is because the young platforms aren’t yet developed enough to fully support Penny’s key features (secure password entry, pre-populated responses, etc.). The startup will use the funding to hire one or two people to deepen its expertise in machine learning, a critical (but often forgotten) part of developing a bot. It will also let the team continue to experiment with figuring out how Penny will provide value to users, as the app currently doesn’t have any monetization components.
|
Didi Chuxing, China’s largest taxi on-demand firm, denies plan for U.S. IPO in 2017
|
Jon Russell
| 2,016
| 5
| 15
|
Didi Chuxing, the ride-sharing firm that leads Uber in China, has denied a report that it plans to go public in the U.S. next year. Just days after Didi announced Apple’s $1 billion investment in the company, Bloomberg reported that the firm is eyeing an IPO in New York within the next 18 months. Bloomberg’s sources claim Didi, which is raising $3 billion more at a $26 billion valuation right now, hasn’t selected an exchange or banks at this point, and that a listing is subject to how it performs against Uber in China. Didi denied the claim, and added that it has no IPO plans at all, U.S.-based or otherwise. “We currently have no IPO plan, so there’s no point of talking about location or schedule,” a spokesperson said in a statement to TechCrunch. Reliable data for the on-demand ride industry is hard to get hold of, but analysts agree that Didi is ahead of Uber in China by some margin. The company claims to have 14 million drivers and 300 million active users across its range of services, which include private cars, peer-to-peer rides, chauffeuring, bus services and more. Uber has said that China accounts for a number of its biggest cities worldwide, but it isn’t clear how many users or daily rides the service has in the country. There have been rumors of a Didi IPO before. Prior to the merger of China’s two largest Uber rivals — Didi Dache-Kuaidi Dache — which created the company, its CEO had spoken of his intent for a public listing, with the U.S. a preferred location. However, much has happened since the merger in early 2015: rather than fighting each other, the two companies have taken on Uber, whose Chinese unit is valued at $7 billion. There’s certainly a track record of U.S. IPOs from top tech companies in China — Alibaba, one of Didi’s most prominent shareholders, sits on the NYSE — but a dual U.S.-China listing would appear to make more sense for Didi when the time comes. 
Didi has played up its homegrown status in China since it represents a clear distinction from Uber and could help curry favor in the regulatory battles that are inevitable in the ride-sharing business. It would seem counterproductive for it to then list only in China, given the Chinese government’s efforts to promote local tech companies. Indeed, that was one rationale put forward for Apple’s investment in Didi, which is certainly out of character for the U.S. company based on previous deals. Nationalism aside, a 2017 listing would see Didi beat Uber to going public — but that might not necessarily be a race worth winning. The market for tech IPOs has slowed down massively in the U.S. With Uber the de facto global leader (and pioneer) of the ride-sharing industry, it remains to be seen whether listing first could have a negative impact on Didi’s IPO. As The Information’s Amir Efrati pointed out, much of Didi’s value may be derived from its position compared to Uber. Letting the U.S. firm go first might educate the market, set expectations and provide a base for a stronger Didi listing. The Didi IPO talk seems premature. Would seem to give leverage to Uber, as Uber will determine Didi's financials for some time. — Amir Efrati (@amir)
|
Amazon will reportedly soon sell its own private-label groceries
|
Fitz Tepper
| 2,016
| 5
| 15
|
Amazon will soon roll out its own private-label brands of common household items like coffee, diapers, and other perishable groceries, according to a report. The line will consist of perishable goods like baby food, tea, coffee and spices, and non-perishables like laundry detergent. These products will live under brand names like Happy Belly, Wickedly Prime, and Mama Bear. The launch is rumored to come as soon as later this month, and the products will only be available for purchase by Amazon Prime members. If the quality and price of the goods are competitive, these products could act as another reason for shoppers to sign up for the $99 per year membership. The e-commerce giant already sells some private-label goods under its line, which is mainly composed of consumer electronics devices like USB cords and disposable batteries. The company also has multiple in-house clothing brands, further signaling that it is willing to diversify its private-label offerings. But private-label merchandising is hard. In 2014 Amazon pulled its Element brand diapers due to a design flaw. With edible goods the stakes are only higher, and one slip-up could tarnish the reputation of the e-commerce giant’s future private-label offerings. But, if Amazon is able to convince customers to switch away from their preferred brands in favor of Amazon’s private-label alternatives, it has the potential to dramatically increase profit margins on historically low-margin products like groceries.
|
This VR photography demo is like Pokemon Snap for action sports
|
Fitz Tepper
| 2,016
| 5
| 15
|
Who remembers Pokemon Snap? The game was released in 1999 for the N64 as a rail “shooter” and let you take pictures of Pokemon while riding through different courses. But while your path through each level was predetermined, the photographs were not: users had free control of the camera and absolute discretion over what they could photograph and which type of shot to use. Now, 17 years later, one VR game developer has brought this idea into the 21st century by making a photography demo for the HTC Vive. Made by Chicago-based game developer Robomoto, the aptly-titled “ ” puts you in the shoes of an action sports photographer stationed at the top of a halfpipe, while a skateboarder skates from side to side. While the default view just makes it feel like you are standing there in VR, you have the ability to “raise” the camera and enter a viewfinder mode, which lets you snap shots of the action. Interestingly, the VR environment goes from 3D to 2D when a user raises the viewfinder, something that the developers said took a great deal of fine tuning and work. The developer says that the demo is part of a broader goal of using VR to “relive memories” instead of just playing games. It’s not hard to imagine the simulation genre becoming a mainstay of VR gaming, whether it is photography games, flight simulators, or anything else that lets you experience what it’s like to do someone else’s job that is much cooler than yours. Currently, the experience only exists as an internal prototype, but the team said they are looking at various options to release it to VR audiences. The demo video also teases other real-life VR photography “missions” including paparazzi, wildlife (literally Pokemon Snap), war journalist, and stakeout.
|
Earthquakes and hand grenades
|
Greg Henderson
| 2,016
| 5
| 15
|
When you pull the pin on a hand grenade, you have four seconds from the time you release the spoon — the aluminum lever that holds down the fuse trigger — until it explodes. Four seconds can be a very long time. I speak from the experience of throwing far more grenades than the average, well-trained U.S. soldier. After meeting my platoon in Panama during operation “Just Cause,” I served in a battalion that was at its peak in terms of training and readiness; this is where I became ridiculously familiar with hand grenades. My company commander, among the best of the best at the time, requested, and was given, an entire brigade’s quarterly allowance of ammunition to train at a higher level. Subsequently, we held demonstrations for the division’s leadership that included intense live-fire maneuvers, for which we used everything at our disposal: Bangalore torpedoes, Dragon anti-armor missiles, LAW rockets, 60mm mortars, claymore mines, C-4 improvised demolitions, all available small arms and, of course, lots and lots of hand grenades. Now let me zoom out for a moment. During my service, the definition of a foxhole was a hole in the ground excavated with small entrenching tools and measuring two rifles long, one rifle wide and armpit deep. Unfortunately, no one ever finished digging a foxhole: A defensive position, by definition, is never finished, and should be improved as long as the area is held. After a day or two the basic form of the foxhole was there, and overhead cover, concealment and camouflage ensued. One of the final touches of any foxhole is the grenade sump — a secondary hole dug at the bottom of either end where you would hope to kick an enemy grenade during an attack, then grab your buddy and dive to the other end of the foxhole to survive the explosion. For those readers who have never dug a hole, let alone a foxhole, let me just say that digging is hard work. Rain, fatigue, hunger and excessive heat or cold are often part of the experience. 
The only blessing is that digging doesn’t take much brainpower, which gives you a lot of time to think about other things, like what exactly you’d do during those seconds when that grenade lands next to you. The point of all this is simply to illustrate that, as a society, we dedicate incredible time, resources and effort to things that might happen. We go to war and need to protect an area; the enemy might attack that area; they might throw a grenade that might land in your foxhole. You must be able to kick that grenade into a sump, grab your buddy and survive the explosion. And I support these efforts. Duty. Honor. Country. I have been, am and always will be all in. But human nature is a funny thing. We have no problem as a species spending vast sums for our tribe or country to maintain security against or advantage over opposing tribes. We expect that we can prepare and fight back. And we can. And do. The funny part is how this mindset fails to extend to other threats. For example, we subscribe to a deeply engrained and fatalistic notion that we are powerless against Mother Nature. How does one fight an earthquake? You don’t… but that doesn’t mean you are powerless. As I was saying, four seconds can be a long time. UC Berkeley, Caltech, the Gordon and Betty Moore Foundation and a coalition of research partners have teamed up with the United States Geological Survey to develop an earthquake early-warning system called ShakeAlert. As a beta tester, I can attest that the system works. It doesn’t predict earthquakes, but it does provide early warning. I have the program installed and running on the laptop I am using to write this now. When there is an earthquake in California, I get advance warning. If the epicenter is on the border with Mexico, the system might provide two minutes of warning. If the epicenter is on the section of the San Andreas Fault that runs five miles south of my home, I may get four seconds.
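To make the arithmetic concrete: warning time is roughly the seismic-wave travel time from the epicenter minus detection latency. Here is a minimal sketch in Python, with assumed values; the wave speed and latency below are illustrative, not ShakeAlert's actual parameters.

```python
# Rough model: damaging S-waves travel at a few km/s, so distance from
# the epicenter buys warning time once detection latency is subtracted.
S_WAVE_KM_PER_S = 3.5       # assumed shear-wave speed
DETECTION_LATENCY_S = 4.0   # assumed time to detect, locate and alert

def warning_seconds(epicenter_km: float) -> float:
    """Approximate seconds of warning before strong shaking arrives."""
    travel_time = epicenter_km / S_WAVE_KM_PER_S
    return max(0.0, travel_time - DETECTION_LATENCY_S)

# A quake 30 km away leaves only a few seconds; one 500 km away, minutes.
nearby = warning_seconds(30)
distant = warning_seconds(500)
```

This matches the intuition above: a fault five miles from home yields seconds of warning, while an epicenter on the Mexican border yields minutes.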
Unlike the military scenario of a grenade attack that might happen, it is a statistical certainty that there will be a major earthquake in California. The greatest threat to life is not just the San Andreas Fault, however. In my region, the Hayward Fault is also overdue for a large rupture. There is a 31 percent chance that the Hayward Fault will generate a magnitude 6.7 or larger earthquake in the next 30 years, and a 33 percent chance for the northern section of the San Andreas, according to the most current Uniform California Earthquake Rupture Forecast (UCERF3). There is a greater than 99 percent probability of a magnitude 6.7 or greater earthquake in the next 30 years in the state as a whole. This is infinitely more likely than the scenarios for needing grenade sumps, but we don’t seem willing to address it with the same tenacity. Imagine the earthquake that WILL strike California in your lifetime. You are at home or work, and you are with someone you care about. Your buddy. What would you give for four seconds of warning to ensure that person is not about to be fatally wounded by a falling object or a shattering window? You can have that time. Right now, there is a bill moving through the California state legislature. Senate Bill 438, “Earthquake Safety: Statewide Earthquake Early Warning System Funding,” would provide the means to complete systems that will save lives, protect property and help communities during earthquakes. This bill removes the prohibition set forth in the original “Earthquake Early Warning” bill, SB 153, against using the general fund to support the one-time capital costs. Dr. Richard Allen of UC Berkeley and the ShakeAlert team puts the price tag of the system, which includes station installation and upgrades for the entire West Coast, as well as ongoing operation and maintenance costs, at a one-time build-out cost of $38 million, with an ongoing cost of $16 million per year above current funding levels for the seismic networks.
The numbers for just the California portion of the project are $23 million in one-time costs and $12 million annually. When you consider that the estimated economic impact of a major seismic event on the Hayward Fault is as high as $1.5 trillion (that’s a T!), let alone the cost in terms of lives and livelihoods, it seems a little ridiculous that we don’t have the political will to pass this critical law. $23 million? Really? California, with our world-class economy and the critical resources we provide for our country, is deserving of federal funds for this effort. But until that happens, who should finance this truly worthy effort? How about those who would benefit most? Apple, Google, Facebook… why not step up and fulfill your social contract with those who made you? After all, it is your greatest resource — your people — that you will be protecting. I have said many times during presentations and interviews that the solutions we work on for addressing the constant threat of natural disasters come from a different approach. Fighting Mother Nature is a losing proposition. Stronger buildings help, but they are not the only answer. We need to learn to better build in harmony with our environs, to continually improve our defensive position — and we need to use all the tools at our disposal in that endeavor. This requires a fundamental shift in how our built environment is designed with regard to the forces of natural disasters, and in how we think about and prepare for our capacity to manage catastrophe. Right now, you can text, IM, call, tweet or otherwise reach out to @nameyourleader, @apple, @Facebook, @google, @JerryBrown and @POTUS. Let them know you support ShakeAlert. Let them know you want your four seconds.
|
How expiring patents are ushering in the next generation of 3D printing
|
Filemon Schoffer
| 2,016
| 5
| 15
|
The year 2016 is quickly shaping up to be one of the hottest years on record for 3D printing innovations. Although there is still a lot of hype surrounding 3D printing and whether it may or may not be the next industrial revolution, one thing is for certain: the cost of printing will continue to drop while the quality of 3D prints continues to rise. This development can be traced to advanced 3D printing technologies becoming accessible due to the expiration of key patents on pre-existing industrial printing processes. These expiring patents — many of which were issued just before the turn of the century and are reaching the end of their lifespan — are releasing the monopolistic control over processes that has long been held by the original pioneers of the 3D printing industry. For example, when the Fused Deposition Modeling (FDM) printing process patent expired in 2009, prices for FDM printers dropped from over $10,000 to less than $1,000, and a new crop of consumer-friendly 3D printer manufacturers, like MakerBot and Ultimaker, paved the way for accessible 3D printing. The next generation of additive manufacturing technologies is making its way down from the industrial market to the desktops of consumers and retailers, much like FDM did. Chief among these are three specific 3D printing technologies: liquid-based, powder-based and metal-based printing processes. Considered to be the best existing desktop printing process when it comes to creating highly detailed precision parts, the liquid-based stereolithography (SLA) printing process has been making waves in the mainstream news recently with the announcement of the M1 3D printer from Carbon. Although Carbon has pioneered its own take on the stereolithography process to make it faster — a process called Continuous Liquid Interface Production (CLIP) — it is derived from the process patented by Charles (Chuck) W. Hull in 1986, just before he set up 3D Systems Inc. to commercialize it.
The stereolithography process works by successively “printing” thin layers of an object using an ultraviolet (UV) laser focused on a vat of liquid resin. Regardless of the exact method of production, almost all of the liquid-based technologies we’ve seen recently have been enabled by the expiration of Hull’s patent. One of the most popular names in 3D printing – Formlabs – is a pioneer in bringing SLA 3D printing to the desktop at an accessible price point. However, this new free market hasn’t been without its bumps in the road. 3D Systems sued Formlabs in 2012 for patent infringement after the company launched its wildly successful Kickstarter campaign and went on to raise nearly $3 million for its Form 1 3D printer. In December of 2014, Formlabs settled and now pays an 8% royalty to 3D Systems for every product sold. Despite this setback, the company has gone on to become one of the most successful and well-regarded desktop 3D printer manufacturers in the business. The Selective Laser Sintering (SLS) powder-based printing process was developed by Dr. Carl Deckard and his academic adviser, Dr. Joe Beaman, at the University of Texas at Austin in 1984. Similar to Chuck Hull, Deckard and Beaman went on to start a company with the goal of commercializing their new technology through building and selling SLS 3D printers. Although similar to stereolithography in that they both use a laser to cure a material into a desirable object, the SLS process relies on a powder material to interact with the laser rather than a liquid. As each pass is made with the laser, the powder is “sintered,” or fused, upon each subsequent layer to build up the final form. 3D Systems later acquired the technology from their competitors, but the patent expired in 2014.
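The layer-by-layer nature of both SLA and SLS also explains why print time scales with part height. A back-of-the-envelope sketch follows; the layer height and per-layer time are illustrative assumptions, not any vendor's specs, and CLIP gets its speedup precisely by escaping this per-layer loop.

```python
# Layer arithmetic for a layer-based printer. Heights are integer
# microns to avoid floating-point surprises.
def layer_count(height_um: int, layer_um: int) -> int:
    """Number of layers needed to build a part (ceiling division)."""
    return -(-height_um // layer_um)

def print_time_minutes(height_um: int, layer_um: int, secs_per_layer: float) -> float:
    """Total time if every layer costs a fixed cure/recoat time."""
    return layer_count(height_um, layer_um) * secs_per_layer / 60.0

# A 50 mm tall part at 100-micron layers and 6 s per layer:
layers = layer_count(50_000, 100)
minutes = print_time_minutes(50_000, 100, 6)
```

Halving the layer height doubles both the resolution and the print time, which is the basic trade-off all of these processes navigate.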
Similar to what happened soon after both the FDM and SLA 3D printing patents dried up, this has resulted in a rise of new 3D printer manufacturers aimed at bringing this expensive industrial printing process onto the desktops of a wider range of users. Progress so far has been slower than with SLA, and an SLS counterpart to a Formlabs-like printer has yet to be released. Considered by many to be the “Holy Grail” of additive manufacturing processes, metal 3D printing technologies – or more specifically, Selective Laser Melting (SLM) and Direct Metal Laser Sintering (DMLS) – are already being used to create custom metal parts for a wide variety of manufacturing applications, ranging from custom race car parts to rocket components for launching into the outer atmosphere. While large automakers and Elon Musk are easily able to foot the bill for an industrial-grade machine to use at will, the cost of ownership and maintenance is out of scope for most everybody else. Interestingly, a key patent for selective laser melting, held by Germany’s Fraunhofer Institute for Laser Technology, will be expiring in December of 2016. Just as we’ve seen with liquid- and powder-based technologies, the expiration is expected to bring with it a new group of manufacturers that will drive cost down dramatically. While it’s still too early to tell how this will affect industries in the long run, the impact could be huge, as no other 3D printing process has been able to consistently produce reliable parts that can be used functionally the way metal 3D printing has. With year-over-year growth in consumer 3D printer sales – nearly 200,000 units priced at $5,000 or below were sold in 2015 alone – and the industry itself expecting to grow from approximately $4.1 billion in 2015 to as much as $16.2 billion within the next four years, it’s clear that with the crumbling of monopolies and price disruptions, desktop 3D printing is moving forward beyond the hype.
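The market forecast in that last sentence implies a striking growth rate; a quick sanity check, using nothing more than compound-growth arithmetic:

```python
def implied_cagr(start_billions: float, end_billions: float, years: int) -> float:
    """Compound annual growth rate implied by growing start -> end over `years`."""
    return (end_billions / start_billions) ** (1.0 / years) - 1.0

# $4.1B -> $16.2B in four years works out to roughly 41% growth per year.
rate = implied_cagr(4.1, 16.2, 4)
```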
Needless to say, the dream of industrial quality desktop printing is still very much alive.
|
Why haven’t there been more unicorn mergers?
|
Michael Jones
| 2,016
| 5
| 15
|
Last year was a massive year for M&A, with industry giants like Dell and Aol finally closing deals that were large enough to make 2015 a record year. Predictions mostly held that 2016 would bring another wave of mergers and acquisitions, as they are one of the best ways to continue growing share price after a prolonged bull market. That’s why it’s somewhat surprising that five months into the year we’ve seen relatively few tech mergers, particularly where you would think that struggling unicorns would make attractive targets. Even companies like Yelp, which at one point were quite vocal about seeking buyers in 2016, going so far as to hire Goldman Sachs to line up suitors, decided to stay independent. Here are a few possibilities as to why unicorns aren’t partnering up. Part of the reason we haven’t seen many mergers amongst Silicon Valley’s unicorns is structural: private companies have a harder time pursuing M&A than publicly traded ones. A lot of it boils down to valuation; it’s much harder to determine whether a union between two companies is a marriage of equals when their value is largely subjective — one company might have better margins, but one might be growing faster. With companies staying private longer, it’s getting harder to suss out potential targets. The stock market has also done well for the last couple of years, which has made it harder for larger companies to acquire smaller ones. The Russell 2000, which tracks smaller companies, trades at 18 times its 12-month earnings forecasts, up from 16.5 last year. The Federal Reserve has also raised interest rates, and will likely do so again, making it more expensive to borrow money. One of the biggest acquisitions this year was Alaska Air’s takeover of Virgin America, in which the former is expected to pay about $2 billion. Despite the price tag, the deal was well-received, suggesting that we might see acquirers paying a premium for the right target. This is all another way of saying that potential deals are getting more scrutiny: there’s a higher bar that needs to be met for timing, terms and price.
But the government has also stepped in to block deals that eschew local taxes or represent potential monopolies. There is actually historical precedent for political activity hampering business activity, and acquisitions tend to slow in election years. There’s no shortage of drama on the campaign trail this year, and it’s possible that a lot of decision-makers are waiting until 2017 before making any major changes. The last noteworthy spike in technology M&A came back in 1999, an association that might not be doing anybody any favors. When Time Inc. publicly mulled over buying Yahoo!, many noted that the partnership would have looked very similar to the merger between Aol and Time’s parent, Time Warner, back in 2000, a deal widely regarded as a disaster for both parties. However, the prospects for media and technology companies to engage in meaningful partnerships are a great deal more developed than they were 15 years ago. Despite some dire warnings that 2016 would vanquish the unicorns, it hasn’t really; the amount of capital going into high-growth companies still came in north of $12 billion for the first three months of 2016. Looking at technology companies in 2016 through a 2000 lens is like comparing apples and oranges. Aside from Seamless and GrubHub, which must contend with antitrust scrutiny, there aren’t really very many notable mergers amongst the unicorn cohort. However, if founders can get over their valuation concerns and investors can shake off an understandable but misguided sense of déjà vu, for a lot of reasons it’s not too late for 2016 to turn its act around.
|
DogVacay’s Aaron Hirschhorn on how founders can control the relationship with VCs
|
Harry Stebbings
| 2,016
| 5
| 15
|
Fundraising is a game of competition. VCs compete against other VCs to fund the latest startups. Startups compete with other startups to attain funding from the brand-name VCs. So, to fundraise successfully, “the founder must create a competitive process between VCs,” according to Aaron Hirschhorn, the founder of DogVacay, in our latest interview. Parkinson’s Law states that ‘work expands so as to fill the time available for its completion’. The same applies to fundraising. As a result, it is the role of the founder to be strict in setting and maintaining a deadline by which the fundraising must be complete. This allows the founder and team to plan and set milestones, but also provides impetus for VCs to involve themselves in a round before it closes. Not all money is the same, as is evident from Hirschhorn’s description of the value of having Bill Gurley on the board. Therefore, founders should have a precise and definitive list of the VCs they would like to participate in their round. The number of names on this list varies from founder to founder. Gagan Biyani at Sprig told us, ‘I write down 5 VCs and that is it, those are the ones I want’. Hirschhorn provided a more varied view, suggesting that ‘30 names was the optimal number to go for when fundraising’. In fundraising, momentum is everything. As a result, it is crucial to time your pitch to coincide with milestones that show meaningful growth. For Hirschhorn, this was when DogVacay was doing ‘a couple of hundred thousand dollars a month’. For a SaaS company this could be when you sign a big enterprise client. Whatever it is, VCs are drawn to growth and momentum, so align your raise with those moments. The first time a founder presents to a VC, it will not be the best presentation they make. Not by a long shot. Consequently, never place the VCs that you most want in your round in the first week of pitching. Practice does make perfect, with Hirschhorn suggesting the optimal time for pitch perfection is week 3.
By this time, the founder and team have ironed out any inconsistencies and perfected the narrative of their story. A major pitfall for many founders is inflating where they are in their raise. Do not fall victim to this. Transparency is key in fundraising. As Hirschhorn describes, ‘they all talk to each other; Sand Hill knows everything’. Hence, it is all about clarity: let the VCs know where you are in the process and how you want to move forward from there. Ultimately, the VC industry would not exist without founders creating startups; as Hirschhorn emphasises, “it is up to the founder to control the dialogue.”
|
3 ways startups are fighting for digital and physical security
|
Shannon Farley
| 2,016
| 5
| 15
|
Internet accessibility for all people, of all ages and in all places has unleashed unprecedented resources and opportunities. It has also undermined our digital and physical security. The sacrifice of safety is an unintended consequence of the Internet age. Can the tools that caused this vulnerability be reappropriated to make us safer? There’s a rising number of entrepreneurs who think so. They are building platforms to strengthen our personal safety where we need it most — whether on the street, at school or online. Crowdsourced data may help you avoid speed traps, but it can’t tell you the safest route. A few startups are turning location tracking data into a safety resource by allowing people to transform their personal community into a connected security system. One app serves as a virtual buddy system that allows anyone in your address book to track your path for potential safety risks in sketchy situations, like walking alone at night. Friends or family members can track your route home in real time, and users can instantly send out an alert if an emergency arises. Another mobile safety communication platform provides a similar tech solution, SafeWalk, for virtually walking friends and family home. The platform also supports personal safety by mapping reported information, enabling real-time location sharing for critical situations and broadcasting alerts through international geofencing. Other startups are approaching this issue with a hardware solution, arguing that a real-life panic button is easier to access in dangerous situations than pulling out your phone. While mapping apps rely on your close network of friends and family, reporting often calls for anonymity, which means you need better technology. Survivors of assault and online bullying clamor for tech interventions that protect their identity, expose their assailants and empower the user. Tech nonprofit Callisto is building such a solution for survivors of campus sexual assault. Campus sexual assault remains widespread, but only a small fraction of incidents are ever reported.
This is partly because the in-person reporting process is retraumatizing. Most campus sexual assaults are committed by repeat offenders whom the victims personally know. If repeat offenders are apprehended early, many subsequent assaults can be prevented. Callisto makes the reporting experience less daunting by enabling its users to create a digital, time-stamped record, which is then electronically shared with campus administration. Survivors even have the option to withhold their report unless another student names the same perpetrator, protecting survivor confidentiality, increasing reporting and improving the accuracy of reports. Indian startup SafeCity is preventing assaults with its crowdsourced reporting platform. SafeCity users anonymously report violence, harassment and sexual assault in public places, and the platform aggregates these incidents into risk hot spots. SafeCity, structured as a nonprofit, aims to make its data analytics open for policy makers. With policy interventions as simple as adding street lights to reduce assaults, SafeCity is increasing proactive measures in cities. Other organizations are using technology to intervene and provide support when unsafe situations arise. Crisis Text Line provides a text-based crisis “hotline” targeting teens who prefer text over voice calls. Its communication platform provides real-time responses from trained volunteers to individuals in crisis. When a young person texts saying they have suicidal thoughts or think a friend has a substance abuse problem, a trained representative from Crisis Text Line immediately texts back with advice and the next steps on how to handle the situation. Since launching in August 2013, Crisis Text Line has powered the exchange of millions of messages, making it the largest open crisis data set in the United States. Prevention may actually be the best form of intervention. Thanks to technology from ReThink, there’s a working solution to stop cyberbullying before it happens.
When an adolescent attempts to post a derogatory message on social media, ReThink uses patented context-sensitive filtering technology to determine whether or not it’s offensive and gives the adolescent a second chance to reconsider their decision. ReThink’s technology is backed by research showing that young people change their minds 93 percent of the time when pushed to rethink a decision. The same study showed that when using ReThink, the inclination for an adolescent to post a bullying message dropped 65 percent. The trend of reclaiming technology to secure personal safety is promising. Unlike other interventions, tech is best suited for scale. As adoption grows, we’ll be able to aggregate more data around dangerous places and spaces, making it easier to deploy proactive solutions to keep individuals and communities safe.
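Several of the location-based platforms described above hinge on one primitive: deciding whether a tracked position has left a safe zone. A minimal sketch of such a geofence check using the haversine great-circle distance; the function names and radius are illustrative, not any vendor's API.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def should_alert(safe_center, current, radius_km):
    """Trigger an alert when the tracked position leaves the safe radius."""
    return haversine_km(*safe_center, *current) > radius_km

home = (37.7749, -122.4194)                      # San Francisco
still_home = should_alert(home, home, 1.0)       # False: inside the fence
far_away = should_alert(home, (34.0522, -118.2437), 100.0)  # True: ~560 km
```

A real service would layer debouncing, GPS-accuracy handling and contact notification on top, but the core check is this simple.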
|
How threat intelligence sharing can help deal with cybersecurity challenges
|
Ben Dickson
| 2,016
| 5
| 15
|
In the ever-shifting landscape of cyberthreats and attacks, having access to timely information and intelligence is vital and can make a big difference in protecting organizations and firms against data breaches and security incidents. Threats are constantly evolving, growing smarter and becoming more sophisticated, which effectively makes traditional defense methods and tools significantly less effective in dealing with new threats constantly appearing on the horizon. One solution to this seemingly unsolvable problem is to share threat intelligence in order to raise awareness and sound the alarm about new attacks and data breaches as they happen. This way we can keep major security incidents from recurring and prevent emerging threats from claiming more victims. Threat intelligence sharing has risen in prominence, giving birth to initiatives such as the Cyber Threat Alliance, a conglomeration of security solution vendors and researchers that have joined forces to collectively share information and protect their customers. We’ve also seen government-led efforts, such as the Cybersecurity Information Sharing Act (CISA), which is meant to ease the way for businesses to join the threat information sharing movement. The evolution of cyberthreat intelligence sharing is culminating in the development of platforms and standards that help organizations gather, organize, share and identify sources of threat intelligence. Cyberthreat intelligence is also shortening the useful lives of attacks and is putting a heavier burden on attackers who want to stay in business. There’s still a long way to go, but the inroads made are already showing promising signs. Information gleaned from internal networks and virus definition repositories can serve as sources of threat intelligence, but much more needs to be done to deal with the constant stream of malicious IPs and domains, hacked and hijacked websites, infected files and phishing campaigns that are being spotted on the Internet.
“Today’s cyber threat landscape is polymorphic in nature — constantly changing and making it nearly impossible to detect with traditional security approaches,” says Grayson Milbourne, Security Intelligence Director at cybersecurity firm Webroot. The company’s research has found that 97 percent of 2015’s malware was seen on only a single endpoint, and more than 100,000 new malicious IP addresses are launched every day. “Given the evolution of malicious code and constantly changing environments, it’s critical that security controls adapt quickly and dependably,” Milbourne says, and he underlines the need to stay ahead of current threats and be able to predict future attacks, which can be achieved through the use of a collective threat intelligence ecosystem. Many tech firms are now offering security solutions founded on the cyberthreat intelligence sharing concept. Webroot’s own proprietary intelligence sharing platform, BrightCloud, gleans threat intelligence from endpoints and combines it with input from security vendors to provide valuable real-time insights into threats and greater visibility into the behavior of an attack. The threat intelligence sharing trend has led other leaders in the tech industry to adopt similar initiatives. Last year, IBM announced its own threat intelligence sharing initiative, the X-Force Exchange, a cloud-based platform that extends the tech giant’s decades-old security efforts and allows clients to share their own intelligence in order to accelerate the formation of the networks and relationships needed to fight hackers. “This community-based approach enables security teams to associate and uniquely protect one another from threats in real-time,” Milbourne explains. “As soon as a threat is detected on one endpoint, all other endpoints using the platform are immediately protected through this collective approach to threat intelligence.” Threat intelligence sharing comes with its own caveats and presents a few challenges.
“In many cases,” says Jens Monrad, Consulting System Engineer at cybersecurity firm FireEye, “organizations end up with a lot of data, sometimes just raw, unevaluated data, which ends up adding an extra burden to their security team, increasing the number of events and alerts rather than decreasing it.” Collaboration between industry peers can help improve the relevance and quality of the shared intelligence, because threats and attacks are often targeted at specific sectors such as finance, banking or retail. This way, industry leaders can better understand the threat landscape and gain insights into practices deployed by others in the industry to better safeguard their own organizations. Instances of industry-level threat sharing efforts include the recent intelligence-sharing agreements among nations, which took place in the aftermath of several high-profile attacks. FireEye has implemented this model with its Advanced Threat Intelligence Plus platform, which enables clients to develop threat sharing communities with trusted partners. The cybersecurity firm recently partnered with Visa to develop a joint threat intelligence offering, which focuses on cyberthreats toward Visa and its customers. Business, privacy and legal concerns are also proving to be barricades in efforts to share threat information. As Scott Simkin, Senior Threat Intelligence Manager at Palo Alto Networks, notes, security vendors have previously been loath to share information to avoid losing their competitive edge, private companies fear inadvertently sharing sensitive customer information and government agencies have strict controls on the information they share. Some of these issues can be dealt with through the use of standards such as STIX and TAXII, a set of free, openly available specifications that standardize threat information and help with the automated exchange of indicators of compromise (IOC) and other relevant data without leaking personally identifiable information (PII).
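The kind of standardized exchange described above can be pictured with a simplified, STIX-inspired sketch: one party publishes machine-readable indicators of compromise, and a subscriber ingests them and matches local observations. The field names here are illustrative assumptions, not actual STIX.

```python
import json

def make_indicator(ioc_type: str, value: str, source: str) -> dict:
    """A minimal indicator-of-compromise record (simplified, not real STIX)."""
    return {"type": ioc_type, "value": value, "source": source}

# A sharing platform serializes indicators for subscribers...
feed = [
    make_indicator("ipv4", "203.0.113.7", "partner-a"),
    make_indicator("domain", "evil.example", "partner-b"),
]
published = json.dumps(feed)

# ...and a subscriber ingests the feed and matches local observations.
received = json.loads(published)
observed = "203.0.113.7"  # e.g. a destination IP seen in local traffic
hits = [ind for ind in received if ind["value"] == observed]
```

Note that the records carry only the indicator and its source, which is the point made above about exchanging IOCs without leaking personally identifiable information.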
The CISA legislation has also helped overcome challenges by lifting some of the liabilities firms and organizations would otherwise be exposed to if they shared data about security incidents. As for the business side of things, the sheer number of new threats being identified on a daily basis is slowly convincing vendors that sharing threat intelligence may prove to be the only way they can protect their interests. The evolution of the cyberthreat landscape has reached a point where it is beyond any individual or organization to defend themselves and their interests against the ever-shifting array of threats. “It is only a matter of when they will become victims of cyber attacks — not if,” says Chris Doggett, SVP of Global Sales at Carbonite. This issue can only be addressed through a pooling of efforts that expands beyond the disciplines involved in dealing with cyberthreats, Doggett suggests, which should include “sharing cyber threat intelligence, collaborating to minimize vulnerabilities, gaining consensus on global standards for acceptable conduct in cyberspace, and international cooperation to enforce local laws and international standards.” This is an approach that has recently been put to the test in fighting ransomware, which has been growing at an explosive rate and is causing millions of dollars in damage to victims. A collective effort is being led by government agencies, cybersecurity firms and law enforcement to provide effective protection from ransomware, offer recovery solutions and disarm and apprehend the criminals behind the attacks. On the protection level, tech companies are constantly sharing information about ransomware attacks to better understand how to avoid them and improve the efficacy of security and anti-malware tools.
In tandem, efforts are underway to improve data protection and recovery solutions, such as cloud backups and data integrity tools, and security firms are working on solutions to crack the encryption algorithms of specific types of ransomware and disarm them for good. Security researchers are also collaborating with regional and national law enforcement agencies to track and arrest the cybercriminals involved. An example of such efforts is to apprehend the individuals behind the CoinVault and BitCryptor campaigns. Carbonite is working to develop its own proprietary tools to help track malware attacks and respond to them faster and more effectively. “Based on the data we have gleaned, research, and the information sharing with others in this space,” says Doggett, “we are now in a position to participate actively from a thought leadership perspective and do our part to arm all users and organizations with knowledge and tools which we believe will allow them to avoid becoming victims of ransomware attacks in the future.” Cybercriminals have been sharing knowledge, tools and experience for a long time, which has contributed to their success in staging major data breaches over the past months and years. It’s long past time for the tech community to follow suit and team up to improve general security and mitigate threats to individuals and organizations. Threat intelligence sharing is already helping detect threats in real time and protect users from malicious encounters. It should become an essential aspect of any organization’s security program if we are to deal with the threats of the future.
|
After losing his leg, a Syrian war survivor turns to tech to 3D print prosthetic limbs
|
Farid Y. Farid
| 2,016
| 5
| 15
|
There was dead silence for five minutes after a shell blew off Hasna’s left leg in a small town southwest of Damascus in April 2013. A mathematics student turned volunteer paramedic, Hasna had opened the ambulance door after loading injured rebels from the battlefront in Khan El Shih when Bashar Al Assad’s Syrian military forces shelled the area. “The explosion was so loud that I didn’t even realise that my leg was no longer there. It all happened so quickly,” Hasna told TechCrunch on the sidelines of the tech conference in Berlin. Hasna was captivated by the democratic groundswell that was building in Arab capitals in 2011. He had been going to demonstrations against Al Assad’s regime throughout the year, facing tear gas and bullets. Yet, as the situation worsened into a full-blown civil war that has now claimed over 250,000 lives and displaced over 4 million people, he decided to leave Qatana, his hometown in rural Damascus, in February 2013. Police raided his home searching for him and detained his father instead, who ironically worked as a translator in the defense ministry. He was released weeks later after spending time in the notorious . In Khan El Shih, the town , Hasna worked in a makeshift hospital housed in an underground cooler that was part of a vegetable warehouse. “I saw the stuff of nightmares there. Lots of corpses and severe injuries. We treated everyone, even soldiers from the regime. We didn’t discriminate; we had to save as many lives as we could,” he added. He was in and out of consciousness as his severed leg haemorrhaged badly, and his hospital colleagues immediately sent him along with 13 other fighters to Jordan for treatment. Smuggled in a truck, dodging bullets and checkpoints in the dead of night, Hasna barely survived. “There was no water or food. We just wanted to get to Amman. Two of our group unfortunately died on the way as their injuries were so severe.” 
After treacherously making it across the border, he bounced around from hospital to hospital, undergoing five major operations on his amputated leg. During his rehabilitation he was part of a joint State Department and Red Cross program aimed at training Syrians as prosthetic technicians to develop new limbs for other war survivors. He saw a 3D printer for the first time and was hooked. “It was weird. I knew how a 2D printer worked, but I wanted to find out more about 3D printers and learnt voraciously on the web using open source software,” Hasna pointed out. “You just have to practice and use your own hands.” He began working with a 3D printing startup in Amman called 3D MENA and learnt how to program a microchip in three days, mastering the technology of bionic prosthetics using . Hasna’s attention turned to finding low-cost solutions for prosthetics that were far from perfect in their adjustment for amputees. He developed an for his friend Ahmed Orabi, who lost his sight to sniper fire in Syria, allowing him to sense the depth of objects when walking by feeling vibrations. Hasna also printed a durable rubber part between the ankle and heel that helped stabilise his prosthetic leg, all for around $2. “There’s practical applications using 3D printers for low cost solutions,” he added. , an Egyptian American technology innovator who invited Hasna as part of the at re:publica this year, agrees, telling TechCrunch, “there’s a whole movement in the region now with people realising that the manufacturing process is becoming decentralised and if you want to make something you can now do it yourself.” Despite the success he found in Jordan’s supportive startup community, Hasna was facing hardships after two years living in Amman, as authorities started cracking down on Syrians working in the that has hosted over 1 million asylum seekers. He boldly decided with his injured room-mates in September 2015 to embark on the arduous trip to . 
They flew to Istanbul, then rode down to Izmir, found a smuggler, paid over $1,000 each and embarked at dusk in a packed six-meter inflatable boat for the island of Lesbos in Greece. “Between you and death is a wave,” Hasna mournfully recounted. “It was a trip full of anguish.” He made his way along with his other disabled friends from Greece to Macedonia, then Croatia, Hungary, Austria and finally Germany within a week, all on a prosthetic leg. As he waits for his German residency papers to be accepted, Hasna holds programming sessions for kids in the Berlin camp where he now lives, using Arduino and imparting some of his newly acquired knowledge of digital fabrication in prosthetics. He’s also honing his skills for , an organization he worked with in Jordan, with laser cutters and milling machines. “My world opened up with open source technology. It changed my life for the better and I want to pass that on to the next generation,” he said.
|
South Korea’s government launches its first accelerator program for international startups
|
Catherine Shu
| 2,016
| 5
| 12
|
The Global Startup Campus in Pangyo, South Korea In a bid to make South Korea’s tech industry more diverse, the government has created an accelerator for startups from around the world. Called the K-Startup Grand Challenge, the program is being organized by South Korea’s (MSIP), in partnership with Seoul-based accelerators , , , and . It will . MSIP director and spokesman Dr. Chang-yong Ahn told TechCrunch that this is the first time the government—which has pledged $2 billion per year since 2013 to the local startup ecosystem—has directly supported foreign startups. It wants to encourage more companies to set up business there and plans to make K-Startup Grand Challenge an annual program. “Most innovation is borne from diversity—just look at Silicon Valley,” he said. “At this stage the startup and business ecosystem in Korea lacks a high level of diversity and the K-Startup Grand Challenge is one step towards creating a more diverse business environment in Korea.” Forty startups will be selected to participate in the three-month long program, which begins in September and includes mentoring from 15 leading Korean tech companies, including Samsung Electronics, LG Electronics, Kakao, and Naver. A demo day will take place in December, after which twenty startups will receive $33,000 in funding from the government, with the top four getting an additional $6,000 to $100,000. All companies will work out of the Pangyo Global Startup Campus, which was opened . The program is accepting applications from companies in all industries with growth potential, but it is particularly focused on gaming, finance, bio-tech, software, and information and communication tech. “We would love to see some of the companies incorporate in Korea, leading to the creation of jobs and development of Korea as a regional technology and startup hub,” said Ahn.
|
There’s an online soccer game used to fight gender based violence
|
Claudia Cahalane
| 2,016
| 5
| 15
|
Throughout the game, the player is asked to make positive or negative choices around sexist behaviors via a series of soccer and cultural challenges. Image of girls playing the BREAKAWAY game. Photo courtesy of . BREAKAWAY and Grassroot Soccer believe they can play an important part in shifting the prevalent, harmful gender norms in South Africa among a new generation of young people. The task is urgent and we believe the universal language of soccer could play a vital role.
|
Apple invests $1B in Didi Chuxing, China’s largest ride-hailing app
|
Catherine Shu
| 2,016
| 5
| 12
|
made the bombshell announcement today that it has invested $1 billion in China’s top ride-hailing app. (formerly called Didi Kuaidi) is often described in U.S. media as Uber’s Chinese rival, but it already dominates the market by far. The company holds 87 percent of the country’s private ride-hailing market. In an , Apple CEO Tim Cook said, “We are making the investment for a number of strategic reasons, including a chance to learn more about certain segments of the China market. Of course, we believe it will deliver a strong return for our invested capital as well.” Didi Chuxing told Reuters that this is its single largest round of funding so far. It claims to currently complete more than 11 million rides a day and have over 14 million drivers on its platform. The company’s other major investors include Tencent and Alibaba, two of China’s largest Internet companies, and SoftBank. Meanwhile, , which runs the country’s top search engine and online map services, including apps. According to a , the company was then in the process of finalizing a round for $1 billion at a valuation of $20 billion. A Didi Chuxing representative said Apple’s investment is part of the same round, but declined to confirm the valuation. TechCrunch has also emailed Apple for more information. In a press release, Didi Chuxing founder and CEO Cheng Wei said, “The endorsement from Apple is an enormous encouragement and inspiration for our four-year-old company. Didi will work hard with our drivers, riders and global partners, to make available to every citizen flexible and reliable mobility choices, and help cities solve transportation, environmental and employment challenges.” China is on its way to , but the company has faced a few recent setbacks there. After years of giving it relatively free rein for a U.S. tech company, the Chinese government ordered the closure of the iBooks Store and iTunes Movies . 
Furthermore, while Apple’s sales in China are still growing, it’s at a as the Chinese economy becomes sluggish and the smartphone market in general faces weaker demand. Concerns about prompted activist shareholder Carl Icahn to sell his entire stake in the company earlier this year. Investing in Didi Chuxing allows Apple to grab a foothold in the Chinese tech market that reaches beyond iPhones—and also gives it a new platform for its other technology. For example, if Didi Chuxing uses , that gives Apple another outlet to sell software services in China besides the iPhone, as well as valuable data to tailor apps and maps for Chinese users. Didi Chuxing is also a major potential customer.
|
In the singularity, we’ll still be using the same garbage trucks
|
Colin O’Donnell
| 2,016
| 5
| 12
|
Our cities face a choice: find creative ways to integrate today’s technology into their DNA or have entire systems get left behind, widening the equality gulf for people who rely on them. Urban populations are growing, infrastructure is aging and needs are evolving (access to the Internet is as important as electricity), but the foundations on which cities rest are set in stone. To keep up with the demands of modern urban populations, cities need ways to integrate new solutions into old ways of doing things. Starting from scratch isn’t the only way to build a smart city. We can turn today’s legacy cities into successful smart cities by incorporating technology and innovation in layers, creating a 21st century experience overlaid on a 20th century infrastructure. Upgrades don’t have to come in the form of $100 million fixes. Bit by bit, cities are becoming smart by the application of consumer technology to the urban space. It’s not a question of whether the city will become smart; it already is. The question is authentication across the networks that are already there. The challenge isn’t covering the city in sensors; it’s already bristling with cameras and motion detectors. The challenge is creating the incentives for people to collect and share data in responsible ways. The promise of building a smart city from the ground up — a greenfield utopia that would give technologists and urbanists the opportunity to design future-proof, technologically native systems and infrastructure — is intriguing, but what about legacy cities that have trillions of dollars in fixed infrastructure like streets, light poles, transit systems and garbage trucks? How do these old systems stay relevant as continues to march on? While city foundations don’t change much, we alter, update and leverage the ephemeral and transient layers of cities (things like mobile technology, beacons and smart data applications) to keep the more static layers (street grids, water systems) relevant and user-friendly. 
Technologies built on top of these massive systems have the advantage of serving a built-in, wider audience and tie into daily life in a way that no standalone mobile app ever could. Sixty million visitors on New York subways every year could certainly solve the empty room problem for a transportation startup. Cities need to get with the program. Regulations and planning aimed at 20th century problems are holding back new digital-enabled services — electric bikes, drones, alternative transportation and gig economy job growth. Instead, cities should figure out what they need to do to make these new services safe, how they can support them and, in return, benefit from their success. For example, in LA, they’re to raise funds to help pay for affordable housing. If we don’t get it right, cities risk getting ripped apart with individual layers isolated from one another. If we move too fast without city buy-in, entrepreneurs risk slamming into regulatory walls that force wasted energy and capital on legal battles and political dramas. And if cities move too slow, startups will simply find a way to work around them, leaving transit systems, power grids and highways to moulder. Rather than throwing existing systems to the wayside, by layering on today’s mobile, sensing and data technologies to existing infrastructure, we can keep our infrastructure current. By enhancing function and usability as we go, we can keep these dinosaurs relevant in the digital age. At the center of these upgrades, of course, are privacy, user protection and equity. Any city-related solution will have to put user safety and experience at the forefront, ensuring any data collected is subject to strong access and user controls, and that services and experiences are opt-in while serving the widest audience possible. Both cities and private industry must be on the hook for this. 
Finally, for startups and cities to work together to make all of this happen, there needs to be an understanding of the speed of cities. A city is a complex web of systems with layers that move at radically different paces. On one end is the ephemeral: consumer technology, data and mobile apps that are constantly changing. Thanks to this technology, urban innovation is happening at lightning speed between agencies, with city vendors, amongst new startups taking over entire industries and especially among consumers and small businesses with products like and . Not to mention what’s happening with the social and commercial aspects of cities: Facebook, Instagram, Apple Pay and Android Pay all have a major role to play in making cities more responsive, effective and, yes, smart. Meanwhile, on the other end of the spectrum, traditional city infrastructure moves at a glacial pace. The conduits that carry the broadband lifeblood of New York City were built more than a century ago. Garbage trucks and subway cars have a useful life of , meaning a garbage truck bought in 2016 will be on the streets in 2046 — we will have co-existing with a massive, diesel-breathing, human-driven steel behemoth! The trick is to not get bogged down in the many slow moving layers of the city, but build on top of the legacy infrastructure, taking advantage of the built-in investments and amazing scale. The greatest successes will rest on the efficient mixing of old and new, fast and slow, nimble and titanic. It’s up to technologists and entrepreneurs to learn the dance, and cities to let them in.
|
Accion Systems raises $7.5 million in Series A to accelerate production of miniature space thrusters
|
Emily Calandrelli
| 2,016
| 5
| 12
|
, the company developing miniature space propulsion systems, has raised $7.5 million in Series A funding led by . RRE Ventures, Founder Collective, and Slow Ventures also participated in the round. The company had previously raised $2 million from seed funding and $6.5 million from partnerships with the Department of Defense. While perhaps best known for their early in Nest, this isn’t the first aerospace deal for Shasta Ventures. Commercial drone company , and , a company that plans to launch a constellation of 100 satellites, are both in Shasta Ventures’ investment portfolio. Instead of flying drones or launching satellites, Accion Systems has developed a unique space propulsion system that’s light enough to include on small satellites and, when many are used at once, can be scaled to provide propulsion for larger spacecraft as well. Accion ion thruster / Image courtesy of Accion Systems Founded in 2013, Accion Systems commercialized a miniature electrospray ion engine that had previously undergone years of development and testing in a lab at MIT. Each ion engine is about the size of a penny and generates thrust by accelerating charged particles at very high speeds. A number of these engines could be placed on any given satellite, depending on the size of the satellite and the desired level of thrust. Ion engines have long been an attractive option for spacecraft propulsion. NASA has been using solar electric ion propulsion since the 1950s and recently in an effort to make it more efficient for deep space exploration. Illustration of the Dawn spacecraft with its traditional solar electric ion propulsion system / Image courtesy of NASA But these traditional ion engines are relatively bulky and, like other propulsion alternatives, are impossible to include on smaller satellites. And the number of small commercial satellites is growing. 
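As a rough sketch of the physics involved, using assumed numbers rather than Accion's actual specifications, the thrust of an electrostatic engine follows from two textbook relations: a singly charged ion falling through an accelerating potential V reaches a speed of sqrt(2qV/m), and thrust is mass flow rate times exhaust velocity.

```python
import math

# Illustrative physics only: the voltage, ion mass and flow rate below
# are assumptions for the sketch, not Accion's actual numbers.
ELEMENTARY_CHARGE = 1.602e-19   # coulombs
ION_MASS = 1.8e-25              # kg, a hypothetical heavy molecular ion

def exhaust_velocity(accel_voltage, charge=ELEMENTARY_CHARGE, mass=ION_MASS):
    """Speed gained by a singly charged ion falling through a potential (m/s)."""
    return math.sqrt(2 * charge * accel_voltage / mass)

def thrust(mass_flow_rate, v_exhaust):
    """Thrust = mass flow rate * exhaust velocity (newtons)."""
    return mass_flow_rate * v_exhaust

v_e = exhaust_velocity(accel_voltage=1500)               # assume ~1.5 kV potential
per_engine = thrust(mass_flow_rate=1e-9, v_exhaust=v_e)  # assume ~1 ug/s propellant
total = 36 * per_engine  # tiling many penny-sized emitters scales the thrust
```

With these assumed values each engine produces on the order of tens of micronewtons, which is why many small emitters are ganged together to move a larger spacecraft.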
Cheaper, smaller spacecraft technologies and a growing demand for Earth observation-related applications have contributed to a small satellite and microsatellite market. Those companies want their small satellites to be more capable, which often requires the addition of in-space propulsion. Miniaturized ion engines can allow those companies to include propulsion on their systems for the first time, thereby enabling more capable satellites. Not the least of these capabilities is the extension of a spacecraft’s operational lifetime in orbit before it succumbs to atmospheric drag and burns up on reentry. More time in orbit can mean more derived revenue from each satellite. Accion Systems MAX-1 propulsion system / Image courtesy of Accion Systems Accion Systems is competing with space industry veteran Busek for business in the small satellite market. Natalya Brikner, CEO of Accion Systems, told TechCrunch that while Busek has nearly 30 years of experience over Accion Systems, her company’s patented design allows it to have a longer operational lifetime than Busek’s equivalent electrospray product. Brikner told TechCrunch back in February that they were to take their first orders. To date, the company has signed with three partners: one government and two commercial customers. “These first orders will be delivered in 2017 and we’re currently signing a limited number of additional partners for this first batch of deliveries.” Natalya Brikner, CEO Accion Systems With the additional funding, Accion Systems plans to transition into full production mode. “We’re at an inflection point in the life of the company. We’re transitioning from purely engineering and product development to production. It’s time to grow the team, build a strong operational foundation, and start delivering our first orders.” Natalya Brikner, CEO Accion Systems Miniaturized space propulsion for small satellites holds a lot of promise for the growing small satellite market. 
Further flights with commercial partners in the next few years will help determine if this technology can live up to the hype.
|
Four ways African countries can ensure digital innovation benefits the entire population
|
Claver Gatete
| 2,016
| 5
| 12
|
When Rwanda’s socioeconomic turnaround is discussed, the country’s disciplined approach to economic growth and commitment to avoiding the pitfalls of corruption are usually highlighted. However, while these components certainly played a significant role, another key factor in Rwanda’s transformation is often overlooked. Following the 1994 genocide, the country’s leadership truly embraced the responsibility that came with their new positions, but were also humble enough to recognize that any process that focused too heavily on a handful of leaders would inevitably fail. Instead, they pioneered a development philosophy called inclusive development that centers on the basic recognition that every citizen of Rwanda is a partner with the government in driving and achieving our goals. Unless we collaborated directly with the people, the benefits of our work would never be maximized. As technological innovation and startup success spurs economic growth around the world, there is a growing concern that this new wealth generation is benefitting only a limited few, instead of sparking a global revolution that removes boundaries, promotes equality and reduces the wage gap. Yet, this does not need to be the reality, and Africa is uniquely positioned to apply the following lessons to ensure that technological progress doesn’t just create a new class of wealthy individuals, but drives socioeconomic progress for the wider population. One of the biggest concerns in the technology industry is the limited role that women have played in core development positions, which in turn has an impact on the disappointingly low number of women in leadership roles at major tech companies. But for any society that wishes to reap the benefits of digital innovation, promoting gender equality is an absolute necessity. 
Beyond the critical value of having diverse opinions emanating from leadership positions, enabling the entire population to develop as future innovators allows a society to tap into the full potential of its citizens and benefit everyone. In many African countries that are only now cultivating flourishing technology industries, there is a unique opportunity to right this critical wrong before it even becomes an issue. In Rwanda, for example, there is a near 50 percent rate of women enrolled in computer engineering courses in higher education and programs are being run throughout the country to introduce young girls to engineering at younger ages, thereby fostering a tech ecosystem that will promote equality. The more diverse the pool of potential technological leaders, the more likely we will be able to identify the solutions that can change the world. It isn’t just an ethical necessity, but an economic one as well. For every successful startup, there are a handful that never achieve their potential due to a lack of opportunity. While it is important that entrepreneurs work hard to find ways to succeed, it is equally important that governments identify and fill certain gaps where they can be of assistance. Even now, when there are many external investors intrigued by the growth of the African economy and looking to get involved, there are still gaps that can best be filled by the unique resources and reach of government institutions. Whether it be funding, opportunities to pilot new technologies or other public/private partnerships, governments have a powerful ability to serve an important function in the flourishing of an innovative ecosystem, especially while external investors gain confidence in it. Cross-pollination between the private and public sectors can have huge benefits, and the more governments can actively facilitate these relationships, the better. 
The battle to create more efficient bureaucracies in Africa is ongoing, and while there have been tremendous steps taken across the continent to follow the lead of countries that have made it a priority, this is just the first step. Ensuring that the wider population benefits from the progress achieved through digital innovation is directly connected to how much of this we, in turn, invest back directly into the people. Rwanda’s “disadvantage” of being a landlocked country taught us very early on that our most important resource is our people. Whether a country benefits from revenues from oil, mining or any other natural resource, the key to long-term growth is enhancing the capabilities of its human capital. Education must be the focus — and not just in the classic sense of primary, secondary and higher education facilities, but also in vocational training and hands-on skill-building for society’s youth. The more people are empowered to take control of their own financial destiny, the more they will be able to lead changes in their community and take ownership over the wider development of their countries. While it isn’t exactly an earth-shattering proclamation that infrastructure is key to development, there is a specific application in Africa that must remain at the center. We have the opportunity to learn from the challenges and mistakes of other regions and avoid the long-term negative implications of certain decisions. Sustainability in our infrastructure development and a focus on the next generation of technologies will dictate the ability to drive success. The growth of the global technology industry gives every country the chance to benefit entire populations through digital innovation, but only if governments can offer support and enable citizens to take advantage. Africa’s qualities as a young, developing and motivated region will do well to ensure that the age of digital innovation will bring success to the wider population.
|
Tactical Technology educates women’s rights advocates on online safety
|
Claudia Cahalane
| 2,016
| 5
| 12
| |
A folding robot made of pig parts that removes batteries from stomachs with magnets
|
Brian Heater
| 2,016
| 5
| 12
|
Researchers from MIT, the University of Sheffield and the Tokyo Institute of Technology joined forces for a project that reads like something out of a William Burroughs novel. Crafted from dried pig intestines, is designed to hatch from inside a swallowed capsule and unfold like an accordion. Once inside the swallower’s stomach, the little meat ‘bot moves around with a “stick-slip” motion, utilizing the friction of its surroundings to propel itself forward, while steering with magnetic fields. Those magnets serve a dual function — they also go to work picking up small batteries swallowed by the ingester. Apparently it’s a more widespread problem than you might think. According to MIT’s numbers, 3,500 watch batteries are reported as swallowed in the U.S. each year. Some of them are gotten rid of the old-fashioned way (you know, poopin’), but sometimes they burn the stomach or esophagus tissue while in there. So researchers figured this would be as good a use as any for the folding robot they’d been working on. There are more pig parts involved in this story, incidentally. Once researcher Shuhei Miyashita determined this was a solid application for the robot, he went out and bought a ham and stuck a battery inside. Here’s fellow researcher, Daniela Rus: “Within half an hour, the battery was fully submerged in the ham. So that made me realize that, yes, this is important. If you have a battery in your body, you really want it out as soon as possible.” The team went back to the giving pig when it came time to design a fake stomach for testing, using a pork stomach to determine the mechanical properties of the digestive system. Ultimately, however, they built the model out of silicone, adding in water and lemon juice to simulate stomach acids.
|
The technology-driven transformation of wealth management
|
Skip Maner
| 2,016
| 5
| 12
|
In today’s digital world, everyday functions such as making dinner reservations or hailing a taxi are done at the touch of a button. Similarly, other, more complex and esoteric functions, such as wealth management, are also moving toward automation. This evolution is enabled by the creation of state-of-the-art software, which has helped make wealth management more consumer-friendly, affordable and accessible than ever before. While full automation may be appropriate for investors in the early stages of wealth accumulation (<$500,000 in household wealth), a wealth management model that combines best-of-breed automation with a human advisory element may be a better course for those with greater levels of wealth (>$500,000 in household wealth). Robo-advisors represent the “Uberization” of wealth management, offering a pure, automated approach. Providers such as and offer super-streamlined investment platforms where the robo-advisor gathers feedback on an investor’s risk tolerance, objectives, time horizon and other background demographics and uses algorithms to help determine a suitable investment strategy. These solutions are perfect for the lower “strata” of the wealth world, whose estate planning and tax needs are less complex, for example. While robo-advisors are increasing in popularity because of their lower fees and ease of use, there are important services where the current offering from robo-advisors falls short. Just as millennials won’t hesitate to take an Uber or use Airbnb while traveling, a wealthy tycoon will continue to opt for the limo and owned beach home. The middle strata of the wealth management world is, as expected, somewhere in between. They need a low-cost, tax-efficient way to invest, but also want to know that there is a human making the ultimate call on their wealth management decisions. Take personalization, for example. 
An investor cannot set up a meeting with a robo-advisor to review their account, adjust their investment strategy as their situation changes, ask questions or share concerns. Robo-advisors are not geared for this level of personal interaction. As a result, they are unable to take into consideration an investor’s personal goals, such as the lifestyle they want to live in retirement, or factor in other aspects of wealth management, such as life insurance or tax planning. While this may not be an issue for millennials who are just getting started, it’s a significant concern for the mass affluent baby boomers who have accumulated anywhere from $250,000-$2 million in savings and have complex financial situations. While technology can be an important enabler, human advice is a necessary component of personalized wealth management. Several companies are therefore using software to help the advisor be more efficient; think providing the “Moneyball” approach to the advisor channel. More recently, goals-based wealth management is gaining in popularity as it is deemed to be a superior way of connecting a client’s goals with their investment strategy. This approach goes deeper than applying algorithms to trade and re-balance an investment model. True goals-based software requires greater sophistication, as it must be able to simulate infinite life scenarios, analyzing goal values and investment outcomes. Goals-based engagement adds a personal touch, and the underlying software empowers the advisor to help navigate their clients through myriad life events and market outcomes. By bringing together a technology-driven model and a human-advice element, wealth managers are able to offer a more holistic approach to wealth management. 
For example, an investor’s life goal might be to retire in Florida in a condo along the Intracoastal, but also have enough money to travel and help put their grandchildren through college — all of this while taking advantage of every possible tax benefit. To ensure the investor will be able to achieve their goals, the unique algorithms of the technology-driven component are necessary to process the many complex calculations, while the personal touch of a trusted advisor is needed to help guide the investor and offer firsthand support when either the capital markets or life events inevitably require a shift in strategy. One firm specializing in a goals-based wealth management approach is Wealthcare, founded in 1999 and widely credited as having pioneered this style of investment advice. Using a patented process, the firm is able to provide a clear picture of an investor’s financial situation at any point along their financial journey. Additionally, the process automatically signals to the advisor when it’s time to update the plan or make mid-course corrections. Wealthcare’s simulation technology is able to analyze thousands of randomized market outcomes, testing best- and worst-case scenarios and reporting back an accurate confidence score to the client. At any single point in time, the client knows where they stand as it relates to accomplishing their life goals. When a change in strategy is required, a goals-based advisor can then sit down with their client and recommend steps to be taken to keep their client’s plan on track. While this goals-based model has been around for quite some time, industry adoption has only recently begun to gather momentum. However, faster adoption may be at hand thanks to a new rule being proposed by the Department of Labor (DOL), which will demand a powerful technology-driven platform geared toward cost-effectiveness and compliance adherence. 
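The randomized-outcome simulation described above lends itself to a simple sketch. This is a toy Monte Carlo confidence score, not Wealthcare's patented process; the return model, account figures and goal are all invented for illustration.

```python
import random

# Toy Monte Carlo confidence score in the spirit described above: simulate
# many randomized market paths and report the fraction in which the plan
# still funds the goal. All figures and the return model are invented.
def confidence(balance, annual_saving, years, goal, trials=10_000, seed=42):
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        wealth = balance
        for _ in range(years):
            # invented return model: ~6% mean, 15% volatility per year
            wealth = wealth * (1 + rng.gauss(0.06, 0.15)) + annual_saving
        if wealth >= goal:
            successes += 1
    return successes / trials

score = confidence(balance=500_000, annual_saving=20_000,
                   years=20, goal=1_500_000)
print(f"Plan confidence: {score:.0%}")
```

A real platform would layer taxes, spending goals and mid-course corrections on top of this basic resampling idea.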
Under the new DOL regulation, advisors will be held to a fiduciary standard, thus dramatically changing the rules of client engagement. As currently written, the proposed rule states, “any individual receiving compensation for providing advice that is individualized or specifically directed to a particular plan sponsor (e.g. an employer with a retirement plan), plan participant, or IRA owner for consideration in making a retirement investment decision is a fiduciary.” So, how exactly will this impact the industry, and, specifically, technology-driven wealth management? If the rule takes effect, there will be a new fiduciary standard for advisors to ensure that investments made within an investor’s retirement account align with their retirement goals. This could have a major impact on broker-dealers and advisors. The new fiduciary standard goes far beyond the current “suitability standard” by which broker-dealers are governed. A suitability standard simply states that the investment professional must consider the client’s risk tolerance, objectives and time horizons in recommending an investment, but allows the investment professional to also recommend investments that may pay more lucrative commissions to them. Suitability is a very fuzzy standard, as most clients don’t really know their risk threshold (until after it’s too late), and it does not address transparency or fee disclosures. The fiduciary standard, in contrast, requires full transparency, and mandates that advisors or brokers act in the best interest of the client. Helping clients establish, monitor and achieve their life goals for a fair and fully disclosed fee is as fiduciary-compliant as it gets. Moreover, with a goals-based approach in place, the underlying software provides an audit trail for compliance, as well as a compass to ensure an investor’s wealth continues to be managed appropriately and to the benefit of the client. 
Software automation will provide the advisor with auditability and traceability, and will reduce the overall potential liability if the investment strategy matches the client’s plan. In summary, technology-driven wealth management will continue to reshape the industry, whether through millennials tapping robo-advisors, mass affluent baby boomers seeking a more personal touch or the coming changes in industry rules and regulations. We are still in the early innings of a major industry transformation with technology and regulatory changes acting as dual driving forces. It’s going to be very interesting to watch it all play out. At the end of the day, investors are going to be the biggest winners.
|
Researchers are using Land Cruisers in the Outback as a wireless network
|
Stefan Etienne
| 2,016
| 5
| 12
|
Communication in the harsh, uninhabited and undeveloped environment of the Australian Outback is a huge challenge. That’s why Paul Gardner-Stephen, a senior lecturer at Flinders University in Adelaide, has partnered with Toyota and the communications and advertising agency . The three partners are building a literal mobile network, using fleets of Toyota Land Cruisers equipped with Wi-Fi devices attached to their windshields. The devices provide an effective signal range of 25 km (15.5 miles). Using Wi-Fi, UHF and Delay Tolerant Networking (DTN), the devices can send emergency messages or geo-tagged info between vehicles in the Outback, which are then relayed to the rest of the world. It’s a potentially handy tool in the event of a natural disaster, or for keeping in touch with crews that are out of range and need a Wi-Fi network for assistance. Currently, a fleet of 10 SUVs is being used in the Flinders Ranges of South Australia, where the tech will not only be stress-tested but refined for use in more areas, including some possible commercial use. [youtube https://www.youtube.com/watch?v=_K9wmGYBqRI] It’s definitely a smart take on addressing a connectivity problem in one of the least-densely populated areas of the world.
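The store-and-forward behavior at the heart of Delay Tolerant Networking can be illustrated with a toy sketch. Everything here — the `Node` class, the range check, the message IDs — is hypothetical and greatly simplified; real DTN stacks (e.g. the Bundle Protocol) add routing, expiry and acknowledgments.

```python
# Hypothetical sketch of DTN store-and-forward ("epidemic") relaying:
# nodes carry message bundles until they meet a peer, then exchange
# anything the peer hasn't seen. A gateway node uplinks to the world.
class Node:
    def __init__(self, name, is_gateway=False):
        self.name = name
        self.is_gateway = is_gateway
        self.bundles = set()      # message ids carried by this node
        self.delivered = set()    # bundles relayed to the wider network

    def send(self, bundle_id):
        self.bundles.add(bundle_id)

    def meet(self, other, max_range_km, distance_km):
        """When two vehicles pass within radio range, sync their bundles."""
        if distance_km > max_range_km:
            return
        merged = self.bundles | other.bundles
        self.bundles = other.bundles = merged
        for node in (self, other):
            if node.is_gateway:
                node.delivered |= node.bundles  # uplink to the outside world

# A message hops vehicle-to-vehicle until it reaches a connected gateway.
a, b, gw = Node("cruiser-a"), Node("cruiser-b"), Node("homestead", is_gateway=True)
a.send("SOS-001")
a.meet(b, max_range_km=25, distance_km=12)   # within the ~25 km range
b.meet(gw, max_range_km=25, distance_km=8)
print("SOS-001" in gw.delivered)  # True
```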
|
Slay squad goals, not just tasks, with Lattice
|
Josh Constine
| 2,016
| 5
| 12
|
You know what to do at work today, but do you know why? Goals. Having clear goals is critical to keeping productivity and morale high on any team. Whether it’s a launch date, level of traction, sales quota or hiring objective, teams perform better with well-defined motives. Lattice weaves goals into your workflow. It was built by Y Combinator president Sam Altman’s brother Jack. There’s already plenty of task management software out there, from Asana to Jira to Salesforce to Zendesk. But these focus on the “nitty-gritty what,” not the “why we’re doing this.” Goal management could become another feature of these tools, but investors are betting on Lattice becoming a separate layer that cooperates. Lattice founders Jack Altman and Eric Koslow (from left). Altman was formerly the VP of biz dev at Teespring as it grew from 10 to 500 employees. That rocket ship taught him the importance of company-wide goal-setting and the chaos that can ensue without it. “The number one thing I tell startups to do as they scale is make sure the entire company is clear on the mission and goals,” Sam Altman explains. “This is critical — if companies can get this right, things will move in the right direction, and if they don’t, no amount of management seems to fix it.” Teams try emails, spreadsheets, PowerPoints, meetings and end-of-quarter heckling to keep members on track. Jack thought they deserved dedicated software, so he set off with Eric Koslow to build Lattice. The Lattice web app (mobile app coming) lets customers lay out their org chart, set team- and company-wide goals and have employees and managers fill in progress over time. The focus is on transparency, the idea being that there is less inefficient one-to-one communication if everybody can instantly see what their colleagues are driving toward. 
Investor Keith Rabois explains, “If you ask anyone at a well-run organization what the company’s top goals are, not only can they tell you what they are, but they can tell you exactly how their own work feeds into them. Lattice is designed to help more companies run like this.” Lattice is meant to be used on a weekly basis or whenever teams need a reminder of their mission. The app can build weekly digests of company progress, though there’s also an activity feed for immediate status checks. The question is whether there’s room for Lattice. Big HR systems like Workday, SuccessFactors and BetterWorks already have goal-tracking features, though they’re a bit buried and clumsy. And task management systems seem well-poised to integrate higher-level goals, even if Jack says Lattice will work “in conjunction” with them. It’s hard not to see Lattice facing off against task-focused software like Asana… which is also funded by Jack’s brother Sam… and lets you track co-workers and collaborate with them… and was built by two guys inspired by management troubles at a big company they helped grow (Facebook). How many ~$5 per seat progress-monitoring SaaS subscriptions will companies pay for? [Disclosure: Asana’s founders are friends of mine.] The true enemy, though, is the status quo: email. A constantly churning feed is not the way to stay focused on overarching goals. Lattice will have to coax people into two uncomfortable activities: changing behavior and tracking their progress — or lack thereof. It will take some clever design to stay sticky, but the benefit is clear. Before you strain yourself hiking, you need a compass.
|
Disney Research uses RFID tags to create powerless, low-cost interactive controllers
|
Brian Heater
| 2,016
| 5
| 12
|
The team at Disney Research is up to its fun old tricks, this time finding some new uses for off-the-shelf RFID tags. Disney’s laboratory wing has developed a system that makes it possible to use the tags to turn cheap objects into simple wireless interactive controls that don’t require battery power. The RapID (pronounced “rapid”) system could lead to all manner of inexpensively produced interactive toys. Such functionality could also be incorporated into smart books with relatively little expense. The system unlocks some interesting potential uses for passive RFID tags, which rely on the power of an external reader, a design that ordinarily results in high latency and inaccurate tracking. The RapID framework reduces the lag time from two seconds to a far more workable 200 milliseconds. According to Disney, “Our approach couples a probabilistic filtering layer with a monte-carlo-sampling-based interaction layer, preserving uncertainty in tag reads until they can be resolved in the context of interactions. This allows designers’ code to reason about inputs at a high level.” The system might not have the same effectiveness with more complex gaming scenarios, but certainly seems to do the trick with the tic-tac-toe and Pong games demoed in the video.
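The quoted idea of preserving uncertainty in tag reads can be sketched with a toy Bayesian filter. The read-rate numbers are invented, and this single-tag simplification stands in for the Monte-Carlo-sampling layer the paper describes.

```python
# Hypothetical sketch of probabilistic filtering over noisy RFID reads:
# keep a belief about whether a tag is covered (touched) instead of
# trusting any single poll. A covered tag responds rarely; an uncovered
# one responds often. Each antenna poll updates the belief via Bayes' rule.
P_READ_IF_FREE = 0.9      # assumed read rate for an uncovered tag
P_READ_IF_TOUCHED = 0.2   # assumed read rate for a covered tag

def update(belief_touched, tag_responded):
    """One Bayesian update of P(tag is touched) after a reader poll."""
    p_r_touched = P_READ_IF_TOUCHED if tag_responded else 1 - P_READ_IF_TOUCHED
    p_r_free = P_READ_IF_FREE if tag_responded else 1 - P_READ_IF_FREE
    evidence = belief_touched * p_r_touched + (1 - belief_touched) * p_r_free
    return belief_touched * p_r_touched / evidence

belief = 0.5                                          # no idea yet
for responded in [False, False, True, False, False]:  # mostly missed reads
    belief = update(belief, responded)
print(round(belief, 2))  # high: repeated missed reads imply a touch
```

The interaction code can then act only once the belief crosses a confidence threshold, which is how uncertainty gets "resolved in the context of interactions."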
|
Astronomers announce largest batch of new planets ever discovered
|
Emily Calandrelli
| 2,016
| 5
| 12
|
NASA has announced that astronomers working with the Kepler space telescope have verified 1,284 new planets — more than doubling the number of previously confirmed planets from Kepler. In fact, this new batch was the single largest group of new validated planets to date. Astronomers used statistical analysis on a catalog of 4,302 planet candidates gathered by the Kepler space telescope in July 2015. Out of those candidates, astronomers found that 1,284 were more than 99 percent likely to meet the qualification of a “planet.” This means that, upon further analysis, it may be found that a few of those newly confirmed planets may end up not being planets after all. Even so, it’s an excitingly large number of mostly, probably planets to be confirmed all at once. Timeline of exoplanet discovery / Chart courtesy of NASA With further analysis, even more planets may be confirmed from Kepler’s July 2015 planet catalog. NASA stated there were 1,327 candidates that are “more likely than not to be actual planets,” but they didn’t meet the 99 percent threshold and require additional study. The rest of the candidates in the 4,302-candidate batch are “more likely to be some other astrophysical phenomena” or were already verified as planets by other techniques. Among the 1,284 confirmed planets, there are 550 that have the right size to potentially be rocky worlds like the Earth. Now this is where the results get pretty exciting. Nine of the new potentially rocky worlds orbit within their star’s “habitable zone,” which is the zone in which a planet could have the right surface temperature to harbor liquid water — a necessary ingredient for life as we know it. Habitable Zone, also known as the Goldilocks Zone / Illustration courtesy of NASA With the addition of these nine special planets, we have 21 total confirmed exoplanets (planets outside of our solar system) that are likely to be rocky worlds and live in the habitable zone of their solar system. 
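The validation step is, at its core, thresholding on a per-candidate probability. A toy sketch with made-up candidate names and probabilities (only the cutoffs mirror the article):

```python
# Toy illustration of the validation thresholds described above: candidates
# above 99 percent are counted as validated planets, those merely "more
# likely than not" (above 50 percent) need further study, and the rest are
# set aside. The ids and probabilities are invented.
candidates = {"K-001": 0.999, "K-002": 0.992, "K-003": 0.97, "K-004": 0.40}

validated = [k for k, p in candidates.items() if p > 0.99]      # "planet"
likely = [k for k, p in candidates.items() if 0.5 < p <= 0.99]  # needs study

print(validated, likely)  # ['K-001', 'K-002'] ['K-003']
```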
Illustration of the Kepler space telescope / Image courtesy of NASA The scientific community is particularly excited about this “habitable” category of exoplanets because they’re potentially the best locations to search for life. Before Kepler, which launched in March 2009, astronomers didn’t know how common planets were in the universe, let alone Earth-like planets. “Before the Kepler space telescope launched, we did not know whether exoplanets were rare or common in the galaxy. Thanks to Kepler and the research community, we now know there could be more planets than stars,” said Paul Hertz, Astrophysics Division director at NASA Headquarters. The discovery of new worlds in our universe is tied closely to the search for extraterrestrial life. Now that astronomers have confirmed that exoplanets are likely abundant throughout the universe, the potential for alien life seems even more probable. With new space telescopes coming online in the next couple of years, astronomers will have better tools and data to continue the search for habitable Earth-like planets and extraterrestrial life. Kepler monitored 150,000 stars in a single patch of sky, searching for Earth-size or smaller planets. One of the goals of the Kepler mission was simply to determine how common it was for stars to have rocky planets like the Earth orbiting them. NASA’s next big exoplanet-hunting telescope, the , which is scheduled to launch in 2018, will build on knowledge gained from Kepler and search for Earth-like planets around 200,000 stars that are even closer to Earth. And later that same year, when the is launched, astronomers will be able to study the atmospheres of habitable exoplanets and search for chemical signatures that indicate that life may be living on the surface. Within a few years, the scientific community has gone from postulating about the frequency of planets in the universe to putting the technology and plans in place to search for life on confirmed Earth-like worlds. 
While there’s much more work to be done to truly understand these new planets, it’s an exciting time to study the universe, especially for astronomers, planetary scientists and those hoping to answer the question, “Are we alone?”
|
Clarifying the uses of artificial intelligence in the enterprise
|
Michael Schmidt
| 2,016
| 5
| 12
|
Artificial intelligence. It’s dominating headlines with the promise of self-driving cars and virtual assistants becoming more real every day. But despite all the talk around AI, no one seems to really understand what it is or how companies can use it. Is AI the computer that competed on Jeopardy!? Or Johnny 5 from the movie Short Circuit? Will machines really take our jobs? As data volumes surge and analytic engines become more mature, has technology finally caught up with the hype? Artificial intelligence does in fact encompass elements of all these things, but there’s been increasing market confusion around what it is and how businesses can successfully use it. The research efforts and industry development around AI are exciting, but it’s important to accelerate understanding of the topic and terminology. With this in mind, here’s a primer on artificial intelligence and what it means for business. Many people think of AI as the blending of humans and machines. They’re not far off, but AI is an incredibly broad term — more of an umbrella term, really — that simply means making computers act intelligently. It is one of the major fields of study in computer science and encompasses subfields such as robotics, machine learning, expert systems, general intelligence and natural language processing. Apple’s Siri, Google’s self-driving cars and Facebook’s image recognition software are standard examples of AI. But it’s much broader than that. AI also powers product pricing on Amazon, movie recommendations on Netflix, predictive maintenance for machinery and fraud detection for your credit card. While these applications are all powered very differently and achieve different goals, they all roll up into the umbrella term of artificial intelligence. From a business perspective, companies wouldn’t simply “buy” an AI solution. Rather, they would likely leverage one or more of the subfields of AI and adopt software packages like R, Python, SAS and MATLAB for statistical analysis. 
But new technology is pushing beyond traditional statistics, and machines are acting more intelligently than ever — they’re not just doing the analysis, machines are now finding patterns in data and figuring out how systems “work”… often without any human intervention. Let me stop here for a quick, yet important, PSA — neither artificial intelligence nor machines will replace all of our jobs. This is perhaps the biggest misconception about AI. Everything under the AI umbrella — including machine intelligence and machine learning — is data-driven, but requires human expertise to apply answers and discoveries to solve problems. AI will allow people to do new and interesting things that have never been done before. That’s the fun part, so let’s take a look. Early on, AI was focused on expert systems: establishing if-then rules to mimic human knowledge and decision-making. Expert systems fell out of favor because they didn’t leverage data, didn’t learn from data and didn’t scale. They were entirely limited by the programming and cerebral capacity of the people who created them. Today, machine learning has replaced expert systems. Machine learning refers to the statistical arm of AI. Most of the people or companies leveraging “AI” are referring to machine learning, not doomsday robots. The focus of machine learning is on programming algorithms to learn from data, complete tasks and make predictions with an emphasis on high statistical accuracy. It is not used for the discovery or interpretation of data — this is important to know, and something we’ll cover more in-depth later. Machine learning algorithms can be developed, run and tuned with libraries from software packages like SAS, R and Python. Data scientists and statistical analysts typically work in these programming languages, ultimately applying the resulting algorithms to enterprise applications like sales forecasting, email spam filtering and determining where the next hurricane is likely to develop. 
Because of machine learning’s widespread applications, there exists a plethora of tools that empower its implementation. Yet the “rules” and predictions uncovered by machine learning algorithms are still unable to solve many business problems on their own. Without data scientists, businesses have trouble interpreting the algorithms and developing an understanding of the “why.” If we’re a Fortune 500 retailer, do we care exclusively about predicting sales, or are we equally concerned with understanding that number? What are all the variables and relationships at play, and what can we change to improve that outcome? How much of our revenue can be chalked up to weather patterns or marketing spend? Are we stocking the correct amounts of each product, or should we optimize for impending tastes and trends? If we’re an engine manufacturer, is it useful to merely predict that the assembly line will produce a faulty component 1 percent of the time, or is it more helpful to be able to understand all the knobs and levers that contribute to the failure so we can actually change the future and improve our processes? Welcome to the next phase of AI. Machine intelligence is the newest subfield of AI, focused on learning and interpretation of data. It’s a natural progression of machine learning, but takes it a step further. One of the shortcomings of machine learning is that machines are learning but not conveying to us what they’re finding from the data. To make data valuable, we need to be able to understand it and explain it; only then can we connect the dots and apply it back to the business. This is machine intelligence — the interpretation and understanding of data — and why it’s so important. IBM had one of the first attempts at machine intelligence. Watson, IBM’s Jeopardy-acing robot, uses natural language processing to interact with and process data, by converting speech into a scalable search through the World Wide Web. 
Watson can listen to anyone, interpret the information, then search its database for a response to a question that somewhere, at some time, has already been answered. Where Watson falls short is its inability to infer new ideas or answers. Similarly, other services from Microsoft and Amazon provide platforms for running machine learning algorithms, but do not facilitate interpretation of results. What if we could take this a step further and have machines discover answers that are not yet known? Machine intelligence has the ability to not just uncover answers, but to understand and interpret what it has learned. Machine intelligence is the next exciting progression of artificial intelligence. Whereas machine learning will accurately predict that your electric bill will increase next month, machine intelligence will accurately predict that your electric bill will increase next month — and tell you why: your travel schedule will be light, the weather will be hotter than usual and your air conditioner has been deteriorating. Machine intelligence teaches back to the human the reasons things happen or will happen, arming users with the ability to make quick and justified changes in strategy. Machine intelligence, not “big data,” offers the “actionable answers” businesses need. Unfortunately, some people have been exploiting the ambiguity around artificial intelligence. AI, in and of itself, is a relatively basic, high-level idea that machines can be programmed to act “intelligently.” Machine learning dives into the idea that machines can actually learn without explicitly being programmed. Finally, machine intelligence exhibits the ability to not only learn from data, but to actually clearly articulate answers and discover answers that are not yet known. That is, machine intelligence is the first instance of machines teaching the human and relaying brand new discoveries automatically. This is an exciting time, and we’re only on the cusp of what’s to come in AI. Machine learning has been the panacea for decades. 
Machine intelligence is the next step in the progression, allowing people (and companies) to understand why things happen and what to change to generate their desired outcome. Intelligence has never been sexier.
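The electric-bill contrast above boils down to reading a model's parameters back out, not just its forecast. Here is a minimal, invented-data sketch using one-variable least squares: the fitted slope is the kind of "why" a machine-intelligence layer would surface alongside the prediction.

```python
# Minimal sketch of prediction vs. interpretation. The data is invented.
# Fitting the model gives a forecast (machine learning's job); reading its
# parameters back in plain terms is the "why" the article calls machine
# intelligence.
temps = [70, 75, 80, 85, 90]       # avg monthly temperature (°F), invented
bills = [102, 118, 131, 149, 160]  # electric bill ($), invented

n = len(temps)
mean_t, mean_b = sum(temps) / n, sum(bills) / n
slope = sum((t - mean_t) * (b - mean_b) for t, b in zip(temps, bills)) / \
        sum((t - mean_t) ** 2 for t in temps)
intercept = mean_b - slope * mean_t

predicted = intercept + slope * 95  # prediction for a hotter month
print(f"Forecast bill: ${predicted:.0f}")
print(f"Why: each extra degree adds about ${slope:.2f}")  # ≈ $2.94/degree
```

A fuller system would fit many drivers at once (travel, weather, appliance wear) and attribute the forecast across them, but the predict-then-explain shape is the same.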
|
Walmart begins testing 2-day shipping service to take on Amazon Prime
|
Sarah Perez
| 2,016
| 5
| 12
|
Walmart is making a notable change to ShippingPass, its three-day shipping service and answer to Amazon Prime. The company says this morning it will move from three-day delivery to two-day delivery — which makes the service far more competitive with Amazon’s similarly structured annual shipping program. This is an interesting move on Walmart’s part, given that its original idea with ShippingPass was to compete by offering a delivery service that’s faster than usual, but not quite as speedy as Amazon Prime. However, the advantage was that Walmart would undercut the cost of Prime in exchange for this delay. Walmart’s ShippingPass customers paid half the price of an annual Prime Membership — ShippingPass is $49/year (down from $50/year previously), versus Amazon’s $99/year program — and only had to wait one more day for their delivery. Now, they only have to wait two days — the same as with Prime. While Prime today is a membership program with a range of benefits, including access to free music, books, TV shows and movies, Walmart is focusing only on delivery with its ShippingPass service. There are no other features. But it’s not the lack of benefits that’s keeping the price down, says Walmart. Instead, the company explains that it’s able to offer “faster and more affordable shipping” because it has a unique fulfillment network that includes new, large-scale fulfillment centers, stores, distribution centers and a transportation network. Walmart’s shipping service today offers subscribers access to more than a million items, including those most frequently purchased on Walmart’s website. The website offers a wider selection of around 7 million items. That suggests Walmart may eventually choose to expand the selection for ShippingPass customers in the future. ShippingPass items are delivered to customers’ doors in two days or less. There are also no minimum order requirements to use ShippingPass, and free returns are supported as well. 
Walmart has not yet disclosed any details about ShippingPass, like the number of sign-ups, sales to date or its margins and impact on Walmart.com’s revenue or bottom line. ShippingPass isn’t broadly available at this time, but interested customers can sign up to join a waitlist for the pilot program.
|
Fender cashes in its cool with consumer in-ear monitor headphones
|
Josh Constine
| 2,016
| 5
| 12
|
Brian Heater, TechCrunch’s new hardware editor, gave the high-end Fender FXA6s a spin, and said “They sound really good. They’re accurate to the source material and the levels are clear and even. One of the best set of in-ear headphones I’ve tested in a while. They’re also comfortable and stay in the ear. Custom headphones would fit better, of course, but Fender’s done a good job for a one-size-fits-all pair.” [Update July 21: After a few months of using them, I found the Fender headphones a bit problematic. The wrap-around-the-ear wire hooks made the headphones cumbersome to put on. These hooks also act like grappling hooks, easily getting stuck on stuff in your bag. The elastomer tips pick up dirt easily, and the headphones don’t block out sounds like plane noise very well. In the end, they don’t quite work well as consumer headphones at this price point.] Now the question will be whether Fender can transfer its trust to a new product line, and convince the average consumer that these headphones aren’t just for pros.
|
Virtual reality — from the living room to the classroom
|
David Baszucki
| 2,016
| 5
| 12
|
There is a world of enormous educational potential with video games. Highly acclaimed simulation and tutorial games such as and have been continuously employed in elementary schools across the country, but the most common software used in education today is the web, word processors and spreadsheets. Recent research from consulting firm McKinsey & Company shows that an astounding still “lack the digital instructional resources they need.” With the , we are in the midst of a massive paradigm shift in how educators will perceive and incorporate video gaming experiences. Video games tend to be simulations of the real world, but given the development costs associated with quality 3D games, the range of experiences available to educators has been limited to profitable genres. Quality educational experiences are not available with the depth of content that one might find on a video platform such as YouTube. This is all about to change with VR. There is an educational revolution waiting to happen, and captivating, deeply engaging immersive experiences will be at the heart of it all. One can glean early signs of the educational experiences we will see in VR from user-generated content platforms. These platforms support immersive experiences that go beyond games. Take for example, which allows students to explore the Great Pyramids of Giza, understand basic electrical engineering principles and more, all in an imaginative, three-dimensional environment. On our platform, content creators can build an experience that shows students how birds survive in their ecosystem ( ), or how complex it is to manage a restaurant ( ). And hundreds of academic institutions are using Second Life to give students the opportunity to visit , head to or play in . These UGC experiences are immersive, but confined to the small screen. As we leap to VR, the level of immersion is much greater. 
Experiencing a game in VR brings the game world to life in a way that can be compared to going to Disneyland rather than watching movies about Disneyland. Players are finally “in” the game, surrounded by life-sized characters. Objects in the game are bigger and closer and feel as if they are within reach. A good VR experience can make the participant believe they are actually in the simulation. With this comes deeper emotions. One can feel more scared, more surprised and more connected to the experience. VR allows students to learn by doing, rather than hearing. Students can live in a Martian colony, complete with small spaces, interactive doors and oxygen tanks. Such an experience may reinvigorate a young student’s dream more than a film can. Students can participate in immersive explorations of the Amazon rainforest, allowing budding biologists to empathize with the richness and fragility of the jungle. Junior historians can attend Martin Luther King, Jr.’s “I Have a Dream” speech, surrounded by thousands of people and feeling the scale of the National Mall. Students will more and more immerse themselves in simulations of historical events, explore previously inaccessible places with their friends or visualize abstract concepts without ever having to step foot outside their classroom. Early demonstrations of immersive VR experiences are already venturing into the hard sciences, ranging from anatomy and biology to the social sciences, like history and literature. Titles such as , , and are taking students on mesmerizing excursions into places they may have never dreamed of seeing in real life. These entertaining “virtual field trips” are giving students an opportunity to interact with the subject material first-hand. In addition, simulated classrooms offer the possibility of presenting undergraduates with virtual lessons or quizzes in the comfort of their own home. 
The VR education phenomenon is growing, with programs such as and , as well as with big corporations like Google. In September 2015, Google launched its Expeditions Pioneer Program, which provides teachers with all the equipment needed to take their students on enthralling globetrotting adventures on land, under the sea or even into outer space with the power of virtual reality. A number of reports claim that schools are responding favorably to Expeditions, with some students expressing that implementing VR would be a “ ” or that they would be interested in visiting these virtual places in real life. VR has long been heralded as a potentially influential tool in academia. A 1998 study, , published by Christine Youngblut of the Institute for Defense Analyses, found that when compared to traditional classroom environments, students of different backgrounds and ages, across a variety of different experiments, generally had a positive experience with virtual reality. Moreover, their virtual lessons were evaluated as either superior or on par with human instructors, and motivation among students was “extremely high.” There is a coming new age in education, where students can learn by experience. This will be driven by virtual reality technology, and powered by UGC content. We are just beginning to see huge breakthroughs in the imaginative educational content that developers are creating for kids and young teenagers alike. The VR landscape is evolving — all the way from the living room to the classroom.
|
The Roll helps you find the best photos on your phone
|
Haje Jan Kamps
| 2,016
| 5
| 12
|
Wondering which of the tens of thousands of photos on your phone are worth keeping or sharing may be a thing of the past, thanks to , a brand new app from . It analyzes your camera roll and uses computer vision to tag your images and rank your photos by how good they are. The Roll’s keywording is uncannily good. “The Roll is there to replace your phone’s camera roll,” said Florian Meissner, EyeEm’s CEO, as he showed me the technology in action on his own phone containing tens of thousands of photographs. The Roll is an incredibly impressive piece of tech, solving a problem that Apple has been wrestling with for years. The existing iOS camera roll attempts to group images by date and geolocation, but that solves only part of what eager photographers want. In addition to taking into account the location and date of the photos, The Roll uses computer vision to analyze and tag images, making it easy to find the photo you were looking for. On The Roll, images are tagged and grouped by topics, location and events, with the best shot from each category highlighted. Visually similar photos are stacked, placing the image with the highest aesthetic score on top. In the detailed view, you’ll see the score, automatically added keywords and camera information relevant to camera buffs, such as aperture, shutter speed and ISO information. “When you first launch the app,” Meissner explains, “small thumbnails of the images are uploaded to our server. The images are then analyzed and tagged, and the information is downloaded to the app. Our algorithm knows more than 20,000 keywords, and adds two scores to the image: One is a quality score, and the other is a commercial value score.” With a quality score of 98 percent, this photo I took in Tuscany five years ago is the best photo I have on my phone. Which isn’t a bad call; personally, I prefer the edited version, which has a bit more contrast and saturation and is cropped slightly, but that version received a 92 percent score. 
The quality score is exposed to the end user, but of course this is more than an exercise in just showing you which of your images are good: The app is a play to help users filter and select photographs for them to sell on the rapidly growing . In addition to helping you tag and find images, the app can help you clear out your camera roll by identifying similar photographs, showing you the best ones, and removing inferior near-duplicates. “The goal is to categorize the trillions of images we take today and surface the best content,” said EyeEm co-founder and CTO Ramzi Rizk of the company’s technology, which also can be experienced on its website, in the web uploading tool for its photographic marketplace. “The Roll is bringing this technology to your very own camera roll.” I’ve been playing with The Roll for a while, and the technology really is spookily good. The tagging is comprehensive and accurate; a quick search for “cat” and “blue,” for example, shows all the images containing various combinations of felines and the color blue. A search of “London” and “Sign” correctly showed me an image I remembered from a holiday many years ago. Without The Roll, finding that particular image on my phone would have taken half an hour of scrolling. The quality scoring is uncanny, as well. It isn’t yet perfect, and I don’t always agree with the cloud’s taste in photography, but it is extremely efficient at filtering out images that are of sub-optimal standard; photos that are too dark or suffer from blur are thrown right to the bottom of the list. It is also rather talented at surfacing the best images from a particular event or keyword, which is a blessing for a snap-happy photographer such as myself. The real revolution here is not obvious, but technology like EyeEm Vision and The Roll will change how we use photography; it is no longer necessary to tag images to find them. 
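The stack-and-surface behavior described above — clustering visually similar shots and putting the highest-scoring one on top — can be sketched roughly like this. This is not EyeEm's implementation; the scoring model is proprietary, so here the quality scores and a similarity key are assumed to be given inputs:

```python
from collections import defaultdict

def stack_photos(photos):
    """Group photos by a visual-similarity key and sort each stack
    so the highest quality score ends up on top.

    `photos` is a list of dicts with hypothetical fields: 'id',
    'similar_key' (a stand-in for a visual-similarity cluster) and
    'quality' (a 0-100 score, like The Roll's percentages).
    """
    stacks = defaultdict(list)
    for photo in photos:
        stacks[photo["similar_key"]].append(photo)
    # Best-scoring photo first in every stack.
    return {
        key: sorted(group, key=lambda p: p["quality"], reverse=True)
        for key, group in stacks.items()
    }

photos = [
    {"id": "tuscany_raw", "similar_key": "tuscany", "quality": 98},
    {"id": "tuscany_edit", "similar_key": "tuscany", "quality": 92},
    {"id": "cat_blur", "similar_key": "cat", "quality": 31},
]
stacks = stack_photos(photos)
best = {key: group[0]["id"] for key, group in stacks.items()}
```

The near-duplicate cleanup the app offers would then amount to keeping only the top photo of each stack and flagging the rest for deletion.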
To understand why this is important, look at the way people now use the world wide web: Five years ago, we had a huge, very carefully curated collection of bookmarks. At some point, all of that was replaced by Google — why bother keeping a list of links up-to-date, when all the pages you need are at your fingertips? Similarly, The Roll will give you unprecedented access to, and exploration of, your own photographs. The only piece of the puzzle that’s missing is the desktop. The Roll is borderline magic, and if you have any interest in photography, you should absolutely try it out, but what I want is for this technology to be available on my computer. Tagging and organizing photos is the biggest chore in photography, and The Roll feels like a proof-of-concept teaser for what is to come. EyeEm, if you’re reading this: If you want to set the photography world on fire, release this tech as an Adobe Lightroom plug-in. The iOS version of The Roll is launching today; an Android version is in the pipeline.
|
Google launches Gboard, an iOS keyboard that lets you search without a browser
|
Sarah Perez
| 2,016
| 5
| 12
|
Google this morning announced a new application for iOS devices called Gboard that puts the power of Google search directly into your mobile device’s keyboard. This keyboard had been rumored earlier this year, and it appears the original reports were accurate. Not only does the app allow for an easy way to use Google search, it also offers swipe-based typing and access to GIFs, as previously reported. And it includes easy access to common keyboard functions, like emojis and word predictions. Of course, the most interesting feature of this keyboard is its direct integration with Google’s services. By tapping the included “G” icon, you’re able to immediately search Google without exiting your keyboard and launching a browser or the Google app. This allows you to easily search for things like flight times, news articles, restaurant and business listings, weather and more right from your keyboard, then just tap to paste that information into your chat. (You’ll need to give the app access to your location the first time you launch this feature.) This information is presented to you at the bottom of the screen in a card-style layout, where each listing has its own card. When tapped, the information from the card immediately appears in your conversation, email, notepad or wherever else you may be on your phone at the time. This is pasted as hyperlinked text, so you can do things like pull up the listing in Google Maps, phone a business or perform a web search for the item in question, among other things. However, you can also just tap on the “Paste” button from iOS’s “Edit” menu in order to copy the actual card. While this is not hyperlinked (it’s an image), it presents the information in an attractive format, along with the included details, whether that’s a business’s open hours and address, today’s temperature, flight times or whatever else you may have searched. 
“We wanted to bring the best of Google to Gboard, so you’ll see Maps, Translate, image and video search, News and others,” says Rajan Patel, the head of the product team that developed Gboard. “Initially, Gboard will not surface any information specific to you,” he added, hinting that a personalized keyboard is in the works for the future. In addition, the app supports GIF search. Google partnered with Riffsy to improve predefined GIF categories, but GIF search is powered by Google search and will surface GIFs from a number of sources. To find a relevant GIF, you tap the emoji icon on your keyboard (the smiley face icon). This lets you access common emojis, but a button at the bottom lets you switch over to the GIF search section instead. From here, you’re offered a selection of reaction GIF categories, like “high five,” “thumbs up,” “hair flip,” “mic drop,” “shrug,” etc. You can also search for a GIF using keywords. When you find one you like, you tap it and it automatically copies so you can paste it into the conversation. Google made an interesting improvement to using emojis, too. Instead of having to manually scroll through the various emoji screens, you can search for a term like “dance” or “wine” and the app will return the matching emoji. The new keyboard application also allows you to touch type or swipe, depending on your personal preference. To glide type, you just drag one finger between letters. You don’t even have to hit the spacebar. This makes one-handed typing easier — and that’s a feature that Microsoft has been promoting with its popular . One big drawback to using Google’s keyboard over Apple’s default is that it can’t include a microphone for dictation — Apple doesn’t allow any third-party keyboard to offer this, in fact. That means you can’t use “OK Google” or even Siri or Apple’s own dictation mic, for example. Gboard is an important launch for Google as today’s consumers spend the majority of their time in apps. 
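The keyword-to-emoji search described above is, at its simplest, a reverse index from terms to symbols. A toy sketch — the index entries here are invented for illustration, and Google's actual mapping is obviously far larger:

```python
# Toy reverse index from search terms to emoji (invented mappings,
# not Google's actual index).
EMOJI_INDEX = {
    "dance": ["💃", "🕺"],
    "wine": ["🍷"],
    "dog": ["🐶", "🐕"],
}

def emoji_search(query):
    """Return emoji whose index term matches any word in the query."""
    words = query.lower().split()
    results = []
    for term, emoji in EMOJI_INDEX.items():
        if any(word in term for word in words):
            results.extend(emoji)
    return results
```

A search for "wine" returns the wine glass; an unmatched query simply returns nothing, falling back to the regular emoji picker.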
According to , mobile device users this year will spend 3 hours 15 minutes per day using apps versus just 51 minutes using the browser. Of course, allowing Google to become deeply integrated with your keyboard raises some questions around data retention and privacy. The app allows you to clear your search history and your personal dictionary. Searches on Gboard are sent to Google, as well as anonymous statistics to help Google diagnose problems when the app crashes and to let it know which features are used most often. All other types of data are stored only on your device, says Google. This data is not accessible by Google or by any apps other than Gboard. The new app is available today in English only and is . More languages will arrive in the future, Google notes. Update: Post updated with more information about Google’s data retention policies.
|
The skinny bundle will be neither skinny nor bundled — but it will be great
|
Mike Hoefflinger
| 2,016
| 5
| 12
|
In-home video entertainment is a big deal. globally in 2019 big. About $100 billion in North America big. Triple the revenue of apps and games big. No wonder, then, that it won’t get re-worked as easily or quickly as music did. The TV conglomerates are , forcing any potential providers of so-called “skinny bundles” (just the channels most of us care about delivered over-the-top) to take their channels, not just select ones. McKinsey Global Media Report, October 2015 But the sheer scale of video’s business opportunity is also why Apple’s — for now — on the skinny bundle is not the end of the story. After all, Marshall McLuhan foretold this change in 1962: “The next medium, whatever it is … will include television as its content, not as its environment.” The skinny bundle will happen, but it won’t be very skinny and it won’t feel like a bundle. Say hello to the $99 jukebot. Why so much talk about skinny bundles? For tens of millions in the U.S. alone, our video entertainment is spread across as many as five different apps on at least two different hardware ecosystems and costs us about $120-140 per month: There is no single interface for discovery across all this content. It’s like the web before search. We can significantly improve the usability of all this and wring some efficiencies out of the distributed pricing. We don’t know — and don’t care — which studio made “Superbad,” which record label Drake is on or who published Harry Potter’s eighth book. TV networks and SVOD providers are much the same: We care about the shows we want to watch — and the shows we’d enjoy but aren’t yet watching — not whether they’re on NBC, AMC, HBO or Netflix. Anything that shrinks the distance between us and those shows will be rapidly embraced. 
But, with (double the count of only five years earlier), we have a problem: There’s too much television on television and interfaces like the gridded guides, search or even stores (à la Apple iTunes and Google Play) are not getting it done for discovery any longer. If we bring the content from all these sources into one place and replace the current methods of discovery with a “personal shopper” with a messaging/GUI mechanic, we have a “jukebot.” With the vast majority of what you’re interested in finally under one interface roof, the old ways will be superseded by something that will feel more like a back-and-forth conversation with a friend — supported by all available data from the likes of IMDb, Rotten Tomatoes, Facebook and Twitter — with seamless cloud-based, time-shifting, ad-skipping technology underneath it all. This bot for your television jukebox won’t just be easier to use, it’ll be less expensive, too. Here’s how the likes of Apple, Google, Sony (whose Vue service has made some of the most headway on this front) and possibly Amazon could go shopping to aggregate the content — and related live and on-demand rights — for $99: That $30 monthly savings is enough to pay for things like your , creating a better-and-cheaper service that will sell easily to the first 20-30 million consumers, with some segmentation and tiering to 50 million — which would mark a cross-over in subscriptions from MVPDs to SVOD. When that happens — as we’ve seen so many times in technology — the new normal will only gain further velocity. If things go the way of the $99 jukebot, here are some tea-leaf readings for the various players: The $99 jukebot is the alternative consumers deserve. Whoever gets there first moves themselves into the center of the most interesting and lucrative content business. Welcome, king.
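The back-of-the-envelope economics above come down to comparing today's spend, spread across several services, with a single aggregated price. The article gives only the $120-140 total and the $99 target, so the per-service line items below are hypothetical:

```python
# Hypothetical monthly line items, summing to roughly the article's
# $120-140 spread-across-services figure (these are not real prices).
current_spend = {
    "pay_tv_package": 90.0,
    "svod_service_a": 10.0,
    "svod_service_b": 12.0,
    "a_la_carte_rentals": 18.0,
}
jukebot_price = 99.0

total = sum(current_spend.values())      # what we pay today
savings = total - jukebot_price          # the roughly $30 left over
```

With these assumed numbers, the aggregated jukebot undercuts the fragmented spend by about $30 a month, which is the margin the author suggests could cover broadband or other services.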
|
The $799 VUZE virtual reality camera goes up for pre-order
|
Brian Heater
| 2,016
| 5
| 12
|
Virtual reality is expensive. High-end headsets like the Oculus Rift and HTC Vive will run you $599 and $799, respectively — and the cameras required to capture the content run several times that. At $799, the VUZE isn’t particularly cheap by most of our standards, but it’s certainly enough to qualify the virtual reality camera as “affordable,” particularly when contrasted with competitors like Nokia’s and Facebook’s cameras, which run in the tens of thousands of dollars. Developed by Israeli imaging company , the portable camera features eight cameras, positioned in pairs around the perimeter of its flat, square head. When combined, they’re able to shoot 4K 360-degree virtual reality video. The low price point is clearly a play at consumers — who have thus far had limited access to comparable technology — as is the camera’s point-and-shoot interface, which promises shooting with a single click. The camera comes bundled with a combo selfie-stick/tripod, carrying case and VUZE Studio, the company’s software suite that stitches together the individual cameras and offers filter and editing capabilities. The $799 VUZE camera officially opens up for pre-order today. It’s set to start shipping in October. As for who the camera is aimed at, Bin-Nu answers thusly, “
|
Why Britain is beating the U.S. at financial innovation
|
Jeff Lynn
| 2,016
| 5
| 13
| |
Trends in angel investing
|
Sam Bernards
| 2,016
| 5
| 13
|
“Tell me about your cap table?” I asked the founder of an early-stage startup. He was clearly passionate about his business, and had assembled a top-notch team to help him achieve his ambitious vision. But the grit and determination that helped him overcome the challenges of his current and former startups seemed to falter a bit as he considered his response. He had a dirty cap table and he knew it. With some reservation, he said, “Our initial angel investors own 60 percent of the company. I have 35 percent and the rest is split up between the team.” He then went on to explain that their aggressive valuation was based on the growth since the last round of angel investment, in which the above-market revenue multiple the angels had chosen set the precedent. I soon found out that this team had struggled to raise money from the other venture capital firms with which we like to syndicate — because they, too, were deterred by the conditions that the angel investors had created: small runway, fragmented cap table, high valuation and little strategic support. He described with some frustration how he felt stuck, and that his “angels” no longer fit the heavenly metaphor; rather, the situation felt more like hell. Compare and contrast this situation with a scenario that has become familiar because of its frequency within our portfolio of Peak Ventures-backed companies, in which an angel investor known to our team (and often an LP in our fund) introduces us to the founders of a company that he or she has backed (usually at an appropriate valuation for the stage) and we lead the next round. In this situation, the company has what I think is the best of both worlds: A passionate, helpful angel investor supported by the rocket fuel and capital connections of an institutional investor. The closer angel investors are to institutional funds, the better they can construct the terms of their deals to entice these funds to come onboard. 
As I reflected on these experiences, I wondered what was going on in the broader landscape of angel investing. To my delight, I discovered that has just released the latest version of their , which studies angel investing across the United States. I found some of the trends interesting. Six years ago, angel rounds diluted founders by 25 percent, on average. Look at the yellow line in the chart below to see that this gently fell over the years to land at 20 percent. What does this mean for entrepreneurs who are aiming for massive growth, but need initial capital to get things started? Use 20 percent as a benchmark to keep your cap table clean. An angel shouldn’t own a majority of your company unless they come on in a dedicated, operational role. Also, an angel should be worth more than just their money — they should add measurable value to your business. You can see this in the chart above, where the blue bars and green dotted lines show 67 percent growth in just one year. What is even more impressive to me, however, is the chart below, in which you see that the median valuation of angel-level deals is now the highest that the Halo Report has ever tracked. Reading between the lines, I believe there are two reasons for this: As a data point on angel groups rising in undercapitalized markets, check out the chart below showing the 12 most active angel groups. Notice how many of them are outside the traditional hubs of venture capital. So do angels make life heaven or hell for entrepreneurs? It seems from market trends that angels are becoming more sophisticated, more organized and more integrated into the capital ecosystem that supports early-stage startups. I think this is a good thing, especially when angel investors are tightly coupled with institutional investors.
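The 20 percent benchmark above translates into simple cap-table arithmetic: the fraction of the company sold in a priced round is the investment divided by the post-money valuation. A quick sketch with illustrative numbers:

```python
def dilution(investment, pre_money_valuation):
    """Fraction of the company sold in a priced round:
    investment / post-money valuation."""
    post_money = pre_money_valuation + investment
    return investment / post_money

# Illustrative: a $500k angel check at a $2M pre-money valuation
# sells 20% of the company -- the Halo Report's current average.
round_dilution = dilution(500_000, 2_000_000)

# A solo founder's stake after the round, starting from 100%.
founder_after = 1.0 * (1 - round_dilution)
```

By contrast, the 60 percent angel ownership the founder in the opening anecdote described implies terms far more aggressive than this benchmark, which is exactly what scared off later institutional investors.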
|
The importance of black boxes in an autonomous automotive future
|
Kristen Hall-Geisler
| 2,016
| 5
| 13
|
Right now, if you get into a crash, it’s usually going to be a driver’s fault — a human driver. Parts can fail and tires can burst, but about , depending on which study you read. But in the not-too-distant future, that clear-cut cause will evaporate as more vehicles take over more driving tasks, creating more and more data as they do. And that data gets recorded in a black box. As of September 1, 2014, all newly manufactured passenger vehicles are required to have a black box on board, though the majority of cars had the technology well before the deadline. It’s officially known as an event data recorder (EDR), and it records, on a loop, metrics such as speed, brake application, air bag deployment and seat belt use from the car’s sensors. According to , an EDR grabs about 5 seconds of data before the crash and just 1 second after. As sensors proliferate on vehicles, they are gathering more information. Szabolcs Szakacsits, founder and CTO of , said the software his company provides for black boxes in autonomous cars records data from 16 sensors simultaneously, adding information on things like tire pressure, camera images, radar data and driver profiles. So in the case of a crash, the black box will know where you put your seat and what radio stations you like. “There’s a convergence going on,” Szakacsits said in an email interview. “As vehicles are becoming more autonomous, more and more sensors are integrated inside the car, which requires the black box to record, store, and read the data, like distance to the next vehicle, road marks, traffic signs, lights, people, and other objects surrounding the car.” Tuxera uses a Flash file system integrated with the EDR to make sure the data recorded is consistent, even if the power gets cut during the crash. “That is why the software components used in automotive applications need to be more robust,” Szakacsits said. “It is critical that the data is well kept and fail safe. 
You can’t afford to have frame drops from the , for example, because that is critical information in case of accidents.” In the future, responsibility for crashes will likely be clear-cut once again, when autonomous cars can drive themselves completely. The human passengers likely won’t be paying attention to the road at all, and the cars themselves are predicted to crash less often — at least, that’s the dream. But just in case the future isn’t so shiny, Tuxera is developing the black box software that will be available for self-driving consumer cars by 2020.
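The on-a-loop recording an EDR does — keep the last few seconds of samples, then freeze shortly after a crash trigger — maps naturally onto a ring buffer. A minimal sketch; the sample rate and sensor fields are assumptions, and a real recorder logs many more channels to fail-safe storage:

```python
from collections import deque

class EventDataRecorder:
    """Toy EDR: retains ~5 s of samples before a crash trigger and
    ~1 s after, mirroring the pre/post windows described above.

    Assumes a fixed sample rate; real recorders capture channels
    like speed, braking, belt status and airbag deployment.
    """

    def __init__(self, sample_rate_hz=10, pre_s=5, post_s=1):
        # Ring buffer sized for the full pre + post crash window.
        self.buffer = deque(maxlen=sample_rate_hz * (pre_s + post_s))
        self.post_samples_left = sample_rate_hz * post_s
        self.triggered = False
        self.frozen = None

    def record(self, sample):
        if self.frozen is not None:
            return  # recording stopped; data preserved for retrieval
        self.buffer.append(sample)
        if self.triggered:
            self.post_samples_left -= 1
            if self.post_samples_left <= 0:
                self.frozen = list(self.buffer)  # snapshot the window

    def trigger(self):
        """Called on crash detection (e.g. airbag deployment)."""
        self.triggered = True

edr = EventDataRecorder(sample_rate_hz=10)
for t in range(100):               # 10 s of pre-crash driving
    edr.record({"t": t, "speed": 60})
edr.trigger()
for t in range(100, 120):          # post-crash samples; freezes after 1 s
    edr.record({"t": t, "speed": 0})
```

After the trigger, the frozen snapshot holds exactly the last 5 seconds before the crash plus 1 second after; everything older has already been overwritten by the loop.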
|
Fender’s first foray into headphones sounds great
|
Brian Heater
| 2,016
| 5
| 13
|
I was cynical at first. I mean, I still have some underlying (if not entirely well-founded) suspicions that is part of a bigger play to expand beyond its customary guitar and amp offerings. But if the FXA6 is, indeed, just the first step of some larger world domination plan, it’s an extremely solid one. I know I’m not the only one who had questions when the company announced its plans to get into the headphone business. After all, there’s certainly some precedent for respected brands reaching beyond their means by slapping a logo on some OEMed piece of hardware. On a phone call last week, CMO Evan Jones assured me that the company was doing nothing of the sort. The company sees the product as not only a logical next step in its professional music product line, but also a continuation of the audio work it’s invested in its amps over the years. The company has also made some new hires and expanded its operations in Nashville to accommodate that growth. That Fender insists on referring to the product as a “monitor” versus just plain old headphones is clearly an indication of its plans to market the product to professional musicians. On the call, however, Fender more or less used the words interchangeably. But all of that is secondary, really. The most important thing is, of course, how the product performs. And there’s definitely some good news on that front. The FXA6s sound stellar. The team the company assembled clearly knows a thing or two about building a pair of great-sounding headphones — something purchasing high-end audio company might have had a thing or two to do with. The headphones are exceptionally clear, even at high volumes (sorry mom). They do a great job staying true to the source material, not leaning too heavily on the bass to compensate for shortcomings on the other frequencies (as many consumer headphone manufacturers have in the past). 
The FXA6 do a great job isolating sounds, particularly at higher bit rates, making it easy to pick out the different instrumentation, rather than the muddy jumble that one often encounters on cheaper sets. As I write this, I’m able to pick out Carol Kaye’s bass work on from the orchestral cacophony around her. Fender’s done a nice job on the comfort side, as well. Of course, you’ll never get as snug a fit as you will on custom earphones, but the FXA6, when coupled with the right set of tips, fit nicely, the shells conforming to the contours of the outer ear. The company seems convinced that the headphones are more or less one-size-fits-all (well, 95 percent of all), but I certainly can’t promise that they’ll be quite as comfortable for all wearers. But when the fit is right, it also serves as passive ambient noise reduction, blocking out errant sounds by forming a tight seal around the ear. The braided cable is detachable from the monitors — a decided plus in a pricey pair of headphones. I can say from experience that the cable is pretty consistently the first part of any headphones to flake out on me, so it counts for a lot knowing that it can be swapped out should anything happen. It’s worth noting here, however, that there’s no mic on the cable that ships with the FXA6, so you won’t be using it to make any phone calls. Though I suppose that just comes with the territory for a pair of monitors aimed at an audience of musical professionals. That last bit brings us to the next key bit of information. At $400, the FXA6 are well out of the reach of most consumers. Of course, any user who has shopped around for a pair of Ultimate Ears will happily let you know that those monitors get even pricier. It’s hard to say if there’s enough of an open market for the FXA6s to really succeed at that price point. Fender is no doubt banking quite a bit on brand name recognition to help stake its claim in the market. 
At the very least, however, the company’s first foray into a new space in decades doesn’t disappoint.
|
Robots won’t just take jobs, they’ll create them
|
Mynul Khan
| 2,016
| 5
| 13
|
Robots and artificial intelligence have come a long way since a Roomba entered your home to vacuum your floor and Siri gave you advice on the best Italian restaurants in your parents’ neighborhood. Cars drive themselves. A revolution is underway. According to a widely cited study, half of American jobs could be automated within the next two decades. The study identified transportation, logistics and administrative work as the most vulnerable to automation. Others assert it is only a matter of time before robots replace teachers, travel agents, interpreters and a host of other professions. With the prospect of so many jobs disappearing, many futurists and economists are considering the possibility of a jobless future. Their predictions of what this would look like usually center around two scenarios: a dystopia where humans no longer have jobs or incomes, leading to widespread poverty and unrest, or a utopia where governments give basic incomes to their citizens, who will then be able to lead more fulfilling lives. I think it’s time to look at this in a different way: robots in the workforce present an opportunity to stimulate job growth and new types of work. Robots will not merely take jobs; they will also create them. While technology advances at an unprecedented rate, our era is not the first to undergo significant technological change. From the invention of the wheel to Gutenberg’s printing press, humans have innovated and adapted to new technologies throughout history. And for as long, there have been concerns about how new technologies would affect laborers. In each case, these technologies led to new industries and new jobs. The invention of the printing press in 1440 allowed the mass production of books, leading to new jobs to manufacture, transport, market and sell books. Print shops sprung up. The fall in printing costs led to newspapers. Yes, the printing press put scribes out of business, but new jobs were soon developed to take their place. For more recent examples, consider agriculture and textiles. In the 1800s, 80 percent of American jobs were on farms. Today, only a small fraction are. 
Yet as we know, the mechanization of agriculture didn’t ruin the economy. In fact, it continues today, as new technologies make farming ever more efficient. Around the same time, the textile industry underwent significant technological changes. With the Industrial Revolution came power looms and other mechanical equipment that reduced the need for labor in producing textiles. Afraid of losing their jobs, Luddites, a group of textile workers and self-employed weavers, protested the use of such machines in England, even destroying equipment and inciting a rebellion that required military force to suppress. The fact that calling someone a Luddite today is an insult shows how unfounded their concerns were. We need only look to our past for clues to our future. Yes, robots will do much of the work humans do today, impacting the human workforce and the type of work humans do. But as history shows us, that doesn’t mean there won’t be any work left for humans. The American labor force has weathered dramatic changes in the past two hundred years. It is resilient and adaptable. We can get a sense of what the jobs of the future will look like by looking at robots’ weaknesses and humans’ strengths. Robots do not yet have the ability to perform complex tasks like negotiating or persuading, and are not as proficient in generating new ideas as they are at solving problems. This means jobs requiring creativity, emotional intelligence and social skills are unlikely to be filled by robots any time soon. It’s likely our managers, nurses, artists and entrepreneurs will remain human. We all know how great it is when technology works — and how frustrating it is when it doesn’t. Even sophisticated technology companies haven’t eliminated their human customer support teams, because when something goes wrong, it is often a human who needs to fix it. There will always be a need for on-site, human labor and expertise when we deal with machines. Robots will have glitches, need updates and require new parts. 
As we rely more and more on mechanized systems and automation, we will require more people with technical skills to maintain, replace, update and fix these systems and hardware. We see this starting already. IT departments have sprung into existence because of digital technologies. Network administrator and web developer are job titles that didn’t exist 30 years ago. Technology has not only created departments and jobs within companies, but created the need for entirely new companies and businesses. The demand for technical skills will only increase with an increase in automation: Someone needs to fix the robot when a part is faulty. Driverless cars will still require mechanics. New jobs will be created in science, technology, engineering and mathematics (STEM) fields like nanotechnology and robotics. A 2011 study found that one million industrial robots directly created nearly three million jobs. Of the six countries examined in the study, five saw their unemployment rates go down as the number of robots used went up. This study showed job creation will extend beyond the STEM fields. The authors identified six industries where employment was likely to increase directly because of robots: automotive, electronics, renewable energy, skilled systems, robotics and food and beverage. Not everyone will need to be an engineer to find work created by robots. We do not need to become modern Luddites, afraid of losing our work and place in society to machines. Rather, we can welcome robots, knowing they will make our lives easier, as technology always does, and knowing that by their very existence, robots will create new jobs. I am looking forward to a future where robots stimulate job growth and create exciting work we can’t even imagine today.
|
Why incident response plans fail
|
Susan Peterson
| 2,016
| 5
| 13
|
Following a cyber attack on critical infrastructure, emotions run high and the clock starts ticking. Suddenly what appears to be a well-structured incident response (IR) plan on paper can turn into a confusing “storming session” around who owns what. Rather than identifying, analyzing and eradicating the threat, organizations can easily become entangled in processes hindering response time and further endangering operations. The longer the “dwell time,” or the time an attacker remains within the system, the more damage the attacker can cause, whether it be data loss, impacts to operations or physical damage to assets. According to a survey done by the SANS Institute, 50 percent of organizations took two days or longer to detect breaches, and 7 percent didn’t know the length of an attacker’s dwell time. While many industrial organizations have an IR plan in place, very few run through a routine simulation exercise of this plan. Simulated exercises reveal various incorrect assumptions made throughout the IR process and identify gaping holes where there are missing contacts or protocols that are critical for a successful IR program. Additionally, organizations are beginning to see cybersecurity ramifications beyond damage to assets, data and reputation. At least one ratings agency has threatened to downgrade the credit ratings of banks that have poor cybersecurity, and more state and federal regulations are being passed that require organizations to implement reasonable security safeguards. These “reasonable security safeguards” take into account IR plans and dwell time after incidents. It’s not enough for companies to have plans in place; they must demonstrate that their plans are effective in mitigating risks around cyber incidents. In a recent IR simulation held by an oil and gas company, I was able to participate in the capacity of an industrial controls supplier. The simulation challenged assumptions on roles and responsibilities. In the oil and gas industry, the supplier ecosystem is complex. 
Using an offshore project as an example, a company that provides a control system for an offshore platform may first sell the system to an engineering procurement and construction company. The control system may then be operated by a fuel services company and ultimately owned by an oil producer. The organizational complexity in upstream operations is massive. As a result, it’s critical for these organizations to break down any assumptions about IR, and both assign and confirm ownership as part of this process. While every assumption should be tested in an IR exercise, there are a few top considerations that must be transparent for IR to be truly effective. These include the following. When an incident occurs, key stakeholders want to be aware of what’s happening and how the situation is being addressed. Keeping executives in the know and managing expectations around the line of communication is an important part of an IR plan. There should be an assigned “incident captain” who can quickly alert the necessary parties and inform them of immediate next steps. This is particularly crucial when an incident impacts IT, field teams, multiple business units, global regions and suppliers. Time is of the essence, and a simulated exercise ensures the communication plan is clear, accurate and has the necessary contact information at the ready to bring awareness to all stakeholders. When it comes to managing suppliers in an IR plan, there are a number of questions or assumptions that should be verified during a simulated exercise. What role do your suppliers play in the event of an attack? Do they have a contractual agreement that outlines their role in IR and disclosure around cyber incidents? Do they install software that was purchased from another vendor? Do suppliers know what software you have in operation? Do they run simulated testing of software updates on machines prior to actual implementation? 
This last question is critical for operational technology environments that don’t regularly shut down and restart for software updates. Further, according to a recent study, few companies are confident they know the exact number of vendors accessing their systems; the average company’s network is accessed by 89 different vendors every week. An IR plan should incorporate all necessary information about who has access to networks and what role suppliers will play in response to cyber attacks, including how they should be communicated with and how they can help mitigate the risk. If a control system’s human machine interface (HMI) went down as the result of a cyber attack in an exploration and production (E&P) operation, which systems could continue to operate despite the handicapped position of the industrial automation system? When distributed control systems (DCS) are down, an organization can operate machinery from the control panel. This isn’t a simple solution, however, and leadership must consider whether there is an operator on every shift who is qualified to operate a generator control through the panel and not through the DCS. Organizations must also consider whether there is something they might lose that requires connectivity to operate, causing disruption to operations. Running through scenarios as part of an IR exercise will help companies determine what type of operational flexibility and resiliency they have, and what steps they must take to improve upon it. Attacks are part of today’s connected environment, so IR is not so much about the attack as about resiliency. Cybersecurity practices need to be collaborative and open, not only within an organization but across industries. Executives should be thinking about how they inventory assets and what type of services they would require from manufacturers to deal with a cyber incident. They must communicate a clear picture to the board of what is required and how this plan will be executed efficiently. 
Running through an IR exercise helps raise awareness about cybersecurity within an organization and creates a resilient business culture that is prepared for anything.
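The “dwell time” metric discussed above is simply the elapsed time between compromise and detection. A minimal sketch in Python, using hypothetical timestamps (the dates and the `dwell_time_days` helper are illustrative, not from the article):

```python
from datetime import datetime

def dwell_time_days(compromised_at: datetime, detected_at: datetime) -> float:
    """Dwell time: how long an attacker remained in the system before detection."""
    return (detected_at - compromised_at).total_seconds() / 86400

# Hypothetical incident timestamps, for illustration only.
compromised = datetime(2016, 5, 1, 8, 0)
detected = datetime(2016, 5, 4, 8, 0)

dwell = dwell_time_days(compromised, detected)
print(f"Dwell time: {dwell:.1f} days")    # → Dwell time: 3.0 days
print("Two days or longer:", dwell >= 2)  # → Two days or longer: True
```

An incident like this one would fall into the 50 percent of breaches the SANS survey found took two days or longer to detect.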
|
Playboy pitches advertisers on its nudity-free future
|
Anthony Ha
| 2,016
| 5
| 13
|
Playboy looked both backwards and forwards during its first NewFronts presentation today, where it pitched advertisers on its plans for what CEO Scott Flanders described as “the new, multi-platform Playboy.” Now, when I say “Playboy event,” you’ve probably got some preconceived ideas about what’s involved. Turns out, however, that it wasn’t all that different from plenty of other corporate digital media presentations, with lots of talk about branded content, millennials and “brand ambassadors.” Which makes sense, since Playboy took nudes off its website in 2014, and has since removed them from the magazine as well. (To be clear, it still includes pictures of attractive women, but now those pictures are a bit more safe for work.) According to the Playboy executives who spoke onstage, the change has helped the website grow by 400 percent and also lowered the average age of the reader from 47 to 31 years old. In some ways, it seems like Playboy wants to be the next Vice — programs discussed at NewFronts included lots of documentaries and lifestyle programming aimed at the aforementioned millennials. For example, there’s a documentary series from Yoonj Kim, and House of Waris, a dinner-and-discussion show hosted by Waris Ahluwalia with guests like actress Natasha Lyonne and Sonic Youth’s Kim Gordon. The company also announced that it’s forming a division called Playboy Studios, which will focus on creating marketing content for brands. (It seems like every media company needs its own content studio these days.) But without the nude centerfolds for which the magazine was best known, what sets Playboy apart in 2016? Ahluwalia said Playboy aims to be a wingman of sorts, helping men navigate “living the good life.” He also said that the publication is defined by three broad principles — progress, freedom and exploration. And the speakers consistently emphasized the idea that Playboy has had more to offer than sex appeal. After all, the magazine has published serious journalism and fiction throughout its history. 
Sex wasn’t completely absent from the presentations — sometimes it was mentioned playfully, like when Ahluwalia described branded content as “the proverbial aphrodisiac of marketing.” And sometimes it was more substantial, like in the upcoming Sex and Relationships Index, which aims to paint an updated picture of how millennials look at these issues. Oh, and those brand ambassadors? They were actually Playboy Playmates, who stayed off-stage during the presentation but came out to mingle at the end. And for all the challenges that Playboy faces in reinventing itself, Head of Branded Content Hugh Garvey noted that the publication still has 97 percent unaided global brand awareness — in other words, 97 percent of the global population knows what Playboy is. “We wrote the story of the good life once, and now we’re doing it again,” he said.
|
Acacia soars 35% in second tech IPO of the year
|
Katie Roof
| 2,016
| 5
| 13
|
Acacia Communications listed on the public markets today and outperformed expectations. After pricing its offering at the top of the range at $23 per share and raising $103.5 million, the company soared to above $30 in its first day of trading, yielding a market cap of above $1 billion. “The IPO market has been pretty stagnant and has been a difficult environment.” Acacia listed on the Nasdaq.
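The offering math above can be sanity-checked with quick arithmetic; note that the share count below is implied by the stated proceeds and offer price, not reported in the article:

```python
offer_price = 23.00       # priced at the top of the range, per share
gross_proceeds = 103.5e6  # total raised in the offering

# Shares sold is implied by proceeds / price (inferred, not stated).
shares_sold = gross_proceeds / offer_price
print(f"Shares sold: {shares_sold:,.0f}")  # → Shares sold: 4,500,000

# A 35% first-day pop from the offer price lands just above $30:
first_day_price = offer_price * 1.35
print(f"Implied price: ${first_day_price:.2f}")  # → Implied price: $31.05
```

A price above $31 is consistent with both the 35% gain in the headline and the “above $30” close described in the article.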
|
GearVR update brings much-needed native viewcapture tool to compatible phones
|
Devin Coldewey
| 2,016
| 5
| 13
|
Here’s the thing: when your eyes are synthesizing information from multiple screens into a three-dimensional virtual space, the word “screenshot” fails to describe what it is doing almost as badly as it fails to document what you are experiencing. So I’m calling it viewcapture. Another, separate problem is that VR headsets and platforms right now don’t offer much in the way of viewcapping options out of the box; there are third-party solutions, of course, but ideally one should be able to make a record of the VR experience as easily as I can screenshot these words. It’s not quite there yet, but Oculus has pushed out an update (1.17.7) to GearVR-compatible phones that at least gets the ball rolling. Users will now find capture tools in the “Utilities” section of the universal menu (accessed by holding the back button, but you knew that). Natively captured shots will be humble single-eye, 1024×1024 affairs, and you’ll need to record audio separately for video — so for some, this built-in tool may not be sufficient. But hey, it’s a budget solution for budget VR. And it’ll only get better. This isn’t a brand-new feature: developers have had access to it, and it’s officially been on the update roadmap for months. But the 1.17.7 patch makes it generally available. The big Catch-22 with spreading VR content remains, however: you can’t try it without a headset, and no one who hasn’t tried it wants a headset. That’ll resolve itself in time, however, as the technology filters down from the early adopters — or rather, filters up from the many basements converted into VR caves.
|
Arizona’s Governor Ducey signs SB 1350 into law, prohibiting the ban of short-term rentals
|
Stefan Etienne
| 2,016
| 5
| 13
|
Governor Doug Ducey of Arizona has signed SB 1350 into law, which prohibits cities and municipalities from banning the listing and use of short-term rentals like Airbnb, HomeAway, and others. Signing the law isn’t solely about gaining political points, it’s also another step in the Arizona governor’s stated agenda of encouraging a sharing economy. From the governor’s perspective, the law helps travelers who, instead of turning to hotel chains, can inject the profits of tourism directly into the local economy by paying locals. It’s a win-win situation, essentially. Advocates for the lobbying group the Travel Technology Association support the new law. In a statement, Matt Kiessling, leader of short-term rental policy for the Travel Technology Association (a trade association with members ranging from Airbnb to Expedia), commented on the ratification of SB 1350: “With SB 1350, Arizona has proven itself to be forward-thinking when it comes to public policy, willing to embrace the peer-to-peer economy while also balancing the interests of all stakeholders,” and that, “This bill truly is a win for everyone – it ensures that short-term rentals remain an option for travelers to Arizona and provides enormous economic benefits to local communities, while streamlining the collection of tax revenue.” So far, this is one of the first major steps towards protecting, and essentially promoting, the use of short-term rentals in the U.S., while also laying out a framework should other states decide to follow suit.
|
Three ways tech is reinventing a surprising sector
|
Kevin Barenblat
| 2,016
| 5
| 13
|
More than 1 million users. Sixteen billion monthly page views. Ninety-nine percent user engagement. If these sound like stats from tech companies, you’re right. Except these companies are nonprofits. In Steve Case’s new book, The Third Wave, he argues that this next generation of the Internet will transform major industry sectors and become integrated into everything we do. We are already seeing the tremendous impact of software and the Internet on one of the least discussed areas — the nonprofit sector. With the growing ubiquity of mobile phones and the Internet, combined with plummeting connectivity costs and, in some cases, free infrastructure, it’s no surprise that organizations focused on scaling impact are integrating technology into the core of their solution. But you may be surprised to hear how deep technology’s impact has already been on the nonprofit sector.
Medic Mobile
Traditional nonprofits often run into barriers when trying to grow quickly. Because of its ubiquity and ability to connect people, technology provides the power to scale. Medic Mobile, a communication and support platform for health workers in hard-to-reach communities around the world, was born when founder Josh Nesbit noticed he had stronger cell reception in a rural African village than he did on Stanford’s campus, where he was a student. Nesbit had his “a-ha” moment when he realized remote community health workers could harness brand new mobile infrastructure to connect with clinics. With cell phones getting cheaper, more affordable pre-paid data plans and increased mobile coverage, he realized he could change medical care in remote areas. Medic Mobile arms community health workers with a mobile app that allows them to register pregnancies, track disease outbreaks, communicate about emergencies and keep inventory of critical medicines. By capitalizing on mobile connectivity, Medic Mobile has been able to improve healthcare for more than 8 million people in 23 countries in just 5 years. 
In addition to technology’s ubiquity, crowdsourcing enables an ability to scale previously missing in the nonprofit sector. Wikipedia’s crowdsourced model allowed it to expand tremendously in a relatively short period of time; it is now one of the strongest content resources on the Internet. CareerVillage.org also utilizes crowdsourcing, but to connect low-income high school students to professional mentors. Its online platform allows 10,000 professionals from a range of companies to answer students’ career questions. According to CareerVillage.org founder Jared Chung, a crowdsourcing model allowed the organization to completely remove all the traditional barriers to scalability for mentorship programs. Chief among those were the huge commitments volunteers need to make to participate in 1:1 mentorship programs. With a paid staff of only three, CareerVillage.org has provided more than 1 million students advice on how to start navigating their career path, and mentor engagement is strong, with a 99.9 percent answer rate.
CareMessage
Tech companies that scale well typically have one of a few business models, such as Software as a Service (SaaS) or marketplaces. We’re now seeing these business models infuse the nonprofit sector, enabling these organizations to run like tech companies. The SaaS delivery model works well for nonprofits, as the marginal cost of helping an additional person is near zero. For example, CareMessage provides a platform for healthcare organizations to streamline care and strengthen self-health management. The nonprofit built its messaging and cloud platform to address the need for greater health literacy among low-income communities. Compared to the time and cost of individually educating patients on prevention and post-treatment techniques, CareMessage’s solution enables health centers to treat their patients more cost-effectively. We’re seeing online marketplaces blossom in the nonprofit sector as well. 
SIRUM is an online marketplace for the $5 billion in unexpired, perfectly good medication that would otherwise be destroyed each year. SIRUM’s marketplace allows health facilities, manufacturers, wholesalers and pharmacies to donate unused drugs for cheap redistribution, targeting the 50 million Americans who could not otherwise afford these medicines. The SIRUM marketplace has already helped redistribute $5 million in medicines, providing enough medicine for almost 150,000 people who otherwise could not afford it.
Khan Academy
In the old days, startups required large teams and $1 million budgets to get started. But now, the same open-source tools and inexpensive building blocks that enable small, scrappy startup teams also enable a new breed of nonprofit. As philanthropy constantly searches for optimal impact, would-be nonprofits were often non-starters, stuck in a loop: proving impact requires a product, building a product requires money, and money requires proving impact, which requires a product. Today’s building blocks enable anyone to get started for close to free. Khan Academy began as a series of YouTube videos. With hosting provided by Google, Sal Khan was able to keep his costs low while growing his initial following for years before focusing on fundraising for Khan Academy. Even today, Khan Academy can cost-effectively scale because of YouTube’s infrastructure. While in her first year at Stanford Graduate School of Business, Heejae Lim founded TalkingPoints, which encourages student achievement by bringing classroom feedback and learning into the home through parents. With $5,000 of her student loan money and low-cost infrastructure tools like Google Voice, Lim was able to build a multilingual communications tool connecting teachers and families. These tools allowed her to launch her initial product much quicker than if she had built each component from scratch, which would have required a larger development team and additional funding. 
Lim says building software by leveraging existing tools and integrating “old-school” SMS messages was critical in allowing TalkingPoints to reach its target demographic. Tech innovations, largely developed to address business problems, are now how nonprofits address social problems. Technology enables nonprofits to scale faster, adopt new business models and build upon the work of others. Right now I see the nonprofit sector at the beginning of a dramatic change. Expect a future where all nonprofits leverage technology to maximize impact, addressing social issues where tech never before played a role.
|